Dataset columns: paper_name (string, 11 to 170 chars), text (string, 8.07k to 307k chars), summary (string, 152 to 6.16k chars), paper_id (string, 43 chars).
Classification-Based Anomaly Detection for General Data
1 INTRODUCTION. Detecting anomalies in perceived data is a key ability for humans and for artificial intelligence. Humans often detect anomalies to give early indications of danger or to discover unique opportunities. Anomaly detection systems are used by artificial intelligence to discover credit card fraud, detect cyber intrusion, alert predictive maintenance of industrial equipment, and discover attractive stock market opportunities. The typical anomaly detection setting is a one-class classification task, where the objective is to classify data as normal or anomalous. The importance of the task stems from being able to raise an alarm when detecting a pattern different from those seen in the past, therefore triggering further inspection. This is fundamentally different from supervised learning tasks, in which examples of all data classes are observed. There are different possible scenarios for anomaly detection methods. In supervised anomaly detection, we are given training examples of normal and anomalous patterns. This scenario can be quite well specified; however, obtaining such supervision may not be possible. For example, in cyber security settings we will not have supervised examples of new, unknown computer viruses, making supervised training difficult. On the other extreme, fully unsupervised anomaly detection obtains a stream of data containing normal and anomalous patterns and attempts to detect the anomalous data. In this work we deal with the semi-supervised scenario. In this setting, we have a training set of normal examples (which contains no anomalies). After training the anomaly detector, we detect anomalies in the test data, containing both normal and anomalous examples. This supervision is easy to obtain in many practical settings and is less difficult than the fully unsupervised case. Many anomaly detection methods have been proposed over the last few decades. They can be broadly classified into reconstruction-based and statistically based methods. Recently, deep learning methods based on classification have achieved superior results. Most semi-supervised classification-based methods attempt to solve anomaly detection directly, despite only having normal training data. One example is Deep-SVDD (Ruff et al., 2018), which performs one-class classification using a learned deep feature space. Another type of classification-based method is self-supervised, i.e., methods that solve one or more classification-based auxiliary tasks on the normal training data, which has been shown to be useful for solving anomaly detection, the task of interest (e.g., Golan & El-Yaniv, 2018). Self-supervised classification-based methods have been proposed with the objective of image anomaly detection, but we show that by generalizing the class of transformations they can apply to all data types. In this paper, we introduce a novel technique, GOAD, for anomaly detection which unifies current state-of-the-art classification-based methods that use normal training data only. Our method first transforms the data into M subspaces, and learns a feature space such that inter-class separation is larger than intra-class separation. For the learned features, the distance from the cluster center is correlated with the likelihood of anomaly. We use this criterion to determine whether a new data point is normal or anomalous.
We also generalize the class of transformation functions to include affine transformations, which allows our method to generalize to non-image data. This is significant, as tabular data is probably the most important data type for applications of anomaly detection. Our method is evaluated on anomaly detection on image and tabular datasets (cyber security and medical) and is shown to significantly improve over the state-of-the-art. 1.1 PREVIOUS WORKS. Anomaly detection methods can be generally divided into the following categories: Reconstruction Methods: Some of the most common anomaly detection methods are reconstruction-based. The general idea behind such methods is that every normal sample should be reconstructed accurately using a limited set of basis functions, whereas anomalous data should suffer from larger reconstruction costs. The choice of features, basis and loss functions differentiates between the different methods. Some of the earliest methods use nearest neighbors (Eskin et al., 2002), low-rank PCA (Jolliffe, 2011; Candès et al., 2011) or K-means (Hartigan & Wong, 1979) as the reconstruction basis. More recently, neural networks were used (Sakurada & Yairi, 2014; Xia et al., 2015) for learning deep basis functions for reconstruction. Another set of recent methods (Schlegl et al., 2017; Deecke et al., 2018) uses GANs to learn a reconstruction basis function. GANs suffer from mode collapse and are difficult to invert, which limits the performance of such methods. Distributional Methods: Another set of commonly used methods is distribution-based. The main theme in such methods is to model the distribution of normal data. The expectation is that anomalous test data will have low likelihood under the probabilistic model while normal data will have higher likelihood. Methods differ in the features used to describe the data and the probabilistic model used to estimate the normal distribution. Some early methods used Gaussian or Gaussian mixture models. Such models will only work if the data under the selected feature space satisfies the probabilistic assumptions implied by the model. Another set of methods used non-parametric density estimation methods such as kernel density estimation (Parzen, 1962). Recently, deep learning methods (autoencoders or variational autoencoders) were used to learn deep features which are sometimes easier to model than raw features (Yang et al., 2017). DAGMM, introduced by Zong et al. (2018), learns the probabilistic model jointly with the deep features, thereby shaping the feature space to better conform with the probabilistic assumption. Classification-Based Methods: Another paradigm for anomaly detection is separating the regions of space containing normal data from all other regions. An example of such an approach is One-Class SVM (Scholkopf et al., 2000), which trains a classifier to perform this separation. Learning a good feature space for performing such separation is done both by classic kernel methods and by the recent deep learning approach (Ruff et al., 2018). One of the main challenges in unsupervised (or semi-supervised) learning is providing an objective for learning features that are relevant to the task of interest. One method for learning good representations in a self-supervised way is by training a neural network to solve an auxiliary task for which obtaining data is free or at least very inexpensive.
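To make the reconstruction-based idea above concrete, the following minimal sketch scores test points by their reconstruction error under a low-rank PCA basis fit to normal data only. It illustrates the general principle rather than any specific cited method; all names and the choice of k are illustrative.

```python
import numpy as np

def pca_anomaly_scores(X_train, X_test, k=10):
    """Reconstruction-based scoring: project onto the top-k principal components
    of the normal training data and use the squared reconstruction error as the score."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    V = Vt[:k].T                       # top-k principal directions, shape (d, k)
    Z = (X_test - mu) @ V              # project test points onto the normal subspace
    X_rec = Z @ V.T + mu               # reconstruct from the low-rank basis
    return np.linalg.norm(X_test - X_rec, axis=1) ** 2   # higher => more anomalous
```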
Auxiliary tasks for learning high-quality image features include: video frame prediction (Mathieu et al., 2016), image colorization (Zhang et al., 2016; Larsson et al., 2016), and puzzle solving (Noroozi & Favaro, 2016), i.e., predicting the correct order of randomly permuted image patches. Recently, Gidaris et al. (2018) used a set of image processing transformations (rotations by 0, 90, 180 and 270 degrees around the image axis) and predicted the true image orientation to learn high-quality image features. Golan & El-Yaniv (2018) have used similar image-processing task prediction for detecting anomalies in images. This method has shown good performance on detecting images from anomalous classes. In this work, we overcome some of the limitations of previous classification-based methods and extend the applicability of self-supervised methods to general data types. We also show that our method is more robust to adversarial attacks. 2 CLASSIFICATION-BASED ANOMALY DETECTION. Classification-based methods have dominated supervised anomaly detection. In this section we analyse semi-supervised classification-based methods. Let us assume all data lies in the space R^L (where L is the data dimension). Normal data lie in the subspace X ⊂ R^L. We assume that all anomalies lie outside X. To detect anomalies, we would therefore like to build a classifier C, such that C(x) = 1 if x ∈ X and C(x) = 0 if x ∈ R^L \ X. One-class classification methods attempt to learn C directly as P(x ∈ X). Classical approaches have learned a classifier either in input space or in a kernel space. Recently, Deep-SVDD (Ruff et al., 2018) learned end-to-end to i) transform the data to an isotropic feature space f(x), and ii) fit the minimal hypersphere of radius R and center c0 around the features of the normal training data. Test data is classified as anomalous if the following normality score is positive: ‖f(x) − c0‖² − R². Learning an effective feature space is not a simple task, as the trivial solution f(x) = 0 for all x results in the smallest hypersphere; various tricks are used to avoid this possibility. Geometric-transformation classification (GEOM), proposed by Golan & El-Yaniv (2018), first transforms the normal data subspace X into M subspaces X_1 .. X_M. This is done by transforming each image x ∈ X using M different geometric transformations (rotation, reflection, translation) into T(x, 1) .. T(x, M). Although these transformations are image-specific, we will later extend the class of transformations to all affine transformations, making this applicable to non-image data. They set an auxiliary task of learning a classifier able to predict the transformation label m given the transformed data point T(x, m). As the training set consists of normal data only, each sample is x ∈ X and the transformed sample is in ∪_m X_m. The method attempts to estimate the following conditional probability:

$$P(m'\,|\,T(x,m)) = \frac{P(T(x,m) \in X_{m'})\,P(m')}{\sum_{\tilde{m}} P(T(x,m) \in X_{\tilde{m}})\,P(\tilde{m})} = \frac{P(T(x,m) \in X_{m'})}{\sum_{\tilde{m}} P(T(x,m) \in X_{\tilde{m}})} \quad (1)$$

where the second equality follows by design of the training set, in which every training sample is transformed exactly once by each transformation, leading to equal priors. For anomalous data x ∈ R^L \ X, by construction of the subspace, if the transformations T are one-to-one, it follows that the transformed sample does not fall in the appropriate subspace: T(x, m) ∈ R^L \ X_m.
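As an illustration of the Deep-SVDD-style normality score described above, the following sketch assumes the learned feature map f is already available (features are passed in precomputed). Estimating the center as the feature mean and the radius as a high quantile of training distances is a practical simplification for this sketch, not the paper's exact procedure.

```python
import numpy as np

def svdd_center_radius(train_feats, quantile=0.95):
    """Estimate the hypersphere center c0 and radius R from normal training features.
    c0 is taken as the feature mean and R as a high quantile of training distances,
    a simple surrogate for the minimal enclosing hypersphere."""
    c0 = train_feats.mean(axis=0)
    dists = np.linalg.norm(train_feats - c0, axis=1)
    R = np.quantile(dists, quantile)
    return c0, R

def svdd_score(x_feat, c0, R):
    """Positive score => flagged as anomalous:  ||f(x) - c0||^2 - R^2 > 0."""
    return np.sum((x_feat - c0) ** 2) - R ** 2
```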
GEOM uses P(m|T(x,m)) as a score for determining whether x is anomalous, i.e., whether x ∈ R^L \ X. GEOM gives samples with low probabilities P(m|T(x,m)) high anomaly scores. A significant issue with this methodology is that the learned classifier P(m'|T(x,m)) is only valid for samples x ∈ X of the kind found in the training set. For x ∈ R^L \ X we should in fact have P(T(x,m) ∈ X_{m'}) = 0 for all m = 1..M (as the transformed x is not in any of the subsets). This makes the anomaly score P(m'|T(x,m)) have very high variance for anomalies. One way to overcome this issue is by using examples of anomalies x_a and training P(m|T(x,m)) = 1/M on anomalous data. This corresponds to the supervised scenario and was recently introduced as Outlier Exposure (Hendrycks et al., 2018). Although obtaining such supervision is possible for some image tasks (where large external datasets can be used), this is not possible in the general case, e.g., for tabular data, which exhibits much more variation between datasets.
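A rough sketch of the GEOM-style scoring rule discussed above: the trained classifier, the transformation function, and the aggregation by mean negative log-probability are all placeholders here, since the exact score used by Golan & El-Yaniv (2018) differs in detail.

```python
import numpy as np

def geom_anomaly_score(x, transform, predict_proba, M):
    """Apply each of the M transformations, ask the classifier for the probability
    of the *correct* transformation label, and aggregate. A low probability of the
    true label yields a high anomaly score."""
    log_probs = []
    for m in range(M):
        t_x = transform(x, m)          # T(x, m), placeholder transformation function
        p = predict_proba(t_x)         # vector of P(m' | T(x, m)), length M
        log_probs.append(np.log(p[m] + 1e-12))
    return -np.mean(log_probs)         # higher => more anomalous
```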
This paper proposes a novel approach to classification-based anomaly detection for general data. Classification-based anomaly detection uses auxiliary tasks (transformations) to train a model to extract useful features from the data. This approach is well known for image data, where auxiliary tasks such as classifying rotated or flipped images have been demonstrated to work effectively. The paper generalizes the task by using the affine transformation y = Wx + b. A novel distance-based classification is also devised to learn the model in such a way that it generalizes to unseen data. This is achieved by modeling each auxiliary-task subspace by a sphere and by using the distance to the center in the loss function. The anomaly score then becomes the product of the probabilities that the transformed samples are in their respective subspaces. The paper compares against state-of-the-art methods on both CIFAR-10 and four non-image datasets; the proposed method substantially outperforms them on all datasets. A section is devoted to exploring the benefits of this approach under adversarial attacks using PGD. It is shown that random transformations (implemented with the affine transformation and a random matrix) increase the robustness of the models by 50%. Another section is devoted to studying the effect of contamination (anomalous data in the training set); the approach is shown to degrade more gracefully than DAGMM on KDDCUP99. Finally, a section studies the effect of the number of tasks on performance, showing that beyond a certain number of tasks (which is probably problem-dependent), the accuracy stabilizes.
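The summary above can be illustrated with a small sketch of the scoring step: random affine task transformations, a learned embedding, per-task centers, and a softmax over negative squared distances. The embedding and centers are assumed to already be given by training, and the exact GOAD loss and score may differ from this simplified version.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_affine_transforms(M, d_in, d_out):
    """Random affine task transformations y = W x + b, applicable to tabular data."""
    return [(rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)) for _ in range(M)]

def goad_score(x, transforms, embed, centers, eps=1e-12):
    """Distance-based score as sketched from the summary: for each transformation m,
    a softmax over negative squared distances to the per-task centers approximates
    P(T(x, m) in X_m'); the anomaly score sums -log of the correct-task probability."""
    score = 0.0
    for m, (W, b) in enumerate(transforms):
        z = embed(W @ x + b)                             # learned feature of T(x, m)
        d2 = np.array([np.sum((z - c) ** 2) for c in centers])
        d2 = d2 - d2.min()                               # numerical stabilization
        p = np.exp(-d2) / (np.exp(-d2).sum() + eps)      # softmax over -distance^2
        score -= np.log(p[m] + eps)
    return score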
SP:5674e8decbf353c9e5590e5c85ee5b8397a5db08
Classification-Based Anomaly Detection for General Data
Review: The paper proposes a technique for anomaly detection. It presents a novel method that unifies the current classification-based approaches to overcome generalization issues and outperforms the state of the art. This work also generalizes to non-image data by extending the transformation functions to include random affine transformations. Many important applications of anomaly detection are based on tabular data, so this is significant. The "normal" data is divided into M subspaces via M different transformations; the idea is then to learn a feature space, using a triplet loss, that yields supervised clusters with low intra-class variation and high inter-class variation. A score is computed on the test samples (using probabilities based on the learnt feature space) to obtain their degree of anomalousness. The intuition behind this self-supervised approach is that learning to discriminate between many types of geometric transformations applied to normal images can help to learn cues useful for detecting novelties.
SP:5674e8decbf353c9e5590e5c85ee5b8397a5db08
Context Based Machine Translation With Recurrent Neural Network For English-Amharic Translation
1 INTRODUCTION. Context-based machine translation (CBMT) is a phrase-based machine translation (PBMT) approach proposed by Miller et al. (2006). Unlike most PBMT approaches that rely on the statistical occurrence of phrases, CBMT works on the contextual occurrence of phrases. CBMT uses a bilingual dictionary as its main translator and produces phrases to be flooded into a large target corpus. The CBMT approach addresses the problem of parallel corpus scarcity between language pairs. The parallel corpus set for the English-Amharic language pair, for instance, is composed of the Bible, the Ethiopian constitution and international documents. These sources use words specific to their domain and overlook phrases and words used by novels, news and similar literary documents. CBMT uses synonyms of words in place of rare words and relies on a large target corpus and a bilingual dictionary to cope with data scarcity (Miller et al., 2006). It is not dependent on a large parallel corpus like most PBMT approaches such as statistical machine translation (SMT) (Brown et al., 1990) and example-based machine translation (EBMT) (Gangadharaiah, 2011). CBMT, however, falls short of fluently translating texts compared to neural machine translation (NMT). NMT learns the pattern of human translation using a human-translated parallel corpus. Its translations are more fluent and accurate than all the rest so far when evaluated individually (Popovic, 2017). However, NMT struggles to properly translate rare words and words not commonly used (Wu et al., 2016). In addition, NMT requires a large parallel corpus for training. The aim of this research is to build a system combining CBMT with NMT for English-to-Amharic translation. The combination of PBMT and NMT is more promising than the individual approaches themselves (Popovic, 2017). CBMT's ability to address rare words and NMT's ability to produce fluent translation, along with their context awareness, make them a complementary pair. The combination is done by providing the NMT with two inputs, one from the source language and the other from the output of the CBMT, to produce the final target sentence. In this paper, we show that this approach exploits the strength of each method to achieve a significant translation performance improvement over simple NMT. The improvement depends mostly on the performance of the CBMT, and in particular on the CBMT's bilingual dictionary. 2 RELATED WORKS. PBMT approaches are mostly used to translate English to Amharic, as in Gasser (2012), Tadesse & Mekuria (2000), Teshome (2000), Besacier et al. (2000), Zewgneh (2017) and Taye et al. (2015). Below we summarize the research most relevant to ours. The SMT approach takes a parallel corpus as input and selects the most frequent target phrase based on statistical analysis for each searched source phrase (Brown et al., 1990). The SMT approach applied to the English-Amharic pair has produced an 18.74% BLEU score (Tadesse & Mekuria, 2000). SMT has good accuracy in translating all the words in a sentence but it is not fluent (Oladosu et al., 2016). A hybrid of SMT and rule-based machine translation (RBMT) translates and orders the source text based on the grammar rules of the target language and sends it to the SMT for final translation (Yulianti et al., 2011; Labaka et al., 2014).
The hybrid approach for the English-Amharic pair has achieved a 15% improvement over SMT on simple sentences and a 20% improvement on complex sentences (Zewgneh, 2017). A hybrid of RBMT and SMT gets fluency from RBMT and accuracy from SMT, but for longer sentences the reordering fails (Oladosu et al., 2016). The CBMT approach has been implemented for the Spanish-English language pair. In CBMT, the source phrases are translated using a bilingual dictionary and flooded into the target corpus. It achieved a 64.62% BLEU score on the researchers' dataset (Miller et al., 2006). CBMT outperforms SMT in accuracy and fluency, but translation of phrases with words not in the bilingual dictionary is weak (Miller et al., 2006). NMT has been researched by different groups; here the work by Google's researchers on the English-French language pair is presented. The NMT model is trained using a parallel corpus. The source sentence is encoded as a vector and then decoded with the help of an attention model. Google's NMT model achieved a 38.95% BLEU score (Wu et al., 2016). NMT has accuracy and fluency, but it can fail to translate the whole sentence and also fails to perform well on rare words (Wu et al., 2016). Using sub-word units has been suggested to solve this (Sennrich et al., 2016), but Amharic has unique traits like "Tebko-lalto", one word with two different meanings, which can only be addressed by using context. NMT has been modified to translate low-resourced languages. One approach uses universal lexical representation (ULR), where a word is represented using universal word embeddings. This benefits low-resource languages which have semantic similarity with high-resourced languages (Gu et al., 2018); it achieved a 5% BLEU score improvement over normal NMT. However, most southern Semitic languages like Amharic do not have a strongly semantically related language with large resources. NMT has also been modified to work with a monolingual corpus instead of a parallel corpus using cross-lingual word embeddings (Artetxe et al., 2017). Such an approach achieved a 15.56% BLEU score, which was less than the semi-supervised and supervised approaches, which achieved 21.81% and 20.48% BLEU scores respectively. A combination of NMT and PBMT which takes the output of SMT (a PBMT) and the source sentence to train the NMT model has been used for the English-German language pair. It achieved 2 BLEU points over basic NMT and PBMT (Niehues et al., 2016). A combination of NMT and PBMT with three inputs, namely the output of basic NMT, the output of SMT and the output of hierarchical PBMT (HPBMT), has been implemented for the English-Chinese language pair. It achieved 6 BLEU points over basic NMT and 5.3 BLEU points over HPBMT (Zhang et al., 2017). The combination of PBMT and NMT performs better (Popovic, 2017) in terms of accuracy and fluency, but it depends on the performance of the chosen PBMT approach. 3 METHODOLOGY. In this research, we have selected CBMT and NMT to form a combined system. This approach addresses the context unawareness of some PBMT approaches like SMT and the need for a large parallel corpus of simple NMT. In our approach, the source sentence in English and the translation output of the CBMT in Amharic are fed to the NMT's encoder-decoder model, as shown in Figure 1. The NMT model then produces the final Amharic translation.
The combination of the CBMT and the NMT follows the mixed approach proposed by Niehues et al. (2016). Their mixed approach feeds the NMT with the source sentence and the output of the PBMT. The research by Zhang et al. (2017) also supports this way of combining different systems. 3.1 CBMT SYSTEM. CBMT outperforms RBMT, SMT and EBMT when it comes to languages with small parallel corpora (Miller et al., 2006). It uses a bilingual dictionary, a large target corpus and a smaller source corpus, which is optional. In context-based machine translation, there are different components working together to produce the translation. Figure 2 shows the flow of data through the different components of the CBMT. The source sentence is converted into N-gram phrases and then translated using the bilingual dictionary. CBMT's performance depends mostly on the quality of the dictionary. We have manually built a phrase-based dictionary aided by Google Translate. A synonym finder assists the dictionary search using WordNet (Soergel, 1998). WordNet is a library with a large lexical database of English words. It provides synonyms for the English words whose Amharic translations are not in the dictionary. In this paper, a maximum N-gram length of four has been used; most English phrases that translate to a single Amharic word have a length of four or fewer words. For example, the English phrase "everyone who calls on the name of the lord will be saved" has the translations in Output 1 using our dictionary. Output 1: The translated output of the N-grams. These translations have been combined into sentences in the same order as the source sentence. Then each sentence is converted into N-grams of variable length. The maximum flooded N-gram length is len(Ngram)/2 + 1 if len(Ngram) ≥ 4, and len(Ngram) otherwise. This provides a good range for capturing neighboring words in a single N-gram. Output 2 shows sentences formed using the translations in Output 1. The N-grams to be flooded are formed by sliding one word from left to right through the combined sentence. Output 2: The translated output combined into sentences. Output 3 shows the N-grams for the translated sentences shown in Output 2. Output 3: The N-grams for the translated sentences. The flooder is then responsible for searching the translated phrases in the target corpus and finding the longest N-gram match. For each phrase to be flooded, it selects a phrase in the target corpus with the most translated words and the fewest in-between words amongst the words matched. The flooder produces the result in Output 4, with the Book of Romans as the target corpus to be flooded. Output 4: Final output of the flooder for a single flooded file. The N-gram connector combines the flooded text to find the longest overlap of the translated target text. The overlapping system favors those with the least number of unsearched words found in between the searched N-grams when calculating the overlap. Output 5 shows the final outcome of the N-gram connector. The system selects the maximum or longest overlapping phrases from the combiner and merges them to form the final target sentence. So finally, the system produces the Amharic translation for the example English phrase "everyone who calls on the name of the lord will be saved". 3.2 NMT SYSTEM. In this paper, we have used a recurrent neural network (RNN) for the NMT. In an RNN, the output is fed back to the neuron so that it learns from both the fresh input and its previous output.
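A rough sketch of the N-gram flooding step described above, under simplifying assumptions: matching is done on exact word overlap, and the "most matched words, fewest in-between words" criterion is approximated as matched-word count minus a small penalty per gap word. Function names and the penalty weight are illustrative.

```python
def score_span(ngram, target_words):
    """Score one corpus sentence for one N-gram: count matched words and subtract a
    small penalty for unmatched words lying between the first and last match."""
    ngram_set = set(ngram)
    hits = [i for i, w in enumerate(target_words) if w in ngram_set]
    if not hits:
        return 0.0, None
    matched = len(ngram_set & set(target_words))
    gap = (hits[-1] - hits[0] + 1) - len(hits)   # unmatched words inside the window
    return matched - 0.1 * gap, (hits[0], hits[-1])

def flood_ngrams(translated_sentence, target_corpus_sentences):
    """Sketch of the flooding step: slide variable-length N-grams over the
    dictionary-translated sentence and keep, for each, the best-scoring corpus span."""
    words = translated_sentence.split()
    n = len(words)
    max_len = n // 2 + 1 if n >= 4 else n        # length rule quoted in the paper
    matches = []
    for size in range(1, max_len + 1):
        for start in range(n - size + 1):
            ngram = words[start:start + size]
            best = max((score_span(ngram, sent.split()) for sent in target_corpus_sentences),
                       key=lambda s: s[0], default=(0.0, None))
            matches.append((" ".join(ngram), best))
    return matches
```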
This improves the RNN's performance because it learns from its errors while training. The neural cell used is the LSTM (long short-term memory), introduced by Hochreiter & Schmidhuber (1997). We have used LSTM cells for both encoding and decoding of the sentences. For decoding, a greedy algorithm has been used. The algorithm selects the first-fit word with the highest probability of occurrence, where probability refers to the probability of being the translation and of appearing next to the preceding word. The system has an attention layer between the encoder layer and the decoder layer. We have used the Luong attention model (Luong et al., 2015). Equations 1 through 4 show the Luong attention model's computation.

$$\alpha_{ts} = \frac{\exp(\mathrm{score}(h_t, h_s))}{\sum_{s'=1}^{S} \exp(\mathrm{score}(h_t, h_{s'}))} \quad \text{[Attention weights]} \quad (1)$$

$$c_t = \sum_s \alpha_{ts} h_s \quad \text{[Context vector]} \quad (2)$$

$$a_t = f(c_t, h_t) = \tanh(W_c [c_t ; h_t]) \quad \text{[Attention vector]} \quad (3)$$

$$\mathrm{score}(h_t, h_s) = h_t^\top W h_s \quad \text{[Luong's multiplicative style]} \quad (4)$$

Output 5: Final output of the N-gram connector. The score function, calculated using Equation 4, compares the output of the decoder (h_t) with the output of the encoder (h_s) in order to obtain the attention weight calculated using Equation 1. The attention weights (α_ts) are then used to form the context vector (c_t) calculated by Equation 2. This context vector, together with the output of the decoder, is then used to produce the final output of the decoder using Equation 3.
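A minimal NumPy sketch of Equations 1-4 (Luong multiplicative attention) as written above, assuming a single decoder state h_t, a matrix H_s of encoder states stacked row-wise, and given weight matrices W and W_c. It is for illustration only, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def luong_attention(h_t, H_s, W, W_c):
    """Luong multiplicative attention, Eqs. (1)-(4):
    score(h_t, h_s) = h_t^T W h_s, alpha = softmax(scores),
    c_t = sum_s alpha_s h_s, a_t = tanh(W_c [c_t ; h_t])."""
    scores = H_s @ (W @ h_t)                          # one score per encoder state, Eq. (4)
    alpha = softmax(scores)                           # attention weights, Eq. (1)
    c_t = alpha @ H_s                                 # context vector, Eq. (2)
    a_t = np.tanh(W_c @ np.concatenate([c_t, h_t]))   # attention vector, Eq. (3)
    return a_t, alpha
```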
This paper aims to combine a traditional CBMT system with an NMT system. The core idea of the paper is to use the output of the CBMT system as a second source to a multi-source NMT system. The first source of the system is the CBMT output, the second is the original source, and the output is the translation in the target language. All the experiments are conducted for English-Amharic with a small amount of parallel data from the Bible. Considerable detail is provided about the construction of the CBMT system using Google Translate and about the approach used to create such a system.
SP:e65bda143869e9a4d75b7e7ee893a2ed7b8e822a
Context Based Machine Translation With Recurrent Neural Network For English-Amharic Translation
This paper presents a machine translation system based on a combination of a neural machine translation system (NMT) and a context-based machine translation (CBMT). The method is evaluated on a small parallel corpus application of English-Amharic translation. The idea is that in the small corpus setting, the CBMT can leverage a manually built bilingual dictionary.
SP:e65bda143869e9a4d75b7e7ee893a2ed7b8e822a
Information Geometry of Orthogonal Initializations and Training
1 INTRODUCTION. Deep neural networks (DNNs) have shown tremendous success in computer vision problems, speech recognition, amortized probabilistic inference, and the modelling of neural data. Despite their performance, DNNs face obstacles in their practical application, which stem from both the excessive computational cost of running gradient descent for a large number of epochs, and the inherent brittleness of gradient descent applied to very deep models. A number of heuristic approaches such as batch normalization, weight normalization and residual connections (He et al., 2016; Ioffe & Szegedy, 2015; Salimans & Kingma, 2016) have emerged in an attempt to address these trainability issues. Recently, mean field theory has been successful in developing a more principled analysis of the gradients of neural networks, and has become the basis for a new random initialization principle. The mean field approach postulates that in the limit of infinitely wide random weight matrices, the distribution of pre-activations converges weakly to a Gaussian. Using this approach, a series of works proposed to initialize the networks in such a way that for each layer the input-output Jacobian has mean singular value 1 (Schoenholz et al., 2017). This requirement was further strengthened to suggest that the spectrum of singular values of the input-output Jacobian should concentrate around 1, and it was shown that this can only be achieved with random orthogonal weight matrices. Under these conditions the backpropagated gradients are bounded in ℓ2 norm (Pennington et al., 2017) irrespective of depth, i.e., they neither vanish nor explode. It was shown experimentally in (Pennington et al., 2017; Xiao et al., 2018; Chen et al., 2018) that networks with these critical initial conditions train orders of magnitude faster than networks with arbitrary initializations. The empirical success invites questions from an optimization perspective on how the spectrum of the hidden-layer input-output Jacobian relates to notions of curvature of the parameter space, and consequently to convergence rate. The largest effective (initial) step size η0 for stochastic gradient descent is inversely proportional to the local gradient smoothness M (Bottou et al., 2018; Boyd & Vandenberghe, 2004). Intuitively, the gradient step can be at most as large as the fastest change in the parameter landscape. Recent attempts have been made to analyze the mean field geometry of the optimization using the Fisher information matrix (FIM) (Amari et al., 2019; Karakida et al., 2019). The theoretical and practical appeal of measuring curvature with the FIM is due to, among other reasons, the fact that the FIM is necessarily positive (semi-)definite even for non-convex objectives, and due to its intimate relationship with the Hessian matrix. Karakida et al. (2019) derived an upper bound on the maximum eigenvalue; however, this bound is not satisfactory since it is agnostic to the entire spectrum of singular values and therefore cannot differentiate between Gaussian and orthogonal weight initializations. In this paper, we develop a new bound on the parameter curvature M given the maximum eigenvalue of the Fisher information λmax(G), which holds for random neural networks with both Gaussian and orthogonal weights.
We derive this quantity to inspect the relation between the singular value distribution of the input-output Jacobian and the locally maximal curvature of the parameter space. We use this result to probe different orthogonal, nearly isometric initializations, and observe that, broadly speaking, networks with a smaller initial curvature train faster and generalize better, as expected. However, consistent with a previous report (Pennington et al., 2018), we also observe that highly isometric networks perform worse despite having a slowly varying loss landscape (i.e., small initial λmax(G)). We conjecture that the long-term optimization behavior depends non-trivially on the smallest eigenvalue m, and that there is therefore, surprisingly, a sweet spot with respect to the condition number. We then investigate whether constraining the spectrum of the Jacobian matrix of each layer affects the optimization rate. We do so by training networks using Riemannian optimization to constrain their weights to be orthogonal, or nearly orthogonal, and we find that manifold-constrained networks are insensitive to the maximal curvature at the beginning of training, unlike unconstrained gradient descent (hereafter "Euclidean"). In particular, we observe that the advantage conferred by optimizing over manifolds cannot be explained by the improvement of the gradient smoothness as measured by λmax(G). Finally, we observe that, contrary to Bansal et al. (2018)'s results, Euclidean networks with a carefully designed initialization reduce the test misclassification error at approximately the same rate as their manifold-constrained counterparts, and overall attain a higher accuracy. 2 BACKGROUND. 2.1 FORMAL DESCRIPTION OF THE NETWORK. Following (Pennington et al., 2017; 2018; Schoenholz et al., 2017), we consider a feed-forward, fully connected neural network with L hidden layers. Each layer l ∈ {1, ..., L} is given as a recursion of the form

$$x^l = \phi(h^l), \qquad h^l = W^l x^{l-1} + b^l \quad (1)$$

where x^l are the activations, h^l are the pre-activations, W^l ∈ R^{N^l × N^{l-1}} are the weight matrices, b^l are the bias vectors, and φ(·) is the activation function. The input is denoted x^0. The output layer of the network computes ŷ = g^{-1}(h^g), where g is the link function of some generalized linear model (GLM) and h^g = W^g x^L + b^g. The hidden-layer input-output Jacobian matrix is

$$J^{x^L}_{x^0} \triangleq \frac{\partial x^L}{\partial x^0} = \prod_{l=1}^{L} D^l W^l \quad (2)$$

where D^l is a diagonal matrix with entries D^l_{i,i} = φ'(h^l_i). As pointed out in (Pennington et al., 2017; Schoenholz et al., 2017), the conditioning of the Jacobian matrix affects the conditioning of the backpropagated gradients for all layers. 2.2 CRITICAL INITIALIZATIONS. Extending the classic result on the Gaussian process limit for wide layer width obtained by (Neal, 1996), recent work (Matthews et al., 2018; Lee et al., 2018) has shown that for deep untrained networks with the elements of their weight matrices W_{i,j} drawn from a Gaussian distribution N(0, σ²_W / N^l), the empirical distribution of the pre-activations h^l converges weakly to a Gaussian distribution N(0, q^l I) for each layer l in the limit of the width N → ∞. Similarly, it has been postulated that random orthogonal matrices scaled by σ_W give rise to the same limit.
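Equation 2 can be probed numerically with a short sketch that draws a random tanh network (biases omitted for simplicity), accumulates J = prod_l D^l W^l along a forward pass, and returns its singular values, so that Gaussian and orthogonal draws can be compared. The width, depth, and sigma_w defaults are illustrative only.

```python
import numpy as np

def random_orthogonal(N, rng):
    """Haar-distributed orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(N, N)))
    return q * np.sign(np.diag(r))     # sign fix so the distribution is Haar

def jacobian_singular_values(L=50, N=200, sigma_w=1.0, orthogonal=True, seed=0):
    """Sketch of Eq. (2): J = prod_l D^l W^l for a deep tanh network,
    allowing a comparison of the spectra of Gaussian vs. orthogonal draws."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=N)
    J = np.eye(N)
    for _ in range(L):
        W = (sigma_w * random_orthogonal(N, rng) if orthogonal
             else rng.normal(scale=sigma_w / np.sqrt(N), size=(N, N)))
        h = W @ x
        D = np.diag(1.0 - np.tanh(h) ** 2)   # D^l_{ii} = phi'(h^l_i) for phi = tanh
        J = D @ W @ J
        x = np.tanh(h)
    return np.linalg.svd(J, compute_uv=False)
```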
Under this mean-field condition, the variance of the pre-activation distribution $q^l$ is recursively given by
$$q^l = \sigma_W^2 \int \phi\big(\sqrt{q^{l-1}}\, h\big)^2\, d\mu(h) + \sigma_b^2 \qquad (3)$$
where $\mu(h)$ denotes the standard Gaussian measure $\frac{dh}{\sqrt{2\pi}} \exp(-\frac{h^2}{2})$ and $\sigma_b^2$ denotes the variance of the Gaussian distributed biases (Schoenholz et al., 2017). The variance of the first-layer pre-activations $q^1$ depends on the squared $\ell_2$ norm of the inputs, $q^1 = \frac{\sigma_W^2}{N^1} \|x^0\|_2^2 + \sigma_b^2$. The recursion defined in equation 3 has a fixed point
$$q^* = \sigma_W^2 \int \phi\big(\sqrt{q^*}\, h\big)^2\, d\mu(h) + \sigma_b^2 \qquad (4)$$
which can be satisfied for all layers by appropriately choosing $\sigma_W$, $\sigma_b$ and scaling the input $x^0$. To permit the mean field analysis of backpropagated signals, the authors (Schoenholz et al., 2017; Pennington et al., 2017; 2018; Karakida et al., 2019) further assume the propagated activations and backpropagated gradients to be independent. Specifically,

Assumption 1 (Mean field assumptions). (i) $\lim_{N\to\infty} h \xrightarrow{d} \mathcal{N}(0, q^*)$; (ii) $\lim_{N\to\infty} \mathrm{Cov}\big[J^{h^g}_{x^{i+1}} h^i,\; J^{h^g}_{x^{j+1}} h^j\big] = 0$ for all $i \neq j$.

Under this assumption, the authors (Schoenholz et al., 2017; Pennington et al., 2017) analyze the distributions of singular values of Jacobian matrices between different layers in terms of a small number of parameters, with the calculations of the backpropagated signals proceeding in the same fashion as the calculations for the forward propagation of activations. A corollary of Assumption 1 and the condition in equation 4 is that $\phi'(h^l)$ for $1 \le l \le L$ are i.i.d. In order to ensure that $J^{x^L}_{x^0}$ is well conditioned, (Pennington et al., 2017) require that, in addition to the variance of the pre-activations being constant for all layers, two additional constraints be met. Firstly, they require that the mean square singular value of $DW$ for each layer have a certain value in expectation,
$$\chi = \frac{1}{N}\, \mathbb{E}\big[\mathrm{Tr}\big[(DW)^\top DW\big]\big] = \sigma_W^2 \int \big[\phi'\big(\sqrt{q^*}\, h\big)\big]^2\, d\mu(h). \qquad (5)$$
Given that the mean squared singular value of the Jacobian matrix $J^{x^L}_{x^0}$ is $\chi^L$, setting $\chi = 1$ corresponds to a critical initialization where the gradients are asymptotically stable as $L \to \infty$. Secondly, they require that the maximal squared singular value $s^2_{max}$ of the Jacobian $J^{x^L}_{x^0}$ be bounded. Pennington et al. (2017) showed that for weights with Gaussian distributed elements, the maximal singular value increases linearly in depth even if the network is initialized with $\chi = 1$. Fortunately, for orthogonal weights, the maximal singular value $s_{max}$ is bounded even as $L \to \infty$ (Pennington et al., 2018).

3 THEORETICAL RESULTS: RELATING THE SPECTRA OF JACOBIAN AND FISHER INFORMATION MATRICES .

To better understand the geometry of the optimization landscape, we wish to put a Lipschitz bound on the gradient, which in turn gives an upper bound on the largest step size of any first-order optimization algorithm. For a general objective function f, the condition is equivalent to
$$\|\nabla f(x) - \nabla f(x')\|_2 \le M \|x - x'\|_2 \quad \text{for all } x, x' \in S \subseteq \mathbb{R}^d.$$
The Lipschitz constant ensures that the gradient does not change arbitrarily fast with respect to $x, x'$, and therefore $\nabla f$ defines a descent direction for the objective over a distance on the order of $1/M$. In general, estimating the Lipschitz constant is NP-hard (Kunstner et al., 2019); we therefore seek local measures of curvature along the optimization trajectory. As we show below, the approximate gradient smoothness is tractable for randomly initialized neural networks.
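Since the gradient smoothness M is what limits the step size, a crude but useful numerical check is a local finite-difference estimate of M around a point; this generic sketch is ours, not the paper's, and it only lower-bounds the true constant.

```python
import numpy as np

def local_smoothness(grad_fn, x, n_dirs=20, eps=1e-3, rng=None):
    """Local estimate of the gradient Lipschitz constant M near x:
    max over random unit directions of ||grad(x + eps*u) - grad(x)|| / eps."""
    rng = np.random.default_rng(rng)
    g0 = grad_fn(x)
    best = 0.0
    for _ in range(n_dirs):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        best = max(best, np.linalg.norm(grad_fn(x + eps * u) - g0) / eps)
    return best

# sanity check: for f(x) = 0.5 x^T A x the true smoothness is lambda_max(A) = 10
A = np.diag([1.0, 3.0, 10.0])
print(local_smoothness(lambda x: A @ x, np.zeros(3)))  # approaches 10 from below
```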
The analytical study of Hessians of random neural networks started with (Pennington & Bahri, 2017), but was limited to shallow architectures. Subsequent work by Amari et al. (2019) and Karakida et al. (2019) on the second-order geometry of random networks shares much of the spirit of the current work, in that it proposes to replace the possibly indefinite Hessian with the related Fisher information matrix as a measure of curvature. The Fisher information matrix plays a fundamental role in the geometry of probabilistic models under the Kullback-Leibler divergence loss: it defines a (local) Riemannian metric, which in turn defines distances on the manifold of probability distributions generated by the model. Notably, the FIM does not define a unique metric on this statistical manifold, and alternative notions of intrinsic curvature can be derived by replacing the Kullback-Leibler divergence with the 2-Wasserstein distance (Li & Montúfar, 2018). Moreover, since the Fisher information matrix bears a special relation to the Hessian, it can also be seen as defining an approximate curvature matrix for second-order optimization. Recall that the FIM is defined as

Definition (Fisher Information Matrix).
$$G \triangleq \mathbb{E}_{p_\theta(y|x^0)}\Big[\mathbb{E}_{p(x^0)}\big[\nabla_\theta \log p_\theta(y|x^0)\, \nabla_\theta \log p_\theta(y|x^0)^\top\big]\Big] \qquad (6)$$
$$= \mathbb{E}_{p_\theta(y|x^0)}\Big[\mathbb{E}_{p(x^0)}\big[J^{h^g\,\top}_\theta \nabla^2_{h^g}\mathcal{L}\, J^{h^g}_\theta\big]\Big] = \mathbb{E}_{p_\theta(y|x^0)}\Big[\mathbb{E}_{p(x^0)}\big[H - \textstyle\sum_k \nabla_{h^g}\mathcal{L}_k\, \nabla^2_\theta h^g_k\big]\Big] \qquad (7)$$
where $\mathcal{L}$ denotes the loss and $h^g$ is the output layer. The relation between the Hessian and Fisher information matrices is apparent from equation 7, which shows that the Hessian H is a quadratic form of the Jacobian matrices plus the possibly indefinite matrix of second derivatives with respect to the parameters. Our goal is to express the gradient smoothness using the results of the previous section. Given equation 7, we can derive an analytical approximation to the Lipschitz bound; i.e., we will express the expected maximum eigenvalue of the random Fisher information matrix in terms of the expected maximum singular value of the Jacobian $J^{h^L}_{h^1}$. To do so, let us consider the output of a multilayer perceptron as defining a conditional probability distribution $p_\theta(y|x^0)$, where $\Theta = \{\mathrm{vec}(W^1), \ldots, \mathrm{vec}(W^L), b^1, \ldots, b^L\}$ is the set of all hidden layer parameters, and $\theta$ is the column vector containing the concatenation of all the parameters in $\Theta$. As observed by Martens & Grosse (2015), the Fisher of a multilayer network naturally has a block structure, with each block corresponding to the weights and biases of one layer. The blocks with respect to parameter vectors $a, b \in \Theta$ can further be expressed as
$$G_{a,b} = J^{h^g\,\top}_a H_g J^{h^g}_b \qquad (8)$$
where the final-layer Hessian $H_g$ is defined as $\nabla^2_{h^g} \log p_\theta(y|x^0)$. We can re-express the outer product of the score function $\nabla_{h^g} \log p_\theta(y|x^0)$ as the second derivative of the log-likelihood (see equation 6), provided it satisfies certain technical conditions. What is important for us is that all canonical link functions for generalized linear models, such as the softmax function and the identity function, allow this re-writing, and that this re-writing allows us to drop the conditional expectation with respect to $p_\theta(y|x^0)$. The Jacobians in equation 8 can be computed iteratively. Importantly, the Jacobian from the output layer to the a-th parameter block is just the product of diagonal activation and weight matrices multiplied by the Jacobian from the α-th layer to the a-th parameter.
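To ground the definition in Eq. (6), the following toy NumPy sketch Monte Carlo estimates the Fisher information of a bare softmax (GLM) readout, i.e. a network with no hidden layers; the function name, the row-major vec convention for W, and the sampling of labels from the model's own predictive distribution are our illustrative choices.

```python
import numpy as np

def empirical_fim(W, b, X, n_label_samples=20, rng=None):
    """Monte Carlo estimate of G = E_x E_{y~p(y|x)}[ grad log p  grad log p^T ]
    for y ~ Categorical(softmax(W x + b)); parameters are (vec(W), b)."""
    rng = np.random.default_rng(rng)
    samples = []
    for x in X:
        logits = W @ x + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        for _ in range(n_label_samples):
            y = rng.choice(len(p), p=p)
            e = np.eye(len(p))[y]
            # d log p(y|x) / d vec(W) = vec((e_y - p) x^T),  d/db = e_y - p
            g = np.concatenate([np.outer(e - p, x).ravel(), e - p])
            samples.append(np.outer(g, g))
    return np.mean(samples, axis=0)

rng = np.random.default_rng(0)
G = empirical_fim(rng.standard_normal((3, 5)), np.zeros(3), rng.standard_normal((40, 5)))
print(np.linalg.eigvalsh(G).max())  # lambda_max(G) for this toy model
```

For a deep network the same expectation would be assembled blockwise from the per-layer Jacobians of Eq. (8).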
We define these matrices of partial derivatives of the α-th layer pre-activations with respect to the layer-specific parameters, separately for $W^\alpha$ and $b^\alpha$, as
$$J^{h^\alpha}_a = x^{\alpha-1\,\top} \otimes I \quad \text{for } a = \mathrm{vec}(W^\alpha) \qquad (9)$$
$$J^{h^\alpha}_a = I \quad \text{for } a = b^\alpha. \qquad (10)$$
Under the infinitesimally weak correlation assumption (see Assumption 1), we can further simplify the expression for the blocks of the Fisher information matrix in equation 8.

Lemma 1. The expected blocks with respect to the weight matrices for all layers $\alpha, \beta \neq 1$ are
$$G_{\mathrm{vec}(W^\alpha), \mathrm{vec}(W^\beta)} = \mathbb{E}\big[x^{\alpha-1} x^{\beta-1\,\top}\big] \otimes \mathbb{E}\big[J^{h^g\,\top}_{h^\alpha} H_g J^{h^g}_{h^\beta}\big] \qquad (11)$$

Lemma 2. The expected blocks with respect to a weight matrix $W^\alpha$ and a bias vector $b^\beta$ are
$$G_{\mathrm{vec}(W^\alpha), b^\beta} = \mathbb{E}\big[x^{\alpha-1\,\top} \otimes I\big]\, \mathbb{E}\big[J^{h^g\,\top}_{h^\alpha} H_g J^{h^g}_{h^\beta}\big] \qquad (12)$$

The crucial observation here is that in the mean-field limit the expectation of the product of activations $x^{\alpha-1}, x^{\beta-1}$ is either zero or rank 1 for activations in different layers. The case when both activations are in the same layer is trivially taken care of by our mean-field assumptions: the term is equal to the second non-central moment, i.e., the covariance plus potentially a rank-one mean term. Now, leveraging Lemmas 1 and 2, we derive a block-diagonal approximation, which in turn allows us to bound the maximum eigenvalue $\lambda_{max}(G)$. In doing so we will use a corollary of the block Gershgorin theorem.

Proposition 1 ((informal) block Gershgorin theorem). The maximum eigenvalue $\lambda_{max}(G)$ is contained in a union of disks centered around the maximal eigenvalue of each diagonal block, with radii equal to the sum of the singular values of the off-diagonal terms.

For a rigorous statement of the theorem see Appendix A.1. It is noteworthy that block-diagonal approximations have been crucial to the application of Fisher information matrices as preconditioners in stochastic second-order methods (Botev et al., 2017; Martens & Grosse, 2015). These methods were motivated by practical performance in their choice of the number of diagonal blocks used for preconditioning. Under the mean-field assumptions we are able to show computable bounds on the error in approximating the spectrum of the Fisher information matrix. Proposition 1 suggests a simple, easily computable way to bound the expected maximal eigenvalue of the Fisher information matrix: choose the block with the largest eigenvalue and the expected spectral radii of the corresponding off-diagonal terms. We do so by making an auxiliary assumption:

Assumption 2. The maximum singular value of $J^{h^g}_{h^\alpha}$ monotonically increases as $\alpha \downarrow 1$.

We motivate this assumption in a twofold fashion: firstly, the work done by Pennington et al. (2017; 2018) shows that the spectral edge, i.e., the maximal non-negative singular value in the support of the spectral distribution, increases with depth; secondly, it has been commonly observed in numerical experiments that very deep neural networks have ill-conditioned gradients. Under this assumption it is sufficient to study the maximal singular value of the blocks of the Fisher information matrix with respect to $\mathrm{vec}(W^1), b^1$ and the spectral norms of its corresponding off-diagonal blocks. We define functions $\Sigma_{max}$ of each block as upper bounds on the spectral norms of the respective block. The specific values are given in the following lemma:

Lemma 3.
The maximum expected singular values of the off-diagonal blocks, for all $\beta \neq 1$, are bounded by $\Sigma_{max}(\cdot)$: for weight-to-weight blocks,
$$\mathbb{E}\big[\sigma_{max}\big(G_{\mathrm{vec}(W^1), \mathrm{vec}(W^\beta)}\big)\big] \le \Sigma_{max}\big(G_{\mathrm{vec}(W^1), \mathrm{vec}(W^\beta)}\big) \qquad (13)$$
$$\triangleq \sqrt{N^\beta}\, \big|\mathbb{E}[\phi(h)]\big|\, \big\|\mathbb{E}[x^0]\big\|_2\, \mathbb{E}\big[\sigma_{max}\big(J^{h^g\,\top}_{h^1}\big)\big]\, \mathbb{E}\big[\sigma_{max}(H_g)\big]\, \mathbb{E}\big[\sigma_{max}\big(J^{h^g}_{h^\beta}\big)\big]; \qquad (14)$$
for weight-to-bias blocks,
$$\mathbb{E}\big[\sigma_{max}\big(G_{\mathrm{vec}(W^1), b^\beta}\big)\big] \le \Sigma_{max}\big(G_{\mathrm{vec}(W^1), b^\beta}\big) \qquad (15)$$
$$\triangleq \big|\mathbb{E}[\phi(h)]\big|\, \mathbb{E}\big[\sigma_{max}\big(J^{h^g\,\top}_{h^1}\big)\big]\, \mathbb{E}\big[\sigma_{max}(H_g)\big]\, \mathbb{E}\big[\sigma_{max}\big(J^{h^g}_{h^\beta}\big)\big]; \qquad (16)$$
and for bias-to-bias blocks,
$$\mathbb{E}\big[\sigma_{max}\big(G_{b^1, b^\beta}\big)\big] \le \Sigma_{max}\big(G_{b^1, b^\beta}\big) \triangleq \mathbb{E}\big[\sigma_{max}\big(J^{h^g\,\top}_{h^1}\big)\big]\, \mathbb{E}\big[\sigma_{max}(H_g)\big]\, \mathbb{E}\big[\sigma_{max}\big(J^{h^g}_{h^\beta}\big)\big]. \qquad (17)$$
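The block Gershgorin argument of Proposition 1 is easy to state in code for any symmetric positive semi-definite matrix with a given block partition; the sketch below is a generic illustration of the bound (largest diagonal-block eigenvalue plus the spectral norms of the off-diagonal blocks in that block-row), not the statement in Appendix A.1.

```python
import numpy as np

def block_gershgorin_bound(G, sizes):
    """Upper bound on lambda_max(G): max over block-rows i of
    lambda_max(G_ii) + sum_{j != i} sigma_max(G_ij)."""
    idx = np.cumsum([0] + list(sizes))
    bound = -np.inf
    for i in range(len(sizes)):
        Gii = G[idx[i]:idx[i + 1], idx[i]:idx[i + 1]]
        center = np.linalg.eigvalsh(Gii).max()
        radius = sum(np.linalg.norm(G[idx[i]:idx[i + 1], idx[j]:idx[j + 1]], 2)
                     for j in range(len(sizes)) if j != i)
        bound = max(bound, center + radius)
    return bound

# sanity check on a random PSD matrix partitioned into three 20x20 blocks
A = np.random.default_rng(0).standard_normal((30, 60))
G = A.T @ A
print(np.linalg.eigvalsh(G).max(), block_gershgorin_bound(G, [20, 20, 20]))
```

Replacing the diagonal blocks and off-diagonal spectral norms by the expected quantities of Lemma 3 gives the analytical version of the bound pursued in the text above.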
This paper analyses the training behavior of wide networks and argues that orthogonal initialization helps training. The authors suggest projecting onto the manifold of orthogonal weights during training and provide an accompanying analysis. Their main result seems to be a bound on the eigenvalues of the Fisher information matrix for wide networks (Theorem on pg. 6). In their experiments they train Stiefel and Oblique networks as examples of manifold-constrained networks and claim they converge faster than unconstrained networks.
SP:7a47dd0e43f8e18913551cdb7207ad3333472e22
Information Geometry of Orthogonal Initializations and Training
This paper formulates a connection between the Fisher information matrix (FIM) and the spectral radius of the input-output Jacobian in neural networks. This result yields a bound on the eigenvalues that is used to theoretically study the convergence of several networks. The upper bound presented here improves on the upper bound for the FIM derived in (Karakida et al., 2018).
Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings
1 INTRODUCTION

Joint image-text representations find application in cross-domain tasks such as image-conditioned text generation (captioning; Mao et al., 2015; Karpathy & Fei-Fei, 2017; Xu et al., 2018) and text-conditioned image synthesis (Reed et al., 2016). Yet, image and text distributions follow distinct generative processes, making joint generative modeling of the two distributions challenging. Current state-of-the-art models for learning joint image-text distributions encode the two domains in a common shared latent space in a fully supervised setup (Gu et al., 2018; Wang et al., 2019). While such approaches can model supervised information in the shared latent space, they do not preserve domain-specific information. However, as the domains under consideration, e.g. images and texts, follow distinct generative processes, many-to-many mappings naturally emerge: there are many likely captions for a given image and vice versa. Therefore, it is crucial to also encode domain-specific variations in the latent space to enable many-to-many mappings. State-of-the-art models for cross-domain synthesis leverage conditional variational autoencoders (VAEs, cVAEs; Kingma & Welling, 2014) or generative adversarial networks (GANs; Goodfellow et al., 2014) for learning conditional distributions. However, such generative models (e.g., Wang et al., 2017; Aneja et al., 2019) enforce a Gaussian prior in the latent space. Gaussian priors can result in strong regularization or posterior collapse as they impose stringent constraints while modeling complex distributions in the latent space (Tomczak & Welling, 2018). This severely limits the accuracy and diversity of the cross-domain generative model. Recent work (Ziegler & Rush, 2019; Bhattacharyya et al., 2019) has found normalizing flows (Dinh et al., 2015) advantageous for modeling complex distributions in the latent space. Normalizing flows can capture a high degree of multimodality in the latent space through a series of transformations from a simple distribution to a complex data-dependent prior. Ziegler & Rush (2019) apply normalizing flow-based priors in the latent space of unconditional variational autoencoders for discrete distributions and character-level modeling. We propose to leverage normalizing flows to overcome the limitations of existing cross-domain generative models in capturing heterogeneous distributions and introduce a novel semi-supervised Latent Normalizing Flows for Many-to-Many Mappings (LNFMM) framework. We exploit normalizing flows (Dinh et al., 2015) to capture complex joint distributions in the latent space of our model (Fig. 1). Moreover, since the domains under consideration, e.g. images and texts, have different generative processes, the latent representation for each distribution is modeled such that it contains both shared cross-domain information as well as domain-specific information. The latent dimensions constrained by supervised information from paired data model the common (semantic) information across images and texts. The diversity within the image and text distributions, e.g. different visual or textual styles, is encoded in the residual latent dimensions, thus preserving domain-specific variation. We can hence synthesize diverse samples from a distribution given a reference point in the other domain in a many-to-many setup.
We show the benefits of our learned many-to-many latent spaces for real-world image captioning and text-to-image synthesis tasks on the COCO dataset (Lin et al., 2014). Our model outperforms the current state of the art for image captioning w.r.t. the Bleu and CIDEr metrics for accuracy as well as on various diversity metrics. Additionally, we also show improvements in diversity metrics over the state of the art in text-to-image generation.

2 RELATED WORK .

Diverse image captioning. Recent work on image captioning introduces stochastic behavior in captioning and thus encourages diversity by mapping an image to many captions. Vijayakumar et al. (2018) sample captions from a very high-dimensional space based on word-to-word Hamming distance and parts-of-speech information. To overcome the limitation of sampling from a high-dimensional space, Shetty et al. (2017); Dai et al. (2017); Li et al. (2018) build on Generative Adversarial Networks (GANs) and modify the training objective of the generator, matching generated captions to human captions. While GAN-based models can generate diverse captions by sampling from a noise distribution, they suffer on accuracy due to the inability of the model to capture the true underlying distribution. Wang et al. (2017); Aneja et al. (2019), therefore, leverage conditional Variational Autoencoders (cVAEs) to learn latent representations conditioned on images based on supervised information and sequential latent spaces, respectively, to improve accuracy and diversity. Without supervision, cVAEs with conditional Gaussian priors suffer from posterior collapse. This results in a strong trade-off between accuracy and diversity; e.g. Aneja et al. (2019) learn sequential latent spaces with a Gaussian prior to improve diversity, but suffer on perceptual metrics. Moreover, sampling captions based only on supervised information limits the diversity of the captions. In this work we show that by learning complex multimodal priors, we can model text distributions efficiently in the latent space without specific supervised clustering information and generate captions that are more diverse and accurate.

Diverse text-to-image synthesis. State-of-the-art methods for text-to-image synthesis are based on conditional GANs (Reed et al., 2016). Much of the research on text-conditioned image generation has focused on generating high-resolution images similar to the ground truth. Zhang et al. (2017; 2019b) introduce a series of generators in different stages for high-resolution images. AttnGAN (Xu et al., 2018) and MirrorGAN (Qiao et al., 2019) aim at synthesizing fine-grained image features by attending to different words in the text description. Dash et al. (2017) condition image generation on class information in addition to texts. Yin et al. (2019) use a Siamese architecture to generate images with similar high-level semantics but different low-level semantics. In this work, we instead focus on generating diverse images for a given text with powerful latent semantic spaces, unlike GANs with Gaussian priors, which fail to capture the true underlying distributions and result in mode collapse.

Normalizing flows & Variational Autoencoders. Normalizing flows (NF) are a class of density estimation methods that allow exact inference by transforming a complex distribution to a simple distribution using the change-of-variables rule. Dinh et al.
(2015) develop flow-based generative models with affine transformations to make the computation of the Jacobian efficient. Recent works (Dinh et al., 2017; Kingma & Dhariwal, 2018; Ardizzone et al., 2019; Behrmann et al., 2019) extend flow-based generative models to multi-scale architectures to model complex dependencies across dimensions. Vanilla Variational Autoencoders (VAEs; Kingma & Welling, 2014) consider simple Gaussian priors in the latent space. Simple priors can impose very strong constraints, resulting in poor latent representations (Hoffman & Johnson, 2016). Recent work has, therefore, considered modeling complex priors in VAEs. In particular, Wang et al. (2017); Tomczak & Welling (2018) propose mixtures of Gaussians with predefined clusters, Chen et al. (2017) use neural autoregressive model priors, and van den Oord et al. (2017) use discrete models in the latent space, which improves results for image synthesis. Ziegler & Rush (2019) learn a prior based on normalizing flows to model multimodal discrete distributions of character-level texts in the latent space with nonlinear flow layers. However, this invertible layer is difficult to optimize in both directions. Bhattacharyya et al. (2019) learn conditional priors based on normalizing flows to model conditional distributions in the latent space of cVAEs. In this work, we learn a conditional prior using normalizing flows in the latent space of our variational inference model, modeling joint complex distributions in the latent space, particularly of images and texts, for diverse cross-domain many-to-many mappings.

3 METHOD .

To learn joint distributions $p_\mu(x_t, x_v)$ of texts and images that follow distinct generative processes with ground-truth distributions $p_t(x_t)$ and $p_v(x_v)$, respectively, in a semi-supervised setting, we formulate a novel joint generative model based on variational inference: Latent Normalizing Flows for Many-to-Many Mappings (LNFMM). Our model defines a joint probability distribution over the data $\{x_t, x_v\}$ and latent variables $z$ with a distribution $p_\mu(x_t, x_v, z) = p_\mu(x_t, x_v|z)\, p_\mu(z)$, parameterized by $\mu$. We maximize the likelihood of $p_\mu(x_t, x_v)$ using a variational posterior $q_\theta(z|x_t, x_v)$, parameterized by variables $\theta$. As we are interested in jointly modeling distributions with distinct generative processes, e.g. images and texts, the choice of the latent distribution is crucial. Mapping to a shared latent distribution can be very restrictive (Xu et al., 2018). We begin with a discussion of our variational posterior $q_\theta(z|x_t, x_v)$ and its factorization in our LNFMM model, followed by our normalizing flow-based priors, which enable $q_\theta(z|x_t, x_v)$ to be complex and multimodal, allowing for diverse many-to-many mappings.

Factorizing the latent posterior. We choose a novel factorized posterior distribution with both shared and domain-specific components. The shared component $z_s$ is learned with supervision from paired image-text data and encodes information common to both domains. The domain-specific components encode information that is unique to each domain, thus preserving the heterogeneous structure of the data in the latent space. Specifically, consider $z_t$ and $z_v$ as the latent variables used to model the text and image distributions. Recall from above that $z_s$ denotes the shared latent variable for supervised learning, which encodes information shared between the data points $x_t$ and $x_v$.
Given this supervised information, the residual information specific to each domain is encoded in $z'_t$ and $z'_v$. This leads to the factorization of the variational posterior of our LNFMM model, with $z_t = [z_s\; z'_t]$ and $z_v = [z_s\; z'_v]$,
$$\log q_\theta(z_s, z'_t, z'_v | x_t, x_v) = \log q_{\theta_1}(z_s | x_t, x_v) + \log q_{\theta_2}(z'_t | x_t, z_s) + \log q_{\theta_3}(z'_v | x_v, z_s). \qquad (1)$$
Next, we derive our LNFMM model in detail. Since directly maximizing the log-likelihood of $p_\mu(x_t, x_v)$ with the variational posterior is intractable, we derive the log-evidence lower bound for learning the posterior distributions of the latent variables $z = \{z_s, z'_t, z'_v\}$.

3.1 DERIVING THE LOG-EVIDENCE LOWER BOUND .

Maximizing the marginal likelihood $p_\mu(x_t, x_v)$ given a set of observation points $\{x_t, x_v\}$ is generally intractable. Therefore, we develop a variational inference framework that maximizes a variational lower bound on the data log-likelihood, the log-evidence lower bound (ELBO), with the proposed factorization in Eq. (1),
$$\log p_\mu(x_t, x_v) \ge \mathbb{E}_{q_\theta(z|x_t, x_v)}\big[\log p_\mu(x_t, x_v|z)\big] + \mathbb{E}_{q_\theta(z|x_t, x_v)}\big[\log p_\phi(z) - \log q_\theta(z|x_t, x_v)\big], \qquad (2)$$
where $z = \{z_s, z'_t, z'_v\}$ are the latent variables. The first expectation term is the reconstruction error. The second expectation term minimizes the KL-divergence between the variational posterior $q_\theta(z|x_t, x_v)$ and a prior $p_\phi(z)$. Taking into account the factorization in Eq. (1), we now derive the ELBO for our LNFMM model. We first rewrite the reconstruction term as
$$\mathbb{E}_{q_\theta(z_s, z'_t, z'_v | x_t, x_v)}\big[\log p_\mu(x_t | z_s, z'_t, z'_v) + \log p_\mu(x_v | z_s, z'_t, z'_v)\big], \qquad (3)$$
which assumes conditional independence given the domain-specific latent dimensions $z'_t, z'_v$ and the shared latent dimensions $z_s$. Thus, the reconstruction term can be further simplified as
$$\mathbb{E}_{q_{\theta_1}(z_s|x_t, x_v)\, q_{\theta_2}(z'_t|x_t, z_s)}\big[\log p_\mu(x_t | z_s, z'_t)\big] + \mathbb{E}_{q_{\theta_1}(z_s|x_t, x_v)\, q_{\theta_3}(z'_v|x_v, z_s)}\big[\log p_\mu(x_v | z_s, z'_v)\big]. \qquad (4)$$
Next, we simplify the KL-divergence term on the right of Eq. (2). We use the chain rule along with Eq. (1) to obtain
$$D_{KL}\big(q_\theta(z_s, z'_t, z'_v | x_t, x_v)\,\big\|\, p_\phi(z_s, z'_t, z'_v)\big) = D_{KL}\big(q_{\theta_1}(z_s|x_t, x_v)\,\big\|\, p_{\phi_s}(z_s)\big) + D_{KL}\big(q_{\theta_2}(z'_t|x_t, z_s)\,\big\|\, p_{\phi_t}(z'_t|z_s)\big) + D_{KL}\big(q_{\theta_3}(z'_v|x_v, z_s)\,\big\|\, p_{\phi_v}(z'_v|z_s)\big). \qquad (5)$$
This assumes a factorized prior of the form $p_\phi(z_s, z'_t, z'_v) = p_{\phi_s}(z_s)\, p_{\phi_t}(z'_t|z_s)\, p_{\phi_v}(z'_v|z_s)$, consistent with our conditional independence assumptions, given that information specific to each distribution is encoded in $\{z'_t, z'_v\}$. The final ELBO can then be expressed as
$$\log p_\mu(x_t, x_v) \ge \mathbb{E}_{q_{\theta_1}(z_s|x_t, x_v)\, q_{\theta_2}(z'_t|x_t, z_s)}\big[\log p_\mu(x_t | z_s, z'_t)\big] + \mathbb{E}_{q_{\theta_1}(z_s|x_t, x_v)\, q_{\theta_3}(z'_v|x_v, z_s)}\big[\log p_\mu(x_v | z_s, z'_v)\big] - D_{KL}\big(q_{\theta_1}(z_s|x_t, x_v)\,\big\|\, p_{\phi_s}(z_s)\big) - D_{KL}\big(q_{\theta_2}(z'_t|x_t, z_s)\,\big\|\, p_{\phi_t}(z'_t|z_s)\big) - D_{KL}\big(q_{\theta_3}(z'_v|x_v, z_s)\,\big\|\, p_{\phi_v}(z'_v|z_s)\big). \qquad (6)$$
In the standard VAE formulation (Kingma & Welling, 2014), the priors corresponding to $p_{\phi_t}(z'_t|z_s)$ and $p_{\phi_v}(z'_v|z_s)$ are modeled as standard normal distributions. However, Gaussian priors limit the expressiveness of the model in the latent space since they place strong constraints on the posterior (Tomczak & Welling, 2018; Razavi et al., 2019; Ziegler & Rush, 2019). Specifically, optimizing with a Gaussian prior pushes the posterior distribution towards the mean, limiting diversity and hence generative power (Tomczak & Welling, 2018).
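To make the factorized posterior of Eq. (1) concrete, here is a minimal PyTorch-style sketch of the three Gaussian encoders $q_{\theta_1}$, $q_{\theta_2}$, $q_{\theta_3}$ with reparameterized sampling; the module name, the plain linear encoders, and the feature dimensions are our own illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class FactorizedPosterior(nn.Module):
    """Gaussian encoders for the factorization in Eq. (1):
    q(z_s | x_t, x_v) q(z'_t | x_t, z_s) q(z'_v | x_v, z_s)."""
    def __init__(self, d_t=512, d_v=2048, d_s=64, d_res=64):
        super().__init__()
        self.enc_s = nn.Linear(d_t + d_v, 2 * d_s)    # shared component z_s
        self.enc_t = nn.Linear(d_t + d_s, 2 * d_res)  # text-specific residual z'_t
        self.enc_v = nn.Linear(d_v + d_s, 2 * d_res)  # image-specific residual z'_v

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization

    def forward(self, x_t, x_v):
        z_s = self.sample(self.enc_s(torch.cat([x_t, x_v], -1)))
        z_t_res = self.sample(self.enc_t(torch.cat([x_t, z_s], -1)))
        z_v_res = self.sample(self.enc_v(torch.cat([x_v, z_s], -1)))
        return z_s, z_t_res, z_v_res
```

Samples from these encoders are what the reconstruction and KL terms of the ELBO in Eq. (6) are evaluated on.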
This collapse towards the prior mean is especially problematic for complex multimodal image and text distributions. Furthermore, alternatives like Gaussian mixture model-based priors (Wang et al., 2017) also suffer from similar drawbacks and additionally depend on predefined heuristics like the number of components in the mixture model. Analogously, the VampPrior (Tomczak & Welling, 2018) depends on a predefined number of pseudo-inputs to learn the prior in the latent space. Similar to Ziegler & Rush (2019); Bhattacharyya et al. (2019), which learn priors based on exact inference models, we propose to learn the conditional priors $p_{\phi_t}(z'_t|z_s)$ and $p_{\phi_v}(z'_v|z_s)$ jointly with the variational posterior in Eq. (1) using normalizing flows.
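A flow-based conditional prior of this kind can be assembled from affine coupling layers in the style of Dinh et al. (2017); the sketch below shows one such layer conditioned on $z_s$, with the hidden size and the tanh-bounded scale being our assumptions. Stacking several layers, with the halves permuted in between, would give $p_{\phi_t}(z'_t|z_s)$ or $p_{\phi_v}(z'_v|z_s)$.

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """One affine coupling layer conditioned on z_s: half of z is transformed
    with a scale and shift predicted from the other half and z_s."""
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)))

    def forward(self, z, z_s):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(torch.cat([z1, z_s], -1)).chunk(2, -1)
        s = torch.tanh(s)                 # keep scales numerically well-behaved
        z2 = z2 * s.exp() + t
        log_det = s.sum(-1)               # log |det Jacobian| of the transform
        return torch.cat([z1, z2], -1), log_det
```

The accumulated log-determinants are what make the prior log-density in the KL terms of Eq. (6) exactly computable.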
This paper addresses the problem of many-to-many cross-domain mapping tasks (such as captioning or text-to-image synthesis). It proposes a double variational auto-encoder architecture mapping data to a factored latent representation with both shared and domain-specific components. The proposed model makes use of normalizing flow-based priors to enrich the latent representation and of an invertible network for ensuring the consistency of the shared component across the two autoencoders. Experiments are thorough and demonstrate results that are competitive with or better than the state of the art.
SP:22065b789e9ea434dcfae0443f24f1bbd95e116f
Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings
1 INTRODUCTION Joint image-text representations find application in cross-domain tasks such as imageconditioned text generation ( captioning ; Mao et al. , 2015 ; Karpathy & Fei-Fei , 2017 ; Xu et al. , 2018 ) and text-conditioned image synthesis ( Reed et al. , 2016 ) . Yet , image and text distributions follow distinct generative processes , making joint generative modeling of the two distributions challenging . Current state-of-the-art models for learning joint image-text distributions encode the two domains in a common shared latent space in a fully supervised setup ( Gu et al. , 2018 ; Wang et al. , 2019 ) . While such approaches can model supervised information in the shared latent space , they do not preserve domain-specific information . However , as the domains under consideration , e.g . images and texts , follow distinct generative processes , many-to-many mappings naturally emerge – there are many likely captions for a given image and vice versa . Therefore , it is crucial to also encode domain-specific variations in the latent space to enable many-to-many mappings . State-of-the-art models for cross-domain synthesis leverage conditional variational autoencoders ( VAEs , cVAEs ; Kingma & Welling , 2014 ) or generative adversarial networks ( GANs ; Goodfellow et al. , 2014 ) for learning conditional distributions . However , such generative models ( e.g. , Wang et al. , 2017 ; Aneja et al. , 2019 ) enforce a Gaussian prior in the latent space . Gaussian priors can result in strong regularization or posterior collapse as they impose stringent constraints while modeling complex distributions in the latent space ( Tomczak & Welling , 2018 ) . This severely limits the accuracy and diversity of the cross-domain generative model . Recent work ( Ziegler & Rush , 2019 ; Bhattacharyya et al. , 2019 ) has found normalizing flows ( Dinh et al. , 2015 ) advantageous for modeling complex distributions in the latent space . Normalizing flows can capture a high degree of multimodality in the latent space through a series of transformations from a simple distribution to a complex data-dependent prior . Ziegler & Rush ( 2019 ) apply normalizing flow-based priors in the latent space of unconditional variational autoencoders for discrete distributions and character-level modeling . We propose to leverage normalizing flows to overcome the limitations of existing cross-domain generative models in capturing heterogeneous distributions and introduce a novel semi-supervised Latent Normalizing Flows for Many-to-Many Mappings ( LNFMM ) framework . We exploit normalizing flows ( Dinh et al. , 2015 ) to capture complex joint distributions in the latent space of our model ( Fig . 1 ) . Moreover , since the domains under consideration , e.g . images and texts , have different generative processes , the latent representation for each distribution is modeled such that it contains both shared cross-domain information as well as domain-specific information . The latent dimensions constrained by supervised information from paired data model the common ( semantic ) information across images and texts . The diversity within the image and text distributions , e.g . different visual or textual styles , are encoded in the residual latent dimensions , thus preserving domain-specific variation . We can hence synthesize diverse samples from a distribution given a reference point in the other domain in a many-to-many setup . 
We show the benefits of our learned many-to-many latent spaces for real-world image captioning and text-to-image synthesis tasks on the COCO dataset ( Lin et al. , 2014 ) . Our model outperforms the current state of the art for image captioning w.r.t . the Bleu and CIDEr metrics for accuracy as well as on various diversity metrics . Additionally , we also show improvements in diversity metrics over the state of the art in text-to-image generation . 2 RELATED WORK . Diverse image captioning . Recent work on image captioning introduces stochastic behavior in captioning and thus encourages diversity by mapping an image to many captions . Vijayakumar et al . ( 2018 ) sample captions from a very high-dimensional space based on word-to-word Hamming distance and parts-of-speech information , respectively . To overcome the limitation of sampling from a high-dimensional space , Shetty et al . ( 2017 ) ; Dai et al . ( 2017 ) ; Li et al . ( 2018 ) build on Generative Adversarial Networks ( GANs ) and modify the training objective of the generator , matching generated captions to human captions . While GAN-based models can generate diverse captions by sampling from a noise distribution , they suffer on accuracy due to the inability of the model to capture the true underlying distribution . Wang et al . ( 2017 ) ; Aneja et al . ( 2019 ) , therefore , leverage conditional Variational Autoencoders ( cVAEs ) to learn latent representations conditioned on images based on supervised information and sequential latent spaces , respectively , to improve accuracy and diversity . Without supervision , cVAEs with conditional Gaussian priors suffer from posterior collapse . This results in a strong trade-off between accuracy and diversity ; e.g . Aneja et al . ( 2019 ) learn sequential latent spaces with a Gaussian prior to improve diversity , but suffer on perceptual metrics . Moreover , sampling captions based only on supervised information limits the diversity in the captions . In this work we show that by learning complex multimodal priors , we can model text distributions efficiently in the latent space without specific supervised clustering information and generate captions that are more diverse and accurate . Diverse text-to-image synthesis . State-of-the-art methods for text-to-image synthesis are based on conditional GANs ( Reed et al. , 2016 ) . Much of the research for text-conditioned image generation has focused on generating high-resolution images similar to the ground truth . Zhang et al . ( 2017 ; 2019b ) introduce a series of generators in different stages for high-resolution images . AttnGAN ( Xu et al. , 2018 ) and MirrorGAN ( Qiao et al. , 2019 ) aim at synthesizing fine-grained image features by attending to different words in the text description . Dash et al . ( 2017 ) condition image generation on class information in addition to texts . Yin et al . ( 2019 ) use a Siamese architecture to generate images with similar high-level semantics but different low-level semantics.In this work , we instead focus on generating diverse images for a given text with powerful latent semantic spaces , unlike GANs with Gaussian priors , which fail to capture the true underlying distributions and result in mode collapse . Normalizing flows & Variational Autoencoders . Normalizing flows ( NF ) are a class of density estimation methods that allow exact inference by transforming a complex distribution to a simple distribution using the change-of-variables rule . Dinh et al . 
( 2015 ) develop flow-based generative models with affine transformations to make the computation of the Jacobian efficient . Recent works ( Dinh et al. , 2017 ; Kingma & Dhariwal , 2018 ; Ardizzone et al. , 2019 ; Behrmann et al. , 2019 ) extend flow-based generative models to multi-scale architectures to model complex dependencies across dimensions . Vanilla Variational Autoencoders ( VAEs ; Kingma & Welling , 2014 ) consider simple Gaussian priors in the latent space . Simple priors can provide very strong constraints , resulting in poor latent representations ( Hoffman & Johnson , 2016 ) . Recent work has , therefore , considered modeling complex priors in VAEs . Particularly , Wang et al . ( 2017 ) ; Tomczak & Welling ( 2018 ) propose mixtures of Gaussians with predefined clusters , Chen et al . ( 2017 ) use neural autoregressive model priors , and van den Oord et al . ( 2017 ) use discrete models in the latent space , which improves results for image synthesis . Ziegler & Rush ( 2019 ) learn a prior based on normalizing flows to model multimodal discrete distributions of character-level texts in the latent spaces with nonlinear flow layers . However , this invertible layer is difficult to be optimized in both directions . Bhattacharyya et al . ( 2019 ) learn conditional priors based on normalizing flows to model conditional distributions in the latent space of cVAEs . In this work , we learn a conditional prior using normalizing flows in the latent space of our variational inference model , modeling joint complex distributions in the latent space , particularly of images and texts for diverse cross-domain many-to-many mappings . 3 METHOD . To learn joint distributions pµ ( xt , xv ) of texts and images that follow distinct generative processes with ground-truth distributions pt ( xt ) and pv ( xv ) , respectively , in a semi-supervised setting , we formulate a novel joint generative model based on variational inference : Latent Normalizing Flows for Many-to-Many Mappings ( LNFMM ) . Our model defines a joint probability distribution over the data { xt , xv } and latent variables z with a distribution pµ ( xt , xv , z ) = pµ ( xt , xv|z ) pµ ( z ) , parameterized by µ . We maximize the likelihood of pµ ( xt , xv ) using a variational posterior qθ ( z|xt , xv ) , parameterized by variables θ . As we are interested in jointly modeling distributions with distinct generative processes , e.g . images and texts , the choice of the latent distribution is crucial . Mapping to a shared latent distribution can be very restrictive ( Xu et al. , 2018 ) . We begin with a discussion of our variational posterior qθ ( z|xt , xv ) and its the factorization in our LNFMM model , followed by our normalizing flow-based priors , which enable qθ ( z|xt , xv ) to be complex and multimodal , allowing for diverse many-to-many mappings . Factorizing the latent posterior . We choose a novel factorized posterior distribution with both shared and domain-specific components . The shared component zs is learned with supervision from paired image-text data and encodes information common to both domains . The domain-specific components encode information that is unique to each domain , thus preserving the heterogeneous structure of the data in the latent space . Specifically , consider zt and zv as the latent variables to model text and image distributions . Recall from above that zs denotes the shared latent variable for supervised learning , which encodes information shared between the data points xt and xv . 
Given this supervised information , the residual information specific to each domain is encoded in z′t and z ′ v . This leads to the factorization of the variational posterior of our LNFMM model with zt = [ zs z′t ] and zv = [ zs z′v ] , log qθ ( zs , z ′ t , z ′ v|xt , xv ) = log qθ1 ( zs|xt , xv ) + log qθ2 ( z′t|xt , zs ) + log qθ3 ( z′v|xv , zs ) . ( 1 ) Next , we derive our LNFMM model in detail . Since directly maximizing the log-likelihood of pµ ( xt , xv ) with the variational posterior is intractable , we derive the log-evidence lower bound for learning the posterior distributions of the latent variables z = { zs , z′t , z′v } . 3.1 DERIVING THE LOG-EVIDENCE LOWER BOUND . Maximizing the marginal likelihood pµ ( xt , xv ) given a set of observation points { xt , xv } is generally intractable . Therefore , we develop a variational inference framework that maximizes a variational lower bound on the data log-likelihood – the log-evidence lower bound ( ELBO ) with the proposed factorization in Eq . ( 1 ) , log pµ ( xt , xv ) ≥ Eqθ ( z|xt , xv ) [ log pµ ( xt , xv|z ) ] + Eqθ ( z|xt , xv ) [ log pφ ( z ) − log qθ ( z|xt , xv ) ] , ( 2 ) where z = { zs , z′t , z′v } are the latent variables . The first expectation term is the reconstruction error . The second expectation term minimizes the KL-divergence between the variational posterior qθ ( z|xt , xv ) and a prior pφ ( z ) . Taking into account the factorization in Eq . ( 1 ) , we now derive the ELBO for our LNFMM model . We first rewrite the reconstruction term as Eqθ ( zs , z′t , z′v|xt , xv ) [ log pµ ( xt|zs , z′t , z′v ) + log pµ ( xv|zs , z′t , z′v ) ] , ( 3 ) which assumes conditional independence given the domain-specific latent dimensions z′t , z ′ v and the shared latent dimensions zs . Thus , the reconstruction term can be further simplified as Eqθ1 ( zs|xt , xv ) qθ2 ( z′t|xt , zs ) [ log pµ ( xt|zs , z′t ) ] + Eqθ1 ( zs|xt , xv ) qθ3 ( z′v|xv , zs ) [ log pµ ( xv|zs , z′v ) ] . ( 4 ) Next , we simplify the KL-divergence term on the right of Eq . ( 2 ) . We use the chain rule along with Eq . ( 1 ) to obtain DKL ( qθ ( zs , z ′ t , z ′ v|xt , xv ) ∥∥ pφ ( zs , z′t , z′v ) ) = DKL ( qθ1 ( zs|xt , xv ) ∥∥ pφs ( zs ) ) + DKL ( qθ2 ( z ′ t|xt , zs ) ∥∥ pφt ( z′t|zs ) ) +DKL ( qθ3 ( z′v|xv , zs ) ∥∥ pφv ( z′v|zs ) ) . ( 5 ) This assumes a factorized prior of the form pφ ( zs , z′t , z ′ v ) = pφs ( zs ) pφt ( z ′ t|zs ) pφv ( z′v|zs ) , consistent with our conditional independence assumptions , given that information specific to each distribution is encoded in { z′t , z′v } . The final ELBO can then be expressed as log pµ ( xt , xv ) ≥ Eqθ1 ( zs|xt , xv ) qθ2 ( z′t|xt , zs ) [ log pµ ( xt|zs , z′t ) ] +Eqθ1 ( zs|xt , xv ) qθ3 ( z′v|xv , zs ) [ log pµ ( xv|zs , z′v ) ] −DKL ( qθ1 ( zs|xt , xv ) ∥∥ pφs ( zs ) ) −DKL ( qθ2 ( z ′ t|xt , zs ) ∥∥ pφt ( z′t|zs ) ) −DKL ( qθ3 ( z′v|xv , zs ) ∥∥ pφv ( z′v|zs ) ) . ( 6 ) In the standard VAE formulation ( Kingma & Welling , 2014 ) , the priors corresponding to pφt ( z ′ t|zs ) and pφv ( z ′ v|zs ) are modeled as standard normal distributions . However , Gaussian priors limit the expressiveness of the model in the latent space since they result in strong constraints on the posterior ( Tomczak & Welling , 2018 ; Razavi et al. , 2019 ; Ziegler & Rush , 2019 ) . Specifically , optimizing with a Gaussian prior pushes the posterior distribution towards the mean , limiting diversity and hence generative power ( Tomczak & Welling , 2018 ) . 
This is especially true for complex multimodal image and text distributions . Furthermore , alternatives like Gaussian mixture model-based priors ( Wang et al. , 2017 ) also suffer from similar drawbacks and additionally depend on predefined heuristics like the number of components in the mixture model . Analogously , the VampPrior ( Tomczak & Welling , 2018 ) depends on a predefined number of pseudo-inputs to learn the prior in the latent space . Similar to Ziegler & Rush ( 2019 ) ; Bhattacharyya et al . ( 2019 ) , which learn priors based on exact inference models , we propose to learn the conditional priors pφt ( z ′ t|zs ) and pφv ( z′v|zs ) jointly with the variational posterior in Eq . ( 1 ) using normalizing flows .
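To make the structure of this objective concrete , the sketch below shows one way the factorized ELBO of Eq . ( 6 ) could be assembled , with Gaussian posteriors for qθ1 , qθ2 , qθ3 , a standard-normal prior on zs , and conditional normalizing-flow priors for z′t and z′v . All module names , shapes , and the single-sample Monte Carlo estimate of the flow-prior KL terms are illustrative assumptions , not the authors ' implementation .

```python
# Illustrative sketch only: encoders, decoders, and flow priors are assumed
# torch.nn.Module-style objects; none of these names come from the paper's code.
import torch
from torch.distributions import Normal, kl_divergence

def lnfmm_elbo(x_t, x_v, enc_s, enc_t, enc_v, dec_t, dec_v, flow_t, flow_v):
    # q_theta1(z_s | x_t, x_v): shared posterior from the paired text/image sample
    mu_s, std_s = enc_s(x_t, x_v)
    q_s = Normal(mu_s, std_s)
    z_s = q_s.rsample()

    # q_theta2(z'_t | x_t, z_s) and q_theta3(z'_v | x_v, z_s): domain-specific posteriors
    mu_t, std_t = enc_t(x_t, z_s)
    q_t = Normal(mu_t, std_t)
    z_t = q_t.rsample()
    mu_v, std_v = enc_v(x_v, z_s)
    q_v = Normal(mu_v, std_v)
    z_v = q_v.rsample()

    # Reconstruction terms of Eq. (4): each decoder conditions on [z_s, domain-specific code]
    rec_t = dec_t.log_prob(x_t, torch.cat([z_s, z_t], dim=-1))
    rec_v = dec_v.log_prob(x_v, torch.cat([z_s, z_v], dim=-1))

    # KL terms of Eq. (5): closed form against N(0, I) for z_s; single-sample Monte Carlo
    # estimates against the conditional flow priors p(z'_t | z_s) and p(z'_v | z_s)
    kl_s = kl_divergence(q_s, Normal(torch.zeros_like(mu_s), torch.ones_like(std_s))).sum(-1)
    kl_t = q_t.log_prob(z_t).sum(-1) - flow_t.log_prob(z_t, context=z_s)
    kl_v = q_v.log_prob(z_v).sum(-1) - flow_v.log_prob(z_v, context=z_s)

    # Eq. (6): the ELBO to be maximized (its negation would be minimized in training)
    return (rec_t + rec_v - kl_s - kl_t - kl_v).mean()
```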
The paper introduces a variational model for text-to-image and image-to-text mappings. The novelty lies in separating the modeling of text and image latent representations on one hand and the modeling of a shared content representation on the other hand. Priors for text, image and shared representations are generated through an invertible flow model. The motivation for this is to allow for complex priors. Training for the shared component is supervised using aligned text and image data, while training for the residual text and image components is unsupervised. Experiments are performed for text and image generation, using training data from the COCO dataset.
SP:22065b789e9ea434dcfae0443f24f1bbd95e116f
Topology-Aware Pooling via Graph Attention
Pooling operations have been shown to be effective on various tasks in computer vision and natural language processing . One challenge of performing pooling operations on graph data is the lack of well-defined locality on graphs . Previous studies used global ranking methods to sample some of the important nodes , but most of them are not able to incorporate graph topology information in computing ranking scores . In this work , we propose the topology-aware pooling ( TAP ) layer that uses attention operators to generate ranking scores for each node by attending each node to its neighboring nodes . The ranking scores are generated locally while the selection is performed globally , which enables the pooling operation to consider topology information . To encourage better graph connectivity in the sampled graph , we propose to add a graph connectivity term to the computation of ranking scores in the TAP layer . Based on our TAP layer , we develop a network on graph data , known as the topology-aware pooling network . Experimental results on graph classification tasks demonstrate that our methods achieve consistently better performance than previous models . 1 INTRODUCTION . Pooling operations have been widely applied in various fields such as computer vision ( He et al. , 2016 ; Huang et al. , 2017 ) , and natural language processing ( Zhang et al. , 2015 ) . Pooling operations can effectively reduce dimensional sizes ( Simonyan & Zisserman , 2015 ) and enlarge receptive fields ( Chen et al. , 2016 ) . The application of regular pooling operations depends on the well-defined spatial locality in grid-like data such as images and texts . However , it is still challenging to perform pooling operations on graph data . In particular , there is no spatial locality or order information among nodes ( Gao et al. , 2018 ) . Some works try to overcome this limitation with two kinds of methods ; those are node clustering ( Ying et al. , 2018 ) and primary nodes sampling ( Gao & Ji , 2019 ; Zhang et al. , 2018 ) . The node clustering methods create graphs with super-nodes by learning a node assignment matrix . These methods suffer from the over-fitting problem and need auxiliary link prediction tasks to stabilize the training ( Ying et al. , 2018 ) . The primary nodes sampling methods like top-k pooling ( Gao & Ji , 2019 ) rank the nodes in a graph and sample top-k nodes to form the sampled graph . They use a small number of additional trainable parameters and are shown to be more powerful ( Gao & Ji , 2019 ) . However , the top-k pooling layer does not explicitly incorporate the topology information in a graph when computing ranking scores , which may cause performance loss . In this work , we propose a novel topology-aware pooling ( TAP ) layer that explicitly encodes the topology information when computing ranking scores . We use an attention operator to compute similarity scores between each node and its neighboring nodes . The average similarity score of a node is used as its ranking score in the selection process . To avoid the isolated-nodes problem in our TAP layer , we further propose a graph connectivity term for computing the ranking scores of nodes . The graph connectivity term uses degree information as a bias term to encourage the layer to select highly connected nodes to form the sampled graph . Based on the TAP layer , we develop topology-aware pooling networks for network embedding learning .
Experimental results on graph classification tasks demonstrate that our proposed networks with the TAP layer consistently outperform previous models . The comparison results between our TAP layer and other pooling layers based on the same network architecture demonstrate the effectiveness of our method compared to other pooling methods . 2 BACKGROUND AND RELATED WORK . In this section , we describe graph pooling operations and attention operators . 2.1 GRAPH POOLING OPERATIONS . The pooling operations on graph data mainly include two categories ; those are node clustering and node sampling . DIFFPOOL ( Ying et al. , 2018 ) realizes the graph pooling operation by clustering nodes into super-nodes . By learning an assignment matrix , DIFFPOOL softly assigns each node to different clusters in the new graph with specified probabilities . The pooling operations under this category retain and encode all node information into the new graph . One challenge of methods in this category is that they may increase the risk of over-fitting by training another network to learn the assignment matrix . In addition , the new graph is mostly connected , where each edge value represents the strength of connectivity between two nodes . The connectivity pattern in the new graph may greatly differ from that of the original graph . The node sampling methods mainly select a fixed number k of the most important nodes to form a new graph . In SortPool ( Zhang et al. , 2018 ) , the same feature of each node is used for ranking and the k nodes with the largest values in this feature are selected to form the coarsened graph . Top-k pooling ( Gao & Ji , 2019 ) generates the ranking scores by using a trainable projection vector that projects feature vectors of nodes into scalar values . The k nodes with the largest scalar values are selected to form the coarsened graph . These methods involve no or very few extra trainable parameters , thereby avoiding the risk of over-fitting . However , these methods suffer from the limitation that they do not explicitly consider the topology information during pooling . Both SortPool and top-k pooling select nodes based on scalar values that do not explicitly incorporate topology information . In this work , we propose a pooling operation that explicitly encodes topology information in ranking scores , thereby leading to an improved operation . 2.2 ATTENTION OPERATORS . The attention operator has been shown to be effective in challenging tasks in various fields such as computer vision ( Xu et al. , 2015b ; Lu et al. , 2016 ; Li et al. , 2018 ) and natural language processing ( Malinowski et al. , 2018 ; Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) . The attention operator is capable of capturing long-range relationships , thereby leading to better performance ( Wang et al. , 2018 ) . The inputs to an attention operator consist of three matrices ; those are a query matrix Q ∈ R^{d×m} , a key matrix K ∈ R^{d×n} , and a value matrix V ∈ R^{p×n} . The attention operator computes the response of each query vector in Q by attending it to all key vectors in K. It uses the resulting coefficient vector to take a weighted sum over value vectors in V . The layer-wise operation of an attention operator is defined as O = V softmax ( KᵀQ ) . When the attention operator is applied to graphs , each node only attends to its neighboring nodes ( Veličković et al. , 2017 ) . Self-attention can also produce an attention mask to control information flow on selected nodes in the pooling operation ( Lee et al. , 2019 ) .
In our proposed pooling operation , we employ an attention operator to compute ranking scores that explicitly encode topology information . 3 TOPOLOGY-AWARE POOLING LAYERS AND NETWORKS . In this work , we propose the topology-aware pooling ( TAP ) layer that uses attention operators to encode topology information in ranking scores for node selection . We also propose a graph connectivity term in the computation of ranking scores , which encourages better graph connectivity in the coarsened graph . Based on our TAP layer , we propose the topology-aware pooling networks for network representation learning . 3.1 TOPOLOGY-AWARE POOLING LAYER . Pooling layers have been shown to be important on grid-like data with regard to reducing feature map sizes and enlarging receptive fields ( Yu & Koltun , 2016 ; Carreira et al. , 2012 ) . On graph data , two kinds of pooling layers have been proposed ; those are node clustering ( Ying et al. , 2018 ) and primary nodes sampling ( Gao & Ji , 2019 ; Zhang et al. , 2018 ) . A primary nodes sampling method , known as top-k pooling ( Gao & Ji , 2019 ) , uses a projection vector to generate ranking scores for each node in the graph . The graph is created by choosing nodes with the k largest scores . However , the sampling process relies on the projection vector and does not explicitly consider the topology information in the graph . In this section , we propose the topology-aware pooling ( TAP ) layer that performs primary nodes sampling by considering the graph topology . In this layer , we generate the ranking scores based on local information . To this end , we employ an attention operator to compute the similarity scores between each node and its neighboring nodes . The ranking score for a node i is the mean value of the similarity scores with its neighboring nodes . The resulting ranking score for a node indicates the similarity between this node and its neighboring nodes . If a node has a high ranking score , it is highly representative of the local graph that consists of it and its neighboring nodes . By choosing nodes with the highest ranking scores , we can retain the maximum information in the sampled graph . Suppose there are N nodes in a graph G , each of which contains C features . In layer ℓ , we use two matrices to represent the graph ; those are the adjacency matrix A^(ℓ) ∈ R^{N×N} and the feature matrix X^(ℓ) ∈ R^{N×C} . The non-zero entries in A^(ℓ) represent edges in the graph . The ith row in X^(ℓ) denotes the feature vector of node i . The layer-wise forward propagation rule of TAP in layer ℓ is defined as
K = X^(ℓ) W^(ℓ) ∈ R^{N×C} , ( 1 )
E = X^(ℓ) Kᵀ ∈ R^{N×N} , ( 2 )
Ẽ = E ◦ A^(ℓ) ∈ R^{N×N} , ( 3 )
d = ∑_{j=1}^{N} A^(ℓ)_{:j} ∈ R^N , ( 4 )
s = sigmoid ( ( ∑_{j=1}^{N} Ẽ_{:j} ) / d ) ∈ R^N , ( 5 )
idx = Ranking_k ( s ) ∈ R^k , ( 6 )
A^(ℓ+1) = A^(ℓ) ( idx , idx ) ∈ R^{k×k} , ( 7 )
X^(ℓ+1) = X^(ℓ) ( idx , : ) diag ( s ( idx ) ) ∈ R^{k×C} , ( 8 )
where W^(ℓ) ∈ R^{C×C} is a trainable weight matrix , A^(ℓ)_{:j} is the jth column of matrix A^(ℓ) , ◦ denotes element-wise matrix multiplication , Ẽ_{:j} is the jth column of matrix Ẽ , k is the number of nodes selected in the sampled graph , and diag ( · ) constructs a diagonal matrix using the input vector as diagonal elements . The Ranking_k operator ranks the scores and returns the indices of the k largest values in s . To compute attention scores , we perform a linear transformation on the feature matrix X^(ℓ) in Eq . ( 1 ) , which results in the key matrix K. We use the input feature matrix as the query matrix .
The similarity score matrix E is obtained by the matrix multiplication between X^(ℓ) and K in Eq . ( 2 ) . Each value e_{ij} in E measures the similarity between node i and node j . Since E contains similarity scores for nodes that are not directly connected , we use the adjacency matrix A^(ℓ) as a mask to set these entries in E to zeros in Eq . ( 3 ) , resulting in Ẽ . We compute the degree of each node in Eq . ( 4 ) . The ranking score of a node is computed in Eq . ( 5 ) by taking the average of the similarity scores between this node and its neighboring nodes followed by a sigmoid operation . Here , we perform element-wise division between two vectors . The resulting score vector is s = [ s_1 , s_2 , . . . , s_N ]ᵀ where s_i represents the ranking score of node i . Ranking_k is an operator that selects the k largest values and returns the corresponding indices . In Eq . ( 6 ) , we use Ranking_k to select the k most important nodes with indices in idx . Using idx , we extract the new adjacency matrix A^(ℓ+1) in Eq . ( 7 ) and the new feature matrix X^(ℓ+1) in Eq . ( 8 ) . Here , we use the ranking scores s ( idx ) as gates to control information flow and enable gradient back-propagation for the trainable transformation matrix W^(ℓ) ( Gao & Ji , 2019 ) . This method can be considered as a local-voting , global-ranking process . In our TAP layer , the ranking scores are derived from the similarity scores of each node with its neighboring nodes , thereby encoding the topology information of each node in its ranking score . This can be considered as a local voting process in which each node gets its votes from the local neighborhood . When performing global ranking , the nodes that get the highest votes from local neighborhoods are selected such that maximum information in the graph can be retained . Figure 1 provides an illustration of our proposed TAP layer . Compared to top-k pooling ( Gao & Ji , 2019 ) , our TAP layer considers topology information in the graph , thereby leading to a better coarsened graph .
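A compact sketch of Eqs . ( 1 ) - ( 8 ) on a dense adjacency matrix is given below ; the function and variable names are ours , and the clamp on the degree vector is an added guard against isolated nodes that the equations themselves do not include .

```python
import torch

def tap_pool(X, A, W, k):
    # X: N x C node features, A: N x N adjacency, W: C x C trainable weights, k: nodes to keep.
    K = X @ W                                    # Eq. (1): key matrix
    E = X @ K.t()                                # Eq. (2): pairwise similarity scores
    E_masked = E * A                             # Eq. (3): keep scores of connected pairs only
    d = A.sum(dim=1)                             # Eq. (4): node degrees
    s = torch.sigmoid(E_masked.sum(dim=1) / d.clamp(min=1.0))  # Eq. (5): mean neighbor similarity
    idx = torch.topk(s, k).indices               # Eq. (6): global ranking, select top-k nodes
    A_new = A[idx][:, idx]                       # Eq. (7): induced adjacency of the coarsened graph
    X_new = X[idx] * s[idx].unsqueeze(-1)        # Eq. (8): gate selected features by their scores
    return X_new, A_new, idx
```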
This paper proposes a topology-aware pooling method on graph data, which explicitly encodes the topology information when computing ranking scores. More specifically, the proposed method uses an attention operator to compute similarity scores between each node and its neighboring nodes, and then uses the average similarity score of each node as the ranking score in the node selection process. This topology-aware pooling technique can be applied to graph neural networks on downstream tasks such as graph classification. Experimental results demonstrate the effectiveness of the proposed method, which outperforms previous state-of-the-art models consistently.
SP:be7c42be6523a9923bf4701c9854816d0d8d2494
Topology-Aware Pooling via Graph Attention
This paper presents a new pooling method for learning graph-level embeddings. The key idea is to use the initial node attributes to compute all-pair attention scores for each node pair and then use these attention scores to formulate a new graph adjacency matrix beyond the original raw graph adjacency matrix. As a result, each node can average these attention score edges to compute the overall importance. Based on these scores, the method chooses top-k nodes to perform the graph coarsening operation. In addition, a graph connectivity term is proposed to address the problem of isolated nodes. Experiments are performed to validate the effectiveness of the proposed method.
SP:be7c42be6523a9923bf4701c9854816d0d8d2494
RGTI: Response Generation via Templates Integration for End-to-End Dialog
End-to-end models have achieved considerable success in the task-oriented dialogue area , but suffer from the challenges of ( a ) poor semantic control , and ( b ) little interaction with auxiliary information . In this paper , we propose a novel yet simple end-to-end model for response generation via mixed templates , which addresses the above challenges . In our model , we retrieve candidate responses , which contain abundant syntactic and sequence information , using dialogue semantic information related to the dialogue history . Then , we exploit candidate response attention to get templates which should be mentioned in the response . Our model can integrate multi-template information to guide the decoder module in generating better responses . We show that our proposed model learns useful template information , which improves the performance of ” how to say ” and ” what to say ” in response generation . Experiments on the large-scale MultiWOZ dataset demonstrate the effectiveness of our proposed model , which attains state-of-the-art performance . 1 INTRODUCTION . Task-oriented dialogue aims to help users complete a task in a specific field such as restaurant reservation or booking film tickets . The traditional approach is to design pipeline architectures which have several modules : natural language understanding , dialogue manager and natural language generation ( Wen et al. , 2017 ) . It ’ s easy to control , but the dialogue system becomes more and more complicated . With the development of deep learning , end-to-end methods have shown hopeful results and received great attention in academia . They input user queries and generate system responses , which is relatively simple . However , the disadvantage is that it is difficult to control the generated results of end-to-end approaches . Task-oriented responses should have correct entities and grammatical expressions , which means solving the problems of ” what to say ” and ” how to say ” . If a user asks for a restaurant of moderate price range , a good task-oriented dialogue system should return a response with the right restaurants , whose prices can be neither high nor low , and use proper wording with a clear and unambiguous expression . Researchers obtain the right entities by looking up the knowledge base ( KB ) . Sukhbaatar et al . ( 2015 ) introduce the KB in the form of hidden states . Madotto et al . ( 2018 ) use attention and a copy mechanism to produce words from the KB . Most research on generating smooth responses has been carried out using templates . However , templates are usually designed by domain experts in advance ( Walker et al. , 2007b ) , so they incur huge cost and are difficult to transfer to different domains . Wen et al . ( 2015 ) use the Semantically Controlled LSTM ( SC-LSTM ) to control the semantic results of language generation . Su et al . ( 2018 ) propose a hierarchical architecture using linguistic patterns to improve response generation . These models , though achieving good performance , suffer from problems such as fixed templates or poor controllability of semantics . In order to alleviate such issues , we propose an end-to-end response generation model via templates integration ( RGTI ) , which is composed of an encoder that encodes the retrieved candidate responses in triple form into a template representation and a decoder that integrates templates to generate the target response .
Instead of encoding the templates directly , we construct a hierarchical encoding structure to make the templates contain semantic information and sequence information . During the decoding phase , we exploit a mixed decoder over templates and dialogue history , introducing a copy mechanism to generate better responses . We empirically show that RGTI can achieve advanced performance using the triple encoder and mixed decoder . On the human-human multi-domain dialogue dataset of Budzianowski et al . ( 2018 ) , RGTI is able to surpass the previous state-of-the-art on automatic evaluation , which further confirms the effectiveness of our proposed encoder-decoder model . 2 OUR MODEL . We now describe the RGTI framework composed of two parts : an encoder for templates in triple form as well as the dialogue history , and a mixed decoder with generation and copy modes , as shown in Figure 1 . The dialogue history X = ( x1 , ... , xn ) is the input , and the system response Y = ( y1 , ... , ym ) is the expected output , where n , m are the corresponding lengths . We first retrieve candidate responses relevant to the target responses . Then the encoder block uses multi-head attention to encode the candidate responses in triple form into a template representation . Next , the copy-augmented decoder uses a gating mechanism for word selection from two sources , utterances and templates , while generating a target response . 2.1 RETRIEVAL REFERENCE RESPONSE . First , we retrieve reference responses from the training set . The benefit of this approach is that we can get more semantically coherent responses . To obtain better reference responses , we use the dialogue state , dialogue act and semantic information of responses as search criteria . ElasticSearch is used , which gives faster and more accurate results . 2.2 ENCODER BLOCK . We use multi-head attention to encode the dialogue history X and the retrieved reference responses R , which can automatically extract important information from sentences . If we feed the retrieved sentences directly into the model , the noise is large and not conducive to model learning , so we assume the part closest to the entity in the response is the most important . We divide the entity and its most relevant parts into triple form ( head , entity , tail ) . Head means the part of the sentence before the entity , and tail means the part after it ; for example , if a response is ” enjoy your stay in value-place , goodbye ” , the corresponding head is ” enjoy your stay ” , the entity is ” value-place ” , and the tail is ” goodbye ” . Using this form , we can pay attention to word order and obtain the relationship between the sentence structure and the entity in the sentence . We treat the encoded retrieved responses as the template Q , which is a refined expression of the retrieval results . 2.3 MIXED DECODER . Figure 3 ( Mixed Decoder ) illustrates the decoder architecture with its generate and copy branches . We predict tokens of the target response based on two mixed modes , generate-mode and copy-mode . Generate-mode generates words from the vocabulary directly and copy-mode copies words from the templates . Accordingly , our model uses two output layers : a sequence prediction layer and a template location copying layer .
Then we use a gating mechanism over the above output layers to get the final generated words . The probability of generating the target word y_t at each time step t is the sum of the probabilities of the two modes :
p ( y_t | s_t , y_{t−1} ) = p_pr ( y_t | s_t , y_{t−1} , c_t ) · p_m ( pr | s_t , y_{t−1} ) + p_co ( y_t | s_t , y_{t−1} , M_Q ) · p_m ( co | s_t , y_{t−1} ) ,
where pr denotes the predict mode , co denotes the copy mode , which copies words from the template , and p_m ( · | · ) is the probability of choosing each mode . The probabilities of these modes are calculated as follows :
p_pr ( y_t | · ) = ( 1 / Z ) e^{ ϕ_pr ( y_t ) } , p_co ( y_t | · ) = ( 1 / Z ) ∑_{ j : Q_j = y_t } e^{ ϕ_co ( y_t ) } ,
where ϕ is the score function of each mode and Z is the normalization term shared by the two modes :
Z = e^{ ϕ_pr ( v ) } + ∑_{ j : Q_j = v } e^{ ϕ_co ( v ) } .
Specifically , the score functions of the two modes are given by :
ϕ_pr ( y_t = v_i ) = v_iᵀ W_pr [ s_t , c_tem ] , ϕ_co ( y_t = x_j ) = DNN ( h_j , s_t , hist_Q ) .
3 EXPERIMENTS . 3.1 DATASET . To verify the results of our model , we use the recently proposed MultiWOZ dataset ( Budzianowski et al. , 2018 ) to carry out experiments . MultiWOZ is the largest existing human-human conversational corpus spanning seven domains ( attraction , hospital , police , hotel , restaurant , taxi , train ) ; it contains 8438 multi-turn dialogues and the average length of each dialogue is 13.68 . Different from current mainstream task-oriented dialogue datasets like WOZ2.0 ( Wen et al. , 2017 ) and DSTC2 , which contain fewer than 10 slots and only a few hundred values , there are almost 30 ( domain , slot ) pairs and over 4500 possible values in the MultiWOZ dataset . Each dialogue consists of a dialogue goal and a representation of multiple pairs of user and system utterances . In each turn of the dialogue , there are two kinds of annotations : one is the belief state and the other is the dialog action , which are used to mark the status of the current conversation and the potential actions of the user . 3.2 TRAINING DETAILS . We trained our model end-to-end using the Adam optimizer ( Kingma & Ba , 2014 ) with a multi-step learning rate schedule with milestones at 50 , 100 , 150 , and 200 ; the learning rate starts from 1e−3 and the parameter γ is 0.5 . All the embeddings are initialized randomly , and a beam-search strategy is used during the decoding stage . The hidden size is 512 and the dropout rate is 0.2 . 3.3 EVALUATION . 3.3.1 ENTITY F1 . The main evaluation metric is the F1 score , which is the harmonic mean of precision and recall at the word level between the predicted answer and the ground truth . By comparing the ground-truth system responses with the set of entities to select useful entities , this metric can evaluate the ability to generate relevant entities and to capture the semantics of the dialog ( Eric & Manning , 2017 ; Eric et al. , 2017 ) . 3.3.2 BLEU . We also use the BLEU score in our evaluation , which is often used to compute the word overlap between the generated output and the reference response . The BLEU metric was originally used in the field of translation ; in recent years , it has also been used in end-to-end task-oriented dialogue and in the field of chat-bots . 3.4 ABLATION STUDY . We perform ablation experiments on the test set to analyze the effectiveness of the different modules in our model . The results of these experiments are shown in the Table below . As one can observe from the Table , our model without the copy mechanism has a 2.5 % BLEU drop in generated results .
On the other hand , RGTI without templates means that we do not consider the relevant template information , which leads to a reduction in BLEU . Note that if we remove the templates , then a 0.5 % increase can be observed in the table , which suggests that the text noise introduced by applying the context history has more side effects than its positive influence . 4 RELATED WORKS . Task-oriented dialog systems are mainly explored by following two different approaches : pipelines and end-to-end . For pipeline dialogue systems ( Williams & Young , 2007 ; Wen et al. , 2017 ) , modules are separated into different trained models : ( i ) natural language understanding ( Young et al. , 2013 ; Chen et al. , 2016 ) , which is used to understand human intention , dialogue state tracking ( Lee & Stent , 2016 ; Zhong et al. , 2018 ) , which estimates the user goal at every step of the dialogue , dialogue management ( Su et al. , 2016 ) , and natural language generation ( Sharma et al. , 2016 ) , which aims to realize the language surface form given the semantic constraint . These approaches achieve good stability via combining domain-specific knowledge and slot-filling techniques , but additional human labels are needed . On the other hand , end-to-end approaches have shown promising results recently . Zhao et al . ( 2017 ) use recurrent neural networks ( BiLSTM ) to generate final responses and achieve good results . Previous state-of-the-art models based on memory mechanisms strengthen the reasoning ability by incorporating external knowledge into the neural network . Sequicity and HDSA represent the conversation history as a belief span or dialog action , which is used as compressed information for a downstream model to generate the system response considering the current dialog state . 4.1 TEMPLATE . Using templates to guide response generation is a common method in the task-oriented dialogue area . Stent et al . ( 2004 ) and Walker et al . ( 2007a ) use machine learning to train template selection . The cost to create and maintain templates is huge , which is a challenge in adapting current dialogue systems to new domains or different languages . Wiseman et al . ( 2018 ) introduce HMMs into text generation , which can decode the state of generation templates . It ’ s a useful method to generate templates ; however , the variability and efficiency are not satisfactory . 4.2 TRANSFORMER . Recently , a new neural architecture called the transformer has surpassed RNNs on sequence-to-sequence tasks . The paper on transformers by Vaswani et al . [ 19 ] demonstrated that transformers produce state-of-the-art results on machine translation while allowing for increased parallelization and significantly reduced training time . However , there has been little work on the use of transformers in end-to-end dialog systems . 4.3 POINTER-GENERATOR NETWORKS . The pointer-generator was first proposed by Vinyals et al . and has since been applied to several natural language processing tasks , including translation ( Gulcehre et al . [ 4 ] ) , language modeling ( Merity et al . [ 7 ] ) , and summarization . The motivation of this paper is how to effectively extract and use the relevant contexts for multi-turn dialogue generation . Different from previous studies , our proposed model can focus on the relevant contexts , with both long- and short-distance dependency relations , by using the transformer pointer-generator mechanism .
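To make the generate/copy mixture of Section 2.3 concrete , here is a minimal pointer-generator-style sketch of how the final output distribution could be assembled . The tensor names and shapes are our assumptions , and the single sigmoid gate p_gen is a simplification standing in for the paper 's mode scores ϕ_pr and ϕ_co with their shared normalization Z .

```python
import torch
import torch.nn.functional as F

def mixed_decoder_step(gen_logits, copy_scores, template_ids, p_gen):
    # gen_logits:   B x V  scores over the output vocabulary (generate mode)
    # copy_scores:  B x L  attention scores over the L template tokens (copy mode)
    # template_ids: B x L  vocabulary ids of the template tokens (int64)
    # p_gen:        B x 1  mode gate computed from the decoder state (sigmoid output)
    gen_dist = F.softmax(gen_logits, dim=-1)             # generate distribution
    copy_attn = F.softmax(copy_scores, dim=-1)            # distribution over template positions
    copy_dist = torch.zeros_like(gen_dist)
    copy_dist.scatter_add_(1, template_ids, copy_attn)    # map template positions to vocab ids
    # final distribution = p_gen * generate distribution + (1 - p_gen) * copy distribution
    return p_gen * gen_dist + (1.0 - p_gen) * copy_dist
```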
1. Summary: The authors proposed a deep neural network-based model to generate responses for task-oriented dialogue systems. The model mainly contains two parts: the first part retrieves relevant responses based on the question and encodes them into templates, and the second part is a decoder that generates the response based on the encoded templates and input utterances.
SP:f5e27c7a2ae2bda209fb5befbc3ea366b4d71adc
RGTI: Response Generation via Templates Integration for End-to-End Dialog
This paper describes a method to incorporate candidate templates to aid in response generation within an end-to-end dialog system. While the motivation and task setup are interesting, the paper is clearly unfinished. Most jarringly, Table 2, which should contain the main results comparing the proposed RGTI model to existing baseline models, is not filled in, and there appears to be no table showing results of the ablation study briefly described in Section 3.4.
SP:f5e27c7a2ae2bda209fb5befbc3ea366b4d71adc
Removing the Representation Error of GAN Image Priors Using the Deep Decoder
1 INTRODUCTION . Generative Adversarial Networks ( GANs ) show promise as priors for solving imaging inverse problems such as inpainting , compressive sensing , super-resolution , and others . For example , they have been shown to perform as well as common sparsity based priors on compressed sensing tasks using 5-10x fewer measurements , and also perform well in nonlinear blind image deblurring ( Bora et al. , 2017 ; Asim et al. , 2018 ) . The typical inverse problem in imaging is to reconstruct an image given incomplete or corrupted measurements of that image . Since there may be many potential reconstructions that are consistent with the measurements , this task requires a prior assumption about the structure of the true image . A traditional prior assumption is that the image has a sparse representation in some basis . Provided the image is a member of a known class for which many examples are available , a GAN can be trained to approximate the distribution of images in the desired class . The generator of the GAN can then be used as a prior , by finding the point in the range of the generator that is most consistent with the provided measurements . We use the term `` GAN prior '' to refer to generative convolutional neural networks which learn a mapping from a low dimensional latent code space to the image space , for example with the DCGAN , GLO , or VAE architectures ( Radford et al. , 2015 ; Bojanowski et al. , 2017 ; Kingma & Welling , 2013 ) . Challenges in the training of GANs involve selecting hyperparameters , like the dimensionality of the model manifold ; difficulties in training , such as mode collapse ; and the fact that GANs are not directly optimizing likelihood . Because of this , their performance as image priors is severely limited by representation error ( Bora et al. , 2017 ) . This effect is exaggerated when reconstructing images which are out of the training distribution , in which case the GAN prior typically fails completely to give a sensible solution to the inverse problem . In contrast , untrained deep neural networks also show promise in solving imaging inverse problems , by leveraging the architectural bias of a convolutional network as a structural prior instead of a learned representation ( Ulyanov et al. , 2018 ; Heckel & Hand , 2018 ) . These methods are independent of any training data or image distribution , and therefore are robust to shifts in data distribution that are problematic for GAN priors . Recent work by Heckel & Hand ( 2018 ) presents an untrained decoder-style network architecture , the Deep Decoder , that is an efficient image representation and as a consequence works well as an image prior . In particular , it can represent images more efficiently than with wavelet thresholding . When used for denoising tasks , it outperforms BM3D , considered the state-of-the-art among untrained denoising methods . The Deep Decoder is similar to the Deep Image Prior , but it can be underparameterized , having fewer optimizable parameters than the image dimensionality , and consequently does not need any algorithmic regularization , such as early stopping . In this paper , we propose a simple method to reduce the representation error of a generative prior by studying image models which are linear combinations of a trained GAN and an untrained Deep Decoder .
We build a method that capitalizes on the strengths of both methods : we want strong performance for all natural images and not just those close to a training distribution , and we want improved performance when given images are near a provided training distribution . The former comes from the Deep Decoder , and the latter comes from the GAN . We demonstrate the performance of this method on compressive sensing tasks using both in-distribution and out-of-distribution images . For in-distribution images , we find that the hybrid model consistently yields higher PSNRs than various GAN priors across a wide variety of undersampling ratios ( sometimes by 10+ dB ) , while also consistently outperforming a standalone Deep Decoder ( by around 1 dB ) . Performance improvements over the GAN prior also hold in the case of imaging on far out-of-distribution images , where the hybrid model and Deep Decoder model have comparable performance . A major challenge of the field is to build algorithms for solving inverse problems that are at least as good as both learned and recently discovered unlearned methods . Any new method should be at least as good as either approach separately . The literature contains multiple answers to this question , including invertible neural networks , optimizing over all weights of a trained GAN in an image-adaptive way , and more . This paper provides a significantly simpler method to get the benefits of both learned and unlearned methods , surprisingly by simply taking the linear combination of both models . 2 RELATED WORK . Another approach to reducing the reconstruction error of generative models has been to study invertible generative neural networks . These are networks that are fully invertible maps between latent space an image space by architectural design . The allow for direction calculation and optimization of the likelihood of any image , in particular because all images are in the range of such networks . Consequently , they have zero representation error . While such methods have demonstrated strong empirical performance ( Asim et al. , 2019 ) , invertible networks are very computationally expensive , as this recent paper used 15 GPU minutes to recover a single 64 × 64 color image . Much of their benefit may be obtainable by simpler and cheaper learned models . Alternatively , representation error of GANs may be reduced through an image adaptive process , akin to using the GAN as a warm start to a Deep Image Prior . We will make comparisons to one implementation of this idea , IAGAN , in Section 4.1.1 . The IAGAN method uses an entire GAN as an image model , tuning its parameters to fit a single image . This method will have negligible representation error , and our model achieves comparable performance in low measurement regimes while using a drastically fewer optimizable parameters . Another approach to reducing the GAN representation error could be to create better GANs . Much progress has been made on this front . Recent theoretical advances in the understanding and design of optimization techniques for GAN priors are driving a new generation of GANs which are stable during training under a wide range of hyperparameters , and which generate highly realistic images . Examples include the Wasserstein GAN , Energy Based GANs , and Boundary Equilibrium GAN ( Arjovsky et al. , 2017 ; Zhao et al. , 2016 ; Berthelot et al. , 2017 ) . Other architectures have been proposed which factorize the problem of image generation across multiple spatial scales . 
For example , Style-GAN introduces multiscale latent `` style '' vectors , and the Progressive Growth of GANs method explicitly separates training into phases , across which the scale of image generation is increased gradually ( Karras et al. , 2018 ; 2017 ) . In any of these examples , the demonstration of GAN quality is typically the visual appearance of the result . Visually appealing GAN outputs may still belong to GANs with significant representation errors for particular images desired to be recovered by solving an inverse problem . 3 METHOD . We assume that one observes a set of linear measurements y ∈ Rn of a true image x ∈ Rn , possibly with additive noise η : y = Ax+ η , where A ∈ Rm×n is a known measurement matrix . We introduce an image model of the form H ( ϑ ) , where ϑ are the parameters of the image representation under H . The empirical risk formulation of this inverse problem is given by min ϑ ‖AH ( ϑ ) − y‖22 ( 1 ) In this formulation , one must find an image in the range of the model H that is most consistent with the given measurements by searching over parameters ϑ . For example , Bora et al . ( 2017 ) propose to use a generative image model such as a DCGAN , GLO , or VAE , for which ϑ is a low dimensional latent code . One could also choose ϑ to be the coefficients of a wavelet decomposition , or the weights of a neural network tuned to output a single image . In our model , we represent images as the linear combination of the output of a pretrained GAN Gφ ( z ) and a Deep Decoder DD ( θ ) : H ( z , θ , α , β ) = αGφ ( z ) + βDD ( θ ) , ϑ = { z , θ , α , β } Here , the φ are the learned weights of the GAN , which are fixed . The variables that are optimized are : z , the GAN latent code ; θ , the image-specific weights of the Deep Decoder ; and scalars α and β . The first part of our image model is a GAN . We demonstrate our model ’ s performance using a BEGAN , and demonstrate the same results generalize to the DCGAN architecture . Our BEGAN has 64-dimensional latent codes sampled uniformly from [ −1 , 1 ] 64 , and we choose a diversity ratio of 0.5 . Our DCGAN has 100-dimensional latent codes , sampled from N ( 0 , 0.12I ) . The BEGAN prior is trained to output 128× 128 pixel color images and the DCGAN prior is trained to output 64× 64 color images of celebrity faces taken from the CelebA training set ( Liu et al. , 2015 ) . The GANs are initially pretrained , and the only parameters that are optimized during inversion is the latent code . The second part of our image model is a Deep Decoder . The Deep Decoder is a convolutional neural network consisting only of the following architectural elements : 1x1 convolutions , relu activations , fixed bilinear upsampling , and channelwise normalization . A final layer uses pixelwise linear combinations to create a 3 channel output . In all experiemnts , we consider a 4 layer deep decoder with k channels in each layer , where k is chosen so that |θ| < m. Thus , our deep decoder , and even our entire model , is underparameterized with respect to the image dimensionality . The deep decoder is unlearned in that it sees no training data . Its parameters θ are estimated only at test time . In our experiments , we found it beneficial to first partially solve ( 1 ) with G ( z ) only and separately with DD ( θ ) only to find approximate minimizers z∗ , θ∗ which are then used to initialize H . To maintain a fair comparison between H and other image models , we hold the number of global inversion iterations N constant . 
We use npre = 500 separate inversion iterations to find z∗ , θ∗ , and then initialize α = 0.5 , β = 0.5 , and continue with n = 5000 inversion iterations to optimize the parameters of H . To solve ( 1 ) with a GAN prior as in ( Bora et al. , 2017 ) , or with a proper Deep Decoder as an image prior , we simply run N = 5500 inversion steps with no interruptions . We provide details on the hyperparameters used in our experiments in Section 6.1 of the Supplemental Materials . Algorithm 1 Inversion Algorithm . Require : npre , the number of separate preinversion steps for Gφ ( z ) and DD ( θ ) . n , the number of remaining inversion steps for the hybrid model . z and θ , random initialization parameters for Gφ ( z ) and DD ( θ ) . 1 : for k = 0 , ... , npre do 2 : LG ← ‖AGφ ( z ) − y‖22 3 : z ← AdamUpdate ( z , ∇zLG ) 4 : LDD ← ‖A ( DD ( θ ) ) − y‖22 5 : θ ← AdamUpdate ( θ , ∇θLDD ) 6 : end for 7 : α← 0.5 , β ← 0.5 8 : for t = 0 , ... , n do 9 : H ← αGφ ( z ) + βDD ( θ ) 10 : L← ‖AH − y‖22 11 : z , θ , α , β ← AdamUpdate ( ϑH , ∇ϑHL ) 12 : end for
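For concreteness, the following is a minimal PyTorch-style sketch of Algorithm 1 under assumed interfaces: G(z) returns an image from the pretrained generator (its weights are frozen), DD() returns an image from an untrained Deep Decoder module whose parameters play the role of theta, A is the measurement matrix and y the observed measurements. The step counts follow the paper (npre = 500, n = 5000); the learning rate, tensor shapes, and function names are illustrative assumptions rather than the authors' implementation.

import torch

def invert_hybrid(G, DD, A, y, z0, n_pre=500, n_joint=5000, lr=1e-2):
    # z0: random initial latent code; DD.parameters() are the deep decoder weights theta.
    z = z0.clone().requires_grad_(True)
    theta = list(DD.parameters())

    def risk(x):
        # Empirical risk ||A x - y||^2 on the flattened image, as in Eq. (1).
        return ((A @ x.reshape(-1) - y) ** 2).sum()

    # Phase 1 (lines 1-6): separate pre-inversion of G(z) and DD(theta).
    opt_z = torch.optim.Adam([z], lr=lr)
    opt_theta = torch.optim.Adam(theta, lr=lr)
    for _ in range(n_pre):
        opt_z.zero_grad(); risk(G(z)).backward(); opt_z.step()
        opt_theta.zero_grad(); risk(DD()).backward(); opt_theta.step()

    # Phase 2 (lines 7-12): joint optimization of all parameters of
    # H = alpha * G(z) + beta * DD(theta), starting from alpha = beta = 0.5.
    alpha = torch.tensor(0.5, requires_grad=True)
    beta = torch.tensor(0.5, requires_grad=True)
    opt = torch.optim.Adam([z, alpha, beta] + theta, lr=lr)
    for _ in range(n_joint):
        opt.zero_grad()
        risk(alpha * G(z) + beta * DD()).backward()
        opt.step()

    with torch.no_grad():
        return alpha * G(z) + beta * DD()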
This paper proposes to use a combination of a pretrained GAN and an untrained deep decoder as the image prior for image restoration problems. The combined model jointly infers the latent code of the trained GAN and the parameters of the untrained deep decoder. It also jointly infers the mixing coefficients alpha and beta at test time for each image, thus learning how much to rely on the GAN. The proposed hybrid model is helpful in compressed sensing experiments on the CelebA dataset; however, it is only marginally better than the deep decoder on image super-resolution and out-of-distribution compressed sensing.
SP:34824d19f70879da119b1ecd77d64b06ebca462d
Removing the Representation Error of GAN Image Priors Using the Deep Decoder
1 INTRODUCTION . Generative Adversarial Networks ( GANs ) show promise as priors for solving imaging inverse problems such as inpainting , compressive sensing , super-resolution , and others . For example , they have been shown to perform as well as common sparsity based priors on compressed sensing tasks using 5-10x fewer measurements , and also perform well in nonlinear blind image deblurring ( Bora et al. , 2017 ; Asim et al. , 2018 ) . The typical inverse problem in imaging is to reconstruct an image given incomplete or corrupted measurements of that image . Since there may be many potential reconstructions that are consistent with the measurements , this task requires a prior assumption about the structure of the true image . A traditional prior assumption is that the image has a sparse representation in some basis . Provided the image is a member of a known class for which many examples are available , a GAN can be trained to approximate the distribution of images in the desired class . The generator of the GAN can then be used as a prior , by finding the point in the range of the generator that is most consistent with the provided measurements . We use the term `` GAN prior '' to refer to generative convolutional neural networks which learn a mapping from a low dimensional latent code space to the image space , for example with the DCGAN , GLO , or VAE architectures ( Radford et al. , 2015 ; Bojanowski et al. , 2017 ; Kingma & Welling , 2013 ) . Challenges in the training of GANs involve selecting hyperparameters , like the dimensionality of the model manifold ; difficulties in training , such as mode collapse ; and the fact than GANs are not directly optimizing likelihood . Because of this , their performance as image priors is severely limited by representation error ( Bora et al. , 2017 ) . This effect is exaggerated when reconstructing images which are out of the training distribution , in which case the GAN prior typically fails completely to give a sensible solution to the inverse problem . In contrast , untrained deep neural networks also show promise in solving imaging inverse problems , by leveraging architectural bias of a convolutional network as a structural prior instead of a learned representation ( Ulyanov et al. , 2018 ; Heckel & Hand , 2018 ) . These methods are independent of any training data or image distribution , and therefore are robust to shifts in data distribution that are problematic for GAN priors . Recent work by Heckel & Hand ( 2018 ) presents an untrained decoder-style network architecture , the Deep Decoder , that is an efficient image representation and as a consequence works well as an image prior . In particular , it can represent images more efficiently than with wavelet thresholding . When used for denoising tasks , it outperforms BM3D , considered the state-of-the-art among untrained denoising methods . The Deep Decoder is similar to the Deep Image Prior , but it can be underparameterized , having fewer optimizable parameters than the image dimensionality , and consequently does not need any algorithmic regularization , such as early stopping . In this paper , we propose a simple method to reduce the representation error of a generative prior by studying image models which are linear combinations of a trained GAN with an untrained Deep Decoder . 
We build a method that capitalizes on the strengths of both methods : we want strong performance for all natural images and not just those close to a training distribution , and we want improved performance when given images are near a provided training distribution . The former comes from the Deep Decoder , and the latter comes from the GAN . We demonstrate the performance of this method on compressive sensing tasks using both in-distribution and out-of-distribution images . For in-distribution images , we find that the hybrid model consistently yields higher PSNRs than various GAN priors across a wide variety of undersampling ratios ( sometimes by 10+ dB ) , while also consistently outperforming a standalone Deep Decoder ( by around 1 dB ) . Performance improvements over the GAN prior also hold in the case of imaging on far out-of-distribution images , where the hybrid model and Deep Decoder model have comparable performance . A major challenge of the field is to build algorithms for solving inverse problems that are at least as good as both learned and recently discovered unlearned methods . Any new method should be at least as good as either approach separately . The literature contains multiple answers to this question , including invertible neural networks , optimizing over all weights of a trained GAN in an image-adaptive way , and more . This paper provides a significantly simpler method to get the benefits of both learned and unlearned methods , surprisingly by simply taking the linear combination of both models . 2 RELATED WORK . Another approach to reducing the reconstruction error of generative models has been to study invertible generative neural networks . These are networks that are fully invertible maps between latent space an image space by architectural design . The allow for direction calculation and optimization of the likelihood of any image , in particular because all images are in the range of such networks . Consequently , they have zero representation error . While such methods have demonstrated strong empirical performance ( Asim et al. , 2019 ) , invertible networks are very computationally expensive , as this recent paper used 15 GPU minutes to recover a single 64 × 64 color image . Much of their benefit may be obtainable by simpler and cheaper learned models . Alternatively , representation error of GANs may be reduced through an image adaptive process , akin to using the GAN as a warm start to a Deep Image Prior . We will make comparisons to one implementation of this idea , IAGAN , in Section 4.1.1 . The IAGAN method uses an entire GAN as an image model , tuning its parameters to fit a single image . This method will have negligible representation error , and our model achieves comparable performance in low measurement regimes while using a drastically fewer optimizable parameters . Another approach to reducing the GAN representation error could be to create better GANs . Much progress has been made on this front . Recent theoretical advances in the understanding and design of optimization techniques for GAN priors are driving a new generation of GANs which are stable during training under a wide range of hyperparameters , and which generate highly realistic images . Examples include the Wasserstein GAN , Energy Based GANs , and Boundary Equilibrium GAN ( Arjovsky et al. , 2017 ; Zhao et al. , 2016 ; Berthelot et al. , 2017 ) . Other architectures have been proposed which factorize the problem of image generation across multiple spatial scales . 
For example , Style-GAN introduces multiscale latent `` style '' vectors , and the Progressive Growth of GANs method explicitly separates training into phases , across which the scale of image generation is increased gradually ( Karras et al. , 2018 ; 2017 ) . In any of these examples , the demonstration of GAN quality is typically the visual appearance of the result . Visually appealing GAN outputs may still belong to GANs with significant representation errors for particular images desired to be recovered by solving an inverse problem . 3 METHOD . We assume that one observes a set of linear measurements y ∈ Rn of a true image x ∈ Rn , possibly with additive noise η : y = Ax+ η , where A ∈ Rm×n is a known measurement matrix . We introduce an image model of the form H ( ϑ ) , where ϑ are the parameters of the image representation under H . The empirical risk formulation of this inverse problem is given by min ϑ ‖AH ( ϑ ) − y‖22 ( 1 ) In this formulation , one must find an image in the range of the model H that is most consistent with the given measurements by searching over parameters ϑ . For example , Bora et al . ( 2017 ) propose to use a generative image model such as a DCGAN , GLO , or VAE , for which ϑ is a low dimensional latent code . One could also choose ϑ to be the coefficients of a wavelet decomposition , or the weights of a neural network tuned to output a single image . In our model , we represent images as the linear combination of the output of a pretrained GAN Gφ ( z ) and a Deep Decoder DD ( θ ) : H ( z , θ , α , β ) = αGφ ( z ) + βDD ( θ ) , ϑ = { z , θ , α , β } Here , the φ are the learned weights of the GAN , which are fixed . The variables that are optimized are : z , the GAN latent code ; θ , the image-specific weights of the Deep Decoder ; and scalars α and β . The first part of our image model is a GAN . We demonstrate our model ’ s performance using a BEGAN , and demonstrate the same results generalize to the DCGAN architecture . Our BEGAN has 64-dimensional latent codes sampled uniformly from [ −1 , 1 ] 64 , and we choose a diversity ratio of 0.5 . Our DCGAN has 100-dimensional latent codes , sampled from N ( 0 , 0.12I ) . The BEGAN prior is trained to output 128× 128 pixel color images and the DCGAN prior is trained to output 64× 64 color images of celebrity faces taken from the CelebA training set ( Liu et al. , 2015 ) . The GANs are initially pretrained , and the only parameters that are optimized during inversion is the latent code . The second part of our image model is a Deep Decoder . The Deep Decoder is a convolutional neural network consisting only of the following architectural elements : 1x1 convolutions , relu activations , fixed bilinear upsampling , and channelwise normalization . A final layer uses pixelwise linear combinations to create a 3 channel output . In all experiemnts , we consider a 4 layer deep decoder with k channels in each layer , where k is chosen so that |θ| < m. Thus , our deep decoder , and even our entire model , is underparameterized with respect to the image dimensionality . The deep decoder is unlearned in that it sees no training data . Its parameters θ are estimated only at test time . In our experiments , we found it beneficial to first partially solve ( 1 ) with G ( z ) only and separately with DD ( θ ) only to find approximate minimizers z∗ , θ∗ which are then used to initialize H . To maintain a fair comparison between H and other image models , we hold the number of global inversion iterations N constant . 
We use npre = 500 separate inversion iterations to find z∗ , θ∗ , and then initialize α = 0.5 , β = 0.5 , and continue with n = 5000 inversion iterations to optimize the parameters of H . To solve ( 1 ) with a GAN prior as in ( Bora et al. , 2017 ) , or with a proper Deep Decoder as an image prior , we simply run N = 5500 inversion steps with no interruptions . We provide details on the hyperparameters used in our experiments in Section 6.1 of the Supplemental Materials . Algorithm 1 Inversion Algorithm . Require : npre , the number of separate preinversion steps for Gφ ( z ) and DD ( θ ) . n , the number of remaining inversion steps for the hybrid model . z and θ , random initialization parameters for Gφ ( z ) and DD ( θ ) . 1 : for k = 0 , ... , npre do 2 : LG ← ‖AGφ ( z ) − y‖22 3 : z ← AdamUpdate ( z , ∇zLG ) 4 : LDD ← ‖A ( DD ( θ ) ) − y‖22 5 : θ ← AdamUpdate ( θ , ∇θLDD ) 6 : end for 7 : α← 0.5 , β ← 0.5 8 : for t = 0 , ... , n do 9 : H ← αGφ ( z ) + βDD ( θ ) 10 : L← ‖AH − y‖22 11 : z , θ , α , β ← AdamUpdate ( ϑH , ∇ϑHL ) 12 : end for
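As an illustration of how such a model is used for compressive sensing, the snippet below sets up an undersampled Gaussian measurement operator and runs a hybrid inversion of the kind sketched above; the measurement scaling, noise level, PSNR convention (pixel values in [0, 1]), and the invert_hybrid routine name are assumptions for illustration.

import torch

def compressive_sensing_demo(G, DD, x_true, m, noise_std=0.0, latent_dim=64):
    # x_true: ground-truth image tensor with n pixels; m: number of measurements (m < n).
    n = x_true.numel()
    A = torch.randn(m, n) / m ** 0.5                 # random Gaussian measurement matrix
    y = A @ x_true.reshape(-1) + noise_std * torch.randn(m)
    z0 = torch.rand(latent_dim) * 2 - 1              # BEGAN-style latent code in [-1, 1]^64
    x_hat = invert_hybrid(G, DD, A, y, z0)           # hybrid inversion as in Algorithm 1
    mse = ((x_hat - x_true) ** 2).mean()
    psnr = 10 * torch.log10(1.0 / mse)               # assumes pixel values scaled to [0, 1]
    return x_hat, psnr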
This paper presents a method for reducing the representation error of generative convolutional neural networks by combining them with an untrained Deep Decoder. The method is evaluated on compressive sensing and super-resolution, where it achieves better performance than the isolated use of Deep Decoders and GAN priors. The main contribution of the paper is not the performance, but the simplicity of this approach.
SP:34824d19f70879da119b1ecd77d64b06ebca462d
Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells
1 INTRODUCTION . Unsupervised text encoding models such as Word2Vec ( Mikolov et al. , 2013 ) , Glove ( Pennington et al. , 2014 ) , ELMo ( Peters et al. , 2018 ) , and BERT ( Devlin et al. , 2018 ) have been effectively utilized in many Natural Language Processing ( NLP ) tasks . At their core they train models which encode words into vector space representations based on their positions in the text and their context . A similar situation can be encountered in the field of Geographic Information Science ( GIScience ) . For example , spatial interpolation aims at predicting an attribute value , e.g. , elevation , at an unsampled location based on the known attribute values of nearby samples . Geographic information has become an important component to many tasks such as fine-grained image classification ( Mac Aodha et al. , 2019 ) , point cloud classification and semantic segmentation ( Qi et al. , 2017 ) , reasoning about Point of Interest ( POI ) type similarity ( Yan et al. , 2017 ) , land cover classification ( Kussul et al. , 2017 ) , and geographic question answering ( Mai et al. , 2019b ) . Developing a general model for vector space representation of any point in space would pave the way for many future applications . 1Link to project repository : https : //github.com/gengchenmai/space2vec However , existing models often utilize specific methods to deal with geographic information and often disregards geographic coordinates . For example , Place2Vec ( Yan et al. , 2017 ) converts the coordinates of POIs into spatially collocated POI pairs within certain distance bins , and does not preserve information about the ( cardinal ) direction between points . Li et al . ( 2017 ) propose DCRNN for traffic forecasting in which the traffic sensor network is converted to a distance weighted graph which necessarily forfeits information about the spatial layout of sensors . There is , however , no general representation model beyond simply applying discretization ( Berg et al. , 2014 ; Tang et al. , 2015 ) or feed-forward nets ( Chu et al. , 2019 ; Mac Aodha et al. , 2019 ) to coordinates . A key challenge in developing a general-purpose representation model for space is how to deal with mixtures of distributions with very different characteristics ( see an example in Figure 1 ) , which often emerges in spatial datasets ( McKenzie et al. , 2015 ) . For example , there are POI types with clustered distributions such as women ’ s clothing , while there are other POI types with regular distributions such as education . These feature distributions co-exist in the same space , and yet we want a single representation to accommodate all of them in a task such as location-aware image classification ( Mac Aodha et al. , 2019 ) . Ripley ’ s K is a spatial analysis method used to describe point patterns over a given area of interest . Figure 1c shows the K plot of several POI types in Las Vegas . One can see that as the radius grows the numbers of POIs increase at different rates for different POI types . In order to see the relative change of density at different scales , we renormalize the curves by each POI type ’ s density and show it in log scale in Figure 1d . One can see two distinct POI type groups with different distribution patterns with clustered and even distributions . 
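As a point of reference for the analysis above, Ripley's K for a planar point pattern can be estimated in a few lines; the version below is a naive estimator without edge correction, and the renormalization by each POI type's density used for the log-scale comparison is a simple post-processing step. This is only a sketch of the standard statistic, not the authors' code.

import numpy as np

def ripley_k(points, radii, area):
    # points: (N, 2) array of coordinates; radii: 1-D array of distances r;
    # area: area of the study region. Returns K(r) = area / N^2 times the number
    # of ordered pairs (i, j), i != j, with dist(p_i, p_j) <= r (no edge correction).
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # exclude self-pairs
    return np.array([area / (n * n) * np.sum(d <= r) for r in radii])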
If we want to model the distribution of these POIs by discretizing the study area into tiles , we have to use small grid sizes for women ’ s clothing while using larger grid sizes for educations because smaller grid sizes lead to over- parameterization of the model and overfitting . In order to jointly describe these distributions and their patterns , we need an encoding method which supports multi-scale representations . Nobel Prize winning Neuroscience research ( Abbott & Callaway , 2014 ) has demonstrated that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding , which is critical for integrating self-motion . Moreover , Blair et al . ( 2007 ) show that the multi-scale periodic representation of grid cells can be simulated by summing three cosine grating functions oriented 60˝ apart , which may be regarded as a simple Fourier model of the hexagonal lattice . This research inspired us to encode locations with multi-scale periodic representations . Our assumption is that decomposed geographic coordinates helps machine learning models , such as deep neural nets , and multi-scale representations deal with the inefficiency of intrinsically single-scale methods such as RFB kernels or discretization ( tile embeddings ) . To validate this intuition , we propose an encoder-decoder framework to encode the distribution of point-features2 in space and 2In GIS and spatial analysis , ‘ features ’ are representations of real-world entities . A tree can , for instance , be modeled by a point-feature , while a street would be represented as a line string feature . train such a model in an unsupervised manner . This idea of using sinusoid functions with different frequencies to encode positions is similar to the position encoding proposed in the Transformer model ( Vaswani et al. , 2017 ) . However , the position encoding model of Transformer deals with a discrete 1D space – the positions of words in a sentence – while our model works on higher dimensional continuous spaces such as the surface of earth . In summary , the contributions of our work are as follows : . 1 . We propose an encoder-decoder encoding framework called Space2Vec using sinusoid functions with different frequencies to model absolute positions and spatial contexts . We also propose a multi-head attention mechanism based on context points . To the best of our knowledge , this is the first attention model that explicitly considers the spatial relationships between the query point and context points . 2 . We conduct experiments on two real world geographic data for two different tasks : 1 ) predicting types of POIs given their positions and context , 2 ) image classification leveraging their geo-locations . Space2Vec outperforms well-established encoding methods such as RBF kernels , multi-layer feed-forward nets , and tile embedding approaches for location modeling and image classification . 3 . To understand the advantages of Space2Vec we visualize the firing patterns ( response maps ) of location models ’ encoding layer neurons and show how they handle spatial structures at different scales by integrating multi-scale representations . Furthermore the firing patterns for the spatial context models neurons give insight into how the grid-like cells capture the decreasing distance effect with multi-scale representations . 2 PROBLEM FORMULATION . Distributed representation of point-features in space can be formulated as follows . Given a set of points P “ tpiu , i.e. 
, Points of Interests ( POIs ) , in L-D space ( L “ 2 , 3 ) define a function fP , θpxq : RL Ñ Rd ( L ! d ) , which is parameterized by θ and maps any coordinate x in space to a vector representation of d dimension . Each point ( e.g. , a restaurant ) pi “ pxi , viq is associated with a location xi and attributes vi ( i.e. , POI features such as type , name , capacity , etc. ) . The function fP , θpxq encodes the probability distribution of point features over space and can give a representation of any point in the space . Attributes ( e.g . place types such as Museum ) and coordinate of point can be seen as analogies to words and word positions in commonly used word embedding models . 3 RELATED WORK . There has been theoretical research on neural network based path integration/spatial localization models and their relationships with grid cells . Both Cueva & Wei ( 2018 ) and Banino et al . ( 2018 ) showed that grid-like spatial response patterns emerge in trained networks for navigation tasks which demonstrate that grid cells are critical for vector-based navigation . Moreover , Gao et al . ( 2019 ) propose a representational model for grid cells in navigation tasks which has good quality such as magnified local isometry . All these research is focusing on understanding the relationship between the grid-like spatial response patterns and navigation tasks from a theoretical perspective . In contrast , our goal focuses on utilizing these theoretical results on real world data in geoinformatics . Radial Basis Function ( RBF ) kernel is a well-established approach to generating learning friendly representation from points in space for machine learning algorithms such as SVM classification ( Baudat & Anouar , 2001 ) and regression ( Bierens , 1994 ) . However , the representation is example based – i.e. , the resultant model uses the positions of training examples as the centers of Gaussian kernel functions ( Maz ’ ya & Schmidt , 1996 ) . In comparison , the grid cell based location encoding relies on sine and cosine functions , and the resultant model is inductive and does not store training examples . Recently the computer vision community shows increasing interests in incorporating geographic information ( e.g . coordinate encoding ) into neural network architectures for multiple tasks such as image classification ( Tang et al. , 2015 ) and fine grained recognition ( Berg et al. , 2014 ; Chu et al. , 2019 ; Mac Aodha et al. , 2019 ) . Both Berg et al . ( 2014 ) and Tang et al . ( 2015 ) proposed to discretize the study area into regular grids . To model the geographical prior distribution of the image categories , the grid id is used for GPS encoding instead of the raw coordinates . However , choosing the correct discretization is challenging ( Openshaw , 1984 ; Fotheringham & Wong , 1991 ) , and incorrect choices can significantly affect the final performance ( Moat et al. , 2018 ; Lechner et al. , 2012 ) . In addition , discretization does not scale well in terms of memory use . To overcome these difficulties , both Chu et al . ( 2019 ) and Mac Aodha et al . ( 2019 ) advocated the idea of inductive location encoders which directly encode coordinates into a location embedding . However , both of them directly feed the coordinates into a feed-forward neural network ( Chu et al. , 2019 ) or residual blocks ( Mac Aodha et al. , 2019 ) without any feature decomposition strategy . 
Our experiments show that this direct encoding approach is insufficient to capture the spatial feature distribution, and Space2Vec significantly outperforms them by integrating spatial representations of different scales . 4 METHOD . We solve distributed representation of point-features in space ( defined in Section 2 ) with an encoder-decoder architecture : 1 . Given a point $p_i = (x_i, v_i)$, a point space encoder $Enc^{(x)}(\cdot)$ encodes the location $x_i$ into a location embedding $e[x_i] \in \mathbb{R}^{d^{(x)}}$ and a point feature encoder $Enc^{(v)}(\cdot)$ encodes its feature into a feature embedding $e[v_i] \in \mathbb{R}^{d^{(v)}}$. The full representation of point $p_i \in P$ is $e = [e[x_i]; e[v_i]] \in \mathbb{R}^{d}$, where $d = d^{(x)} + d^{(v)}$ and $[\cdot\,;\cdot]$ denotes vector concatenation. In contrast, a geographic entity not in $P$ within the studied space can be represented by its location embedding $e[x_j]$ alone, since its $v_j$ is unknown. 2 . We developed two types of decoders which can be used independently or jointly. A location decoder $Dec^{s}(\cdot)$ reconstructs the point feature embedding $e[v_i]$ given the location embedding $e[x_i]$, and a spatial context decoder $Dec^{c}(\cdot)$ reconstructs the feature embedding $e[v_i]$ of point $p_i$ based on the space and feature embeddings $\{e_{i1}, \ldots, e_{ij}, \ldots, e_{in}\}$ of its nearest neighboring points $\{p_{i1}, \ldots, p_{ij}, \ldots, p_{in}\}$, where $n$ is a hyper-parameter.
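The following is a minimal sketch of a grid-cell-inspired multi-scale location encoder in the spirit of the point space encoder $Enc^{(x)}(\cdot)$ described above: coordinates are projected onto three directions 60 degrees apart and passed through sinusoids over a geometric series of scales. The scale range, the number of scales, and the omission of the learned linear layer that would map these features to the final embedding are illustrative assumptions based on this section's description.

import numpy as np

def multiscale_location_features(xy, num_scales=16, lambda_min=50.0, lambda_max=40000.0):
    # xy: (N, 2) array of coordinates in a projected coordinate system.
    # For each scale, project coordinates onto three unit vectors oriented 60
    # degrees apart (a simple Fourier model of the hexagonal lattice) and apply
    # sin and cos, giving an (N, num_scales * 6) multi-scale periodic representation.
    angles = np.array([0.0, np.pi / 3.0, 2.0 * np.pi / 3.0])
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)        # (3, 2)
    ratios = np.arange(num_scales) / max(num_scales - 1, 1)
    wavelengths = lambda_min * (lambda_max / lambda_min) ** ratios   # geometric series of scales
    feats = []
    for lam in wavelengths:
        proj = (2.0 * np.pi / lam) * (xy @ dirs.T)                   # (N, 3)
        feats.append(np.sin(proj))
        feats.append(np.cos(proj))
    return np.concatenate(feats, axis=1)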
The paper introduces Space2Vec, a space representation learning model. The work is motivated by the multi-scale periodic representations of biological grid cells and by the success of representation learning in NLP. The key idea behind the model is two-fold: on the one hand, it utilizes position information together with the context associated with each position; on the other hand, the authors build a multi-scale point space encoder based on Theorem 1 (in the paper), which was previously proved by Gao et al. (2019).
SP:062377d8728ec1cc76ecd9a32deddaf1fcd58763
Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells
1 INTRODUCTION . Unsupervised text encoding models such as Word2Vec ( Mikolov et al. , 2013 ) , Glove ( Pennington et al. , 2014 ) , ELMo ( Peters et al. , 2018 ) , and BERT ( Devlin et al. , 2018 ) have been effectively utilized in many Natural Language Processing ( NLP ) tasks . At their core they train models which encode words into vector space representations based on their positions in the text and their context . A similar situation can be encountered in the field of Geographic Information Science ( GIScience ) . For example , spatial interpolation aims at predicting an attribute value , e.g. , elevation , at an unsampled location based on the known attribute values of nearby samples . Geographic information has become an important component to many tasks such as fine-grained image classification ( Mac Aodha et al. , 2019 ) , point cloud classification and semantic segmentation ( Qi et al. , 2017 ) , reasoning about Point of Interest ( POI ) type similarity ( Yan et al. , 2017 ) , land cover classification ( Kussul et al. , 2017 ) , and geographic question answering ( Mai et al. , 2019b ) . Developing a general model for vector space representation of any point in space would pave the way for many future applications . 1Link to project repository : https : //github.com/gengchenmai/space2vec However , existing models often utilize specific methods to deal with geographic information and often disregards geographic coordinates . For example , Place2Vec ( Yan et al. , 2017 ) converts the coordinates of POIs into spatially collocated POI pairs within certain distance bins , and does not preserve information about the ( cardinal ) direction between points . Li et al . ( 2017 ) propose DCRNN for traffic forecasting in which the traffic sensor network is converted to a distance weighted graph which necessarily forfeits information about the spatial layout of sensors . There is , however , no general representation model beyond simply applying discretization ( Berg et al. , 2014 ; Tang et al. , 2015 ) or feed-forward nets ( Chu et al. , 2019 ; Mac Aodha et al. , 2019 ) to coordinates . A key challenge in developing a general-purpose representation model for space is how to deal with mixtures of distributions with very different characteristics ( see an example in Figure 1 ) , which often emerges in spatial datasets ( McKenzie et al. , 2015 ) . For example , there are POI types with clustered distributions such as women ’ s clothing , while there are other POI types with regular distributions such as education . These feature distributions co-exist in the same space , and yet we want a single representation to accommodate all of them in a task such as location-aware image classification ( Mac Aodha et al. , 2019 ) . Ripley ’ s K is a spatial analysis method used to describe point patterns over a given area of interest . Figure 1c shows the K plot of several POI types in Las Vegas . One can see that as the radius grows the numbers of POIs increase at different rates for different POI types . In order to see the relative change of density at different scales , we renormalize the curves by each POI type ’ s density and show it in log scale in Figure 1d . One can see two distinct POI type groups with different distribution patterns with clustered and even distributions . 
If we want to model the distribution of these POIs by discretizing the study area into tiles , we have to use small grid sizes for women ’ s clothing while using larger grid sizes for educations because smaller grid sizes lead to over- parameterization of the model and overfitting . In order to jointly describe these distributions and their patterns , we need an encoding method which supports multi-scale representations . Nobel Prize winning Neuroscience research ( Abbott & Callaway , 2014 ) has demonstrated that grid cells in mammals provide a multi-scale periodic representation that functions as a metric for location encoding , which is critical for integrating self-motion . Moreover , Blair et al . ( 2007 ) show that the multi-scale periodic representation of grid cells can be simulated by summing three cosine grating functions oriented 60˝ apart , which may be regarded as a simple Fourier model of the hexagonal lattice . This research inspired us to encode locations with multi-scale periodic representations . Our assumption is that decomposed geographic coordinates helps machine learning models , such as deep neural nets , and multi-scale representations deal with the inefficiency of intrinsically single-scale methods such as RFB kernels or discretization ( tile embeddings ) . To validate this intuition , we propose an encoder-decoder framework to encode the distribution of point-features2 in space and 2In GIS and spatial analysis , ‘ features ’ are representations of real-world entities . A tree can , for instance , be modeled by a point-feature , while a street would be represented as a line string feature . train such a model in an unsupervised manner . This idea of using sinusoid functions with different frequencies to encode positions is similar to the position encoding proposed in the Transformer model ( Vaswani et al. , 2017 ) . However , the position encoding model of Transformer deals with a discrete 1D space – the positions of words in a sentence – while our model works on higher dimensional continuous spaces such as the surface of earth . In summary , the contributions of our work are as follows : . 1 . We propose an encoder-decoder encoding framework called Space2Vec using sinusoid functions with different frequencies to model absolute positions and spatial contexts . We also propose a multi-head attention mechanism based on context points . To the best of our knowledge , this is the first attention model that explicitly considers the spatial relationships between the query point and context points . 2 . We conduct experiments on two real world geographic data for two different tasks : 1 ) predicting types of POIs given their positions and context , 2 ) image classification leveraging their geo-locations . Space2Vec outperforms well-established encoding methods such as RBF kernels , multi-layer feed-forward nets , and tile embedding approaches for location modeling and image classification . 3 . To understand the advantages of Space2Vec we visualize the firing patterns ( response maps ) of location models ’ encoding layer neurons and show how they handle spatial structures at different scales by integrating multi-scale representations . Furthermore the firing patterns for the spatial context models neurons give insight into how the grid-like cells capture the decreasing distance effect with multi-scale representations . 2 PROBLEM FORMULATION . Distributed representation of point-features in space can be formulated as follows . Given a set of points P “ tpiu , i.e. 
, Points of Interests ( POIs ) , in L-D space ( L “ 2 , 3 ) define a function fP , θpxq : RL Ñ Rd ( L ! d ) , which is parameterized by θ and maps any coordinate x in space to a vector representation of d dimension . Each point ( e.g. , a restaurant ) pi “ pxi , viq is associated with a location xi and attributes vi ( i.e. , POI features such as type , name , capacity , etc. ) . The function fP , θpxq encodes the probability distribution of point features over space and can give a representation of any point in the space . Attributes ( e.g . place types such as Museum ) and coordinate of point can be seen as analogies to words and word positions in commonly used word embedding models . 3 RELATED WORK . There has been theoretical research on neural network based path integration/spatial localization models and their relationships with grid cells . Both Cueva & Wei ( 2018 ) and Banino et al . ( 2018 ) showed that grid-like spatial response patterns emerge in trained networks for navigation tasks which demonstrate that grid cells are critical for vector-based navigation . Moreover , Gao et al . ( 2019 ) propose a representational model for grid cells in navigation tasks which has good quality such as magnified local isometry . All these research is focusing on understanding the relationship between the grid-like spatial response patterns and navigation tasks from a theoretical perspective . In contrast , our goal focuses on utilizing these theoretical results on real world data in geoinformatics . Radial Basis Function ( RBF ) kernel is a well-established approach to generating learning friendly representation from points in space for machine learning algorithms such as SVM classification ( Baudat & Anouar , 2001 ) and regression ( Bierens , 1994 ) . However , the representation is example based – i.e. , the resultant model uses the positions of training examples as the centers of Gaussian kernel functions ( Maz ’ ya & Schmidt , 1996 ) . In comparison , the grid cell based location encoding relies on sine and cosine functions , and the resultant model is inductive and does not store training examples . Recently the computer vision community shows increasing interests in incorporating geographic information ( e.g . coordinate encoding ) into neural network architectures for multiple tasks such as image classification ( Tang et al. , 2015 ) and fine grained recognition ( Berg et al. , 2014 ; Chu et al. , 2019 ; Mac Aodha et al. , 2019 ) . Both Berg et al . ( 2014 ) and Tang et al . ( 2015 ) proposed to discretize the study area into regular grids . To model the geographical prior distribution of the image categories , the grid id is used for GPS encoding instead of the raw coordinates . However , choosing the correct discretization is challenging ( Openshaw , 1984 ; Fotheringham & Wong , 1991 ) , and incorrect choices can significantly affect the final performance ( Moat et al. , 2018 ; Lechner et al. , 2012 ) . In addition , discretization does not scale well in terms of memory use . To overcome these difficulties , both Chu et al . ( 2019 ) and Mac Aodha et al . ( 2019 ) advocated the idea of inductive location encoders which directly encode coordinates into a location embedding . However , both of them directly feed the coordinates into a feed-forward neural network ( Chu et al. , 2019 ) or residual blocks ( Mac Aodha et al. , 2019 ) without any feature decomposition strategy . 
Our experiments show that this direct encoding approach is insufficient to capture the spatial feature distribution, and Space2Vec significantly outperforms them by integrating spatial representations of different scales . 4 METHOD . We solve distributed representation of point-features in space ( defined in Section 2 ) with an encoder-decoder architecture : 1 . Given a point $p_i = (x_i, v_i)$, a point space encoder $Enc^{(x)}(\cdot)$ encodes the location $x_i$ into a location embedding $e[x_i] \in \mathbb{R}^{d^{(x)}}$ and a point feature encoder $Enc^{(v)}(\cdot)$ encodes its feature into a feature embedding $e[v_i] \in \mathbb{R}^{d^{(v)}}$. The full representation of point $p_i \in P$ is $e = [e[x_i]; e[v_i]] \in \mathbb{R}^{d}$, where $d = d^{(x)} + d^{(v)}$ and $[\cdot\,;\cdot]$ denotes vector concatenation. In contrast, a geographic entity not in $P$ within the studied space can be represented by its location embedding $e[x_j]$ alone, since its $v_j$ is unknown. 2 . We developed two types of decoders which can be used independently or jointly. A location decoder $Dec^{s}(\cdot)$ reconstructs the point feature embedding $e[v_i]$ given the location embedding $e[x_i]$, and a spatial context decoder $Dec^{c}(\cdot)$ reconstructs the feature embedding $e[v_i]$ of point $p_i$ based on the space and feature embeddings $\{e_{i1}, \ldots, e_{ij}, \ldots, e_{in}\}$ of its nearest neighboring points $\{p_{i1}, \ldots, p_{ij}, \ldots, p_{in}\}$, where $n$ is a hyper-parameter.
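To complement the encoder, here is a hedged sketch of the location decoder $Dec^{s}(\cdot)$ as a small feed-forward network that maps a location embedding to a reconstructed feature embedding; the layer sizes and the commented usage are illustrative assumptions, since the exact decoder architecture and training loss are specified later in the paper.

import torch
import torch.nn as nn

class LocationDecoder(nn.Module):
    # Maps a location embedding e[x_i] to a predicted feature embedding e[v_i];
    # training encourages the prediction to match the true feature embedding of
    # the point observed at that location (unsupervised reconstruction).
    def __init__(self, loc_dim, feat_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(loc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, loc_emb):
        return self.net(loc_emb)

# Hypothetical usage: encode coordinates with multi-scale sinusoidal features as
# sketched earlier, then decode to a feature embedding and score it against
# candidate POI type embeddings.
# loc_emb = torch.as_tensor(multiscale_location_features(coords), dtype=torch.float32)
# pred_feat = LocationDecoder(loc_emb.shape[1], feat_dim=64)(loc_emb)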
This paper presents a new method called "Space2Vec" to compute spatial embeddings of locations in spatial data. The primary motivation of Space2Vec is to integrate representations of different spatial scales, which could potentially make the spatial representations more informative and meaningful as features. Space2Vec is trained as part of an encoder-decoder framework, where it encodes the spatial features of all the points that are fed as input to the framework.
SP:062377d8728ec1cc76ecd9a32deddaf1fcd58763
Adjustable Real-time Style Transfer
1 INTRODUCTION . Style transfer is a long-standing problem in computer vision with the goal of synthesizing new images by combining the content of one image with the style of another ( Efros & Freeman , 2001 ; Hertzmann , 1998 ; Ashikhmin , 2001 ) . Recently , neural style transfer techniques ( Gatys et al. , 2015 ; 2016 ; Johnson et al. , 2016 ; Ghiasi et al. , 2017 ; Li et al. , 2018 ; 2017b ) showed that the correlation between the features extracted from the trained deep neural networks is quite effective on capturing the visual styles and content that can be used for generating images similar in style and content . However , since the definition of similarity is inherently vague , the objective of style transfer is not well defined ( Dumoulin et al. , 2017 ) and one can imagine multiple stylized images from the same pair of content/style images . Existing real-time style transfer methods generate only one stylization for a given content/style pair and while the stylizations of different methods usually look distinct ( Sanakoyeu et al. , 2018 ; Huang & Belongie , 2017 ) , it is not possible to say that one stylization is better in all contexts since people react differently to images based on their background and situation . Hence , to get favored stylizations users must try different methods that is not satisfactory . It is more desirable to have a single model which can generate diverse results , but still similar in style and content , in real-time , by adjusting some input parameters . One other issue with the current methods is their high sensitivity to the hyper-parameters . More specifically , current real-time style transfer methods minimize a weighted sum of losses from different layers of a pre-trained image classification model ( Johnson et al. , 2016 ; Huang & Belongie , 2017 ) ( check Sec 3 for details ) and different weight sets can result into very different styles ( Figure 6 ) . However , one can only observe the effect of these weights in the final stylization by retraining the model with the new set of weights . Considering the fact that the ” optimal ” set of weights can be different for any pair of style/content ( Figure 4 ) and also the fact that this ” optimal ” truly doesn ’ t exist ( since the goodness of the output is a personal choice ) retraining the models over and over until the desired result is generated is not practical . The primary goal of this paper is to address these issues by providing a novel mechanism which allows for adjustment of the stylized image , in real-time and after training . To achieve this , we use an auxiliary network which accepts additional parameters as inputs and changes the style transfer process by adjusting the weights between multiple losses . We show that changing these parameters at inference time results to stylizations similar to the ones achievable by retraining the model with different hyperparameters . We also show that a random selection of these parameters at run-time can generate a random stylization . These solutions , enable the end user to be in full control of how the stylized image is being formed as well as having the capability of generating multiple stochastic stylized images from a fixed pair of style/content . The stochastic nature of our proposed method is most apparent when viewing the transition between random generations . Therefore , we highly encourage the reader to check the project website https : //goo.gl/PVWQ9K . 2 RELATED WORK . 
The strength of deep networks in style transfer was first demonstrated by Gatys et al . ( Gatys et al. , 2016 ) . While this method generates impressive results , it is too slow for real-time applications due to its optimization loop . Follow up works speed up this process by training feed-forward networks that can transfer style of a single style image ( Johnson et al. , 2016 ; Ulyanov et al. , 2016 ) or multiple styles ( Dumoulin et al. , 2017 ) . Other works introduced real-time methods to transfer style of arbitrary style image to an arbitrary content image ( Ghiasi et al. , 2017 ; Huang & Belongie , 2017 ) . Although , these methods can generate stylization for the arbitrary inputs , they can only produce one stylization for a single pair of content/style images . In the case that the user does not like the result , it is not possible to get a different result without retraining the network for a different set of hyperparameters . Our goal in this paper is to train a single network that user can get different stylization without retraining the network . Generating diverse results have been studied in multiple domains such as colorizations ( Deshpande et al. , 2017 ; Cao et al. , 2017 ) , image synthesis ( Chen & Koltun , 2017 ) , video prediction ( Babaeizadeh et al. , 2017 ; Lee et al. , 2018 ) , and domain transfer ( Huang et al. , 2018 ; Zhang , 2018 ) . Domain transfer is the most similar problem to the style transfer . Although we can generate multiple outputs from a given input image ( Huang et al. , 2018 ; Zhu et al. , 2017 ) , we need a collection of target or style images for training . Therefore we can not use it when we do not have a collection of similar styles . For instance , when we want to generate multiple stylizations for the Stary Night painting , it is hard to find different similar paintings . Style loss function is a crucial part of style transfer which affects the output stylization significantly . The most common style loss is Gram matrix which computes the second-order statistics of the feature activations ( Gatys et al. , 2016 ) , however many alternative losses have been introduced to measure distances between feature statistics of the style and stylized images such as correlation alignment loss ( Peng & Saenko , 2018 ) , histogram loss ( Risser et al. , 2017 ) , and MMD loss ( Li et al. , 2017a ) . More recent work ( Liu et al. , 2017 ) has used depth similarity of style and stylized images as a part of the loss . We demonstrate the success of our method using only Gram matrix ; however , our approach can be expanded to utilize other losses as well . To the best of our knowledge , the only previous work which generates multiple stylizations is ( Ulyanov et al. , 2017 ) which utilized Julesz ensemble to explicitly encourage diversity in stylizations . However their results are quite similar in style and only differ in minor details . A qualitative comparison in Figures 8,14 show that our proposed method is more effective in diverse stylization . 3 BACKGROUND . 3.1 STYLE TRANSFER USING DEEP NETWORKS . Style transfer can be formulated as generating a stylized image p which its content is similar to a given content image c and its style is close to another given style image s. The similarity in style can be vaguely defined as sharing the same spatial statistics in low-level features , while similarity in content is roughly having a close Euclidean distance in high-level features ( Ghiasi et al. , 2017 ) . 
These features are typically extracted from a pre-trained image classification network , commonly VGG-19 ( Simonyan & Zisserman , 2014 ) . The main idea here is that the features obtained by the image classifier contain information about the content of the input image while the correlation between these features represents its style . In order to increase the similarity between two images , Gatys et al . ( Gatys et al. , 2016 ) minimize the following distances between their extracted features : Llc ( p ) = ∣∣∣∣φl ( p ) − φl ( s ) ∣∣∣∣2 2 , Lls ( p ) = ∣∣∣∣G ( φl ( p ) ) −G ( φl ( s ) ) ∣∣∣∣2 F ( 1 ) where φl ( x ) is activation of a pre-trained classification network at layer l given the input image x , while Llc ( p ) and Lls ( p ) are content and style loss at layer l respectively . G ( φl ( p ) ) denotes the Gram matrix associated with φl ( p ) . The total loss is calculated as the weighted sum of losses across a set of content layers C and style layers S : Lc ( p ) = ∑ l∈C wlcLlc ( p ) , Ls ( p ) = ∑ l∈S wlsLls ( p ) ( 2 ) where wlc , w l s are hyper-parameters to adjust the contribution of each layer to the loss . Layers can be shared between C and S . These hyper-parameters have to be manually fine tuned through try and error and usually vary for different style images ( Figure 4 ) . Finally , the objective of style transfer can be defined as : min p ( Lc ( p ) + Ls ( p ) ) ( 3 ) This objective can be minimized by iterative gradient-based optimization methods starting from an initial p which usually is random noise or the content image itself . 3.2 REAL-TIME STYLE TRANSFER . Solving the objective in Equation 3 using an iterative method can be very slow and has to be repeated for any given pair of style/content image . A much faster method is to directly train a deep network T which maps a given content image c to a stylized image p ( Johnson et al. , 2016 ) . T is usually a feed-forward convolutional network ( parameterized by θ ) with residual connections between down- sampling and up-sampling layers ( Ruder et al. , 2018 ) and is trained on many content images using Equation 3 as the loss function : min θ ( Lc ( T ( c ) ) + Ls ( T ( c ) ) ) ( 4 ) The style image is assumed to be fixed and therefore a different network should be trained per style image . However , for a fixed style image , this method can generate stylized images in realtime ( Johnson et al. , 2016 ) . Recent methods ( Dumoulin et al. , 2017 ; Ghiasi et al. , 2017 ; Huang & Belongie , 2017 ) introduced real-time style transfer methods for multiple styles . But , these methods still generate only one stylization for a pair of style and content images . 4 PROPOSED METHOD This paper addresses the following issues in real-time feed-forward style transfer methods : 1 . The output of these models is sensitive to the hyper-parameters wlc and w l s and different weights significantly affect the generated stylized image as demonstrated in Figure 6 . Moreover , the ” optimal ” weights vary from one style image to another ( Figure 4 ) and finding a good set of weights should be repeated for each style image . Note that for each set of wlc and w l s the model has to be retrained that limits the practicality of style transfer models . 2 . Current methods generate a single stylized image given a content/style pair . While the stylizations of different methods usually look very distinct ( Sanakoyeu et al. , 2018 ) , it is not possible to say which stylization is better for every context since it is a matter of personal taste . 
To get a favored stylization, users may need to try different methods or train a network with different hyper-parameters, which is not satisfactory; ideally, the user should have the capability of getting different stylizations in real-time. We address these issues by conditioning the generated stylized image on additional input parameters, where each parameter controls the share of the loss from a corresponding layer. This solves problem (1), since one can adjust the contribution of each layer to the final stylized result after training and in real-time. Secondly, we address problem (2) by randomizing these parameters, which results in different stylizations.
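A minimal sketch of the per-layer losses from Section 3.1 together with run-time adjustable weights, in the spirit of the mechanism described here, is given below. The feature-extractor interface (dicts of layer activations), the small auxiliary network that turns user-supplied parameters into normalized loss weights, and all names are assumptions for illustration; the way the paper's transformer network is itself conditioned on these parameters is not reproduced. Note the content loss compares the stylized image against the content image, as is standard.

import torch
import torch.nn as nn

def gram(f):
    # f: (B, C, H, W) activations; returns (B, C, C) Gram matrices normalized by
    # the number of spatial positions.
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (h * w)

def transfer_loss(feats_p, feats_c, feats_s, w_c, w_s):
    # feats_*: dicts mapping layer name -> activations for the stylized image p,
    # the content image c, and the style image s; w_c / w_s: per-layer weights.
    l_content = sum(w_c[l] * ((feats_p[l] - feats_c[l]) ** 2).mean() for l in w_c)
    l_style = sum(w_s[l] * ((gram(feats_p[l]) - gram(feats_s[l])) ** 2).sum() for l in w_s)
    return l_content + l_style

class AdjustableWeights(nn.Module):
    # Hypothetical auxiliary network: maps user-controlled parameters alpha (one
    # per style layer, in [0, 1]) to normalized per-layer style-loss weights.
    # During training alpha is sampled at random; at inference the user sets it
    # directly to steer the stylization without retraining.
    def __init__(self, n_layers, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_layers, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_layers))

    def forward(self, alpha):
        return torch.softmax(self.mlp(alpha), dim=-1)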
The paper proposes a generative model for image style transfer in real time. In particular, compared to existing work, the proposed method is able to generate a series of transferred images instead of a single one, and, more importantly, users can adjust different parameters without re-training the network in order to control the synthesized output. The proposed method was evaluated on publicly available datasets and achieved convincing experimental results.
SP:4535803bbaeba4ee21bd85c05ff7ecea4fdbfe10
Adjustable Real-time Style Transfer
1 INTRODUCTION. Style transfer is a long-standing problem in computer vision with the goal of synthesizing new images by combining the content of one image with the style of another (Efros & Freeman, 2001; Hertzmann, 1998; Ashikhmin, 2001). Recently, neural style transfer techniques (Gatys et al., 2015; 2016; Johnson et al., 2016; Ghiasi et al., 2017; Li et al., 2018; 2017b) showed that the correlation between features extracted from trained deep neural networks is quite effective at capturing visual style and content, and can be used for generating images similar in style and content. However, since the definition of similarity is inherently vague, the objective of style transfer is not well defined (Dumoulin et al., 2017), and one can imagine multiple stylized images from the same pair of content/style images. Existing real-time style transfer methods generate only one stylization for a given content/style pair, and while the stylizations of different methods usually look distinct (Sanakoyeu et al., 2018; Huang & Belongie, 2017), it is not possible to say that one stylization is better in all contexts, since people react differently to images based on their background and situation. Hence, to get favored stylizations users must try different methods, which is not satisfactory. It is more desirable to have a single model which can generate diverse results, still similar in style and content, in real time, by adjusting some input parameters. Another issue with current methods is their high sensitivity to the hyper-parameters. More specifically, current real-time style transfer methods minimize a weighted sum of losses from different layers of a pre-trained image classification model (Johnson et al., 2016; Huang & Belongie, 2017) (see Sec. 3 for details), and different weight sets can result in very different styles (Figure 6). However, one can only observe the effect of these weights in the final stylization by retraining the model with the new set of weights. Considering that the "optimal" set of weights can be different for any pair of style/content images (Figure 4), and also that this "optimal" does not truly exist (since the goodness of the output is a personal choice), retraining the model over and over until the desired result is generated is not practical. The primary goal of this paper is to address these issues by providing a novel mechanism which allows adjustment of the stylized image in real time and after training. To achieve this, we use an auxiliary network which accepts additional parameters as inputs and changes the style transfer process by adjusting the weights between multiple losses. We show that changing these parameters at inference time results in stylizations similar to the ones achievable by retraining the model with different hyper-parameters. We also show that a random selection of these parameters at run time can generate a random stylization. These solutions enable the end user to be in full control of how the stylized image is formed, as well as to generate multiple stochastic stylized images from a fixed pair of style/content images. The stochastic nature of our proposed method is most apparent when viewing the transition between random generations. Therefore, we highly encourage the reader to check the project website https://goo.gl/PVWQ9K. 2 RELATED WORK.
The strength of deep networks in style transfer was first demonstrated by Gatys et al. (Gatys et al., 2016). While this method generates impressive results, it is too slow for real-time applications due to its optimization loop. Follow-up works speed up this process by training feed-forward networks that can transfer the style of a single style image (Johnson et al., 2016; Ulyanov et al., 2016) or multiple styles (Dumoulin et al., 2017). Other works introduced real-time methods to transfer the style of an arbitrary style image to an arbitrary content image (Ghiasi et al., 2017; Huang & Belongie, 2017). Although these methods can generate stylizations for arbitrary inputs, they can only produce one stylization for a single pair of content/style images. In the case that the user does not like the result, it is not possible to get a different result without retraining the network with a different set of hyper-parameters. Our goal in this paper is to train a single network from which the user can get different stylizations without retraining. Generating diverse results has been studied in multiple domains such as colorization (Deshpande et al., 2017; Cao et al., 2017), image synthesis (Chen & Koltun, 2017), video prediction (Babaeizadeh et al., 2017; Lee et al., 2018), and domain transfer (Huang et al., 2018; Zhang, 2018). Domain transfer is the problem most similar to style transfer. Although such methods can generate multiple outputs from a given input image (Huang et al., 2018; Zhu et al., 2017), they need a collection of target or style images for training. Therefore they can not be used when a collection of similar styles is not available. For instance, when we want to generate multiple stylizations for the Starry Night painting, it is hard to find different but similar paintings. The style loss function is a crucial part of style transfer which affects the output stylization significantly. The most common style loss is the Gram matrix loss, which computes the second-order statistics of the feature activations (Gatys et al., 2016); however, many alternative losses have been introduced to measure distances between feature statistics of the style and stylized images, such as correlation alignment loss (Peng & Saenko, 2018), histogram loss (Risser et al., 2017), and MMD loss (Li et al., 2017a). More recent work (Liu et al., 2017) has used depth similarity of style and stylized images as a part of the loss. We demonstrate the success of our method using only the Gram matrix loss; however, our approach can be expanded to utilize other losses as well. To the best of our knowledge, the only previous work which generates multiple stylizations is (Ulyanov et al., 2017), which utilized the Julesz ensemble to explicitly encourage diversity in stylizations. However, their results are quite similar in style and only differ in minor details. A qualitative comparison in Figures 8 and 14 shows that our proposed method is more effective for diverse stylization. 3 BACKGROUND. 3.1 STYLE TRANSFER USING DEEP NETWORKS. Style transfer can be formulated as generating a stylized image p whose content is similar to a given content image c and whose style is close to another given style image s. The similarity in style can be vaguely defined as sharing the same spatial statistics in low-level features, while similarity in content is roughly having a close Euclidean distance in high-level features (Ghiasi et al., 2017).
The paper presents an approach for style transfer with controllable parameters. The controllable parameters correspond to the weights associated with the "style losses" of ordinary style transfer models (the distance between Gram matrices of the generated vs. style image at specific layers of a network). The authors propose to learn a single architecture that takes these parameters as input to generate an image that resembles what would be generated by optimizing directly with these parameters. Examples of transfer and of the effect of these parameters are given. A quantitative evaluation shows that changing the parameters of the new network has the effect of reducing the loss at the desired layers.
SP:4535803bbaeba4ee21bd85c05ff7ecea4fdbfe10
Exploration Based Language Learning for Text-Based Games
1 INTRODUCTION. Text-based games became popular in the mid 80s with the game series Zork (Anderson & Galley, 1985), resulting in many different text-based games being produced and published (Spaceman, 2019). These games use a plain text description of the environment, and the player has to interact with them by writing natural-language commands. Recently, there has been a growing interest in developing agents that can automatically solve text-based games (Côté et al., 2018) by interacting with them. These settings challenge the ability of an artificial agent to understand natural language and common sense knowledge, and to develop the ability to interact with environments using language (Luketina et al., 2019; Branavan et al., 2012). Since the actions in these games are commands in natural language form, the major obstacle is the extremely large action space of the agent, which leads to a combinatorially large exploration problem. In fact, with a vocabulary of $N$ words (e.g., 20K) and the possibility of producing sentences with at most $m$ words (e.g., 7 words), the total number of actions is $O(N^m)$ (e.g., $20\mathrm{K}^7 \approx 1.28 \times 10^{30}$). To avoid this large action space, several existing solutions focus on simpler text-based games with very small vocabularies where the action space is constrained to verb-object pairs (DePristo & Zubek, 2001; Narasimhan et al., 2015; Yuan et al., 2018; Zelinka, 2018). Moreover, many existing works rely on using predetermined sets of admissible actions (He et al., 2015; Tessler et al., 2019; Zahavy et al., 2018). However, a more ideal, and still under-explored, alternative would be an agent that can operate in the full, unconstrained action space of natural language and that can systematically generalize to new text-based games with no or few interactions with the environment. To address this challenge, we propose to use the idea behind the recently proposed Go-Explore (Ecoffet et al., 2019) algorithm. Specifically, we propose to first extract high-reward trajectories of states and actions in the game using the exploration methodology proposed in Go-Explore, and then train a policy using a Seq2Seq (Sutskever et al., 2014) model that maps observations to actions, in an imitation learning fashion. To show the effectiveness of our proposed methodology, we first benchmark the exploration ability of Go-Explore on the family of text-based games called CoinCollector (Yuan et al., 2018). Then we use the 4,440 games of "First TextWorld Problems" (Côté, 2018), which are generated using the machinery introduced by Côté et al. (2018), to show the generalization ability of our proposed methodology. In the former experiment we show that Go-Explore finds winning trajectories faster than existing solutions, and in the latter we show that training a Seq2Seq model on the trajectories found by Go-Explore results in stronger generalization, as suggested by the stronger performance on unseen games, compared to existing competitive baselines (He et al., 2015; Narasimhan et al., 2015). Reinforcement Learning Based Approaches for Text-Based Games: Among reinforcement learning based efforts to solve text-based games, two approaches are prominent. The first approach treats an action as a sentence of a fixed number of words and associates a separate Q-function (Watkins, 1989; Mnih et al., 2015) with each word position in this sentence.
This method was demonstrated with two-word sentences consisting of a verb-object pair (e.g., take apple) (DePristo & Zubek, 2001; Narasimhan et al., 2015; Yuan et al., 2018; Zelinka, 2018; Fulda et al., 2017). In the second approach, one Q-function that scores all possible actions (i.e., sentences) is learned and used to play the game (He et al., 2015; Tessler et al., 2019; Zahavy et al., 2018). The first approach is quite limiting, since a fixed number of words must be selected in advance and no temporal dependency is enforced between words (e.g., lack of language modelling). In the second approach, on the other hand, the number of possible actions can become exponentially large if the admissible actions (a predetermined low-cardinality set of actions that the agent can take) are not provided to the agent. A possible solution to this issue has been proposed by Tao et al. (2018), where a hierarchical pointer-generator is used to first produce the set of admissible actions given the observation, and subsequently one element of this set is chosen as the action for that observation. However, in our experiments we show that even in settings where the true set of admissible actions is provided by the environment, a Q-scorer (He et al., 2015) does not generalize well in our setting (Section 5.2, Zero-Shot), and we would expect performance to degrade even further if the admissible actions were generated by a separate model. Less common are models that learn to reduce a large set of actions into a smaller set of admissible actions, either by eliminating actions (Zahavy et al., 2018) or by compressing them in a latent space (Tessler et al., 2019). Exploration in Reinforcement Learning: In most text-based games rewards are sparse, since the size of the action space makes the probability of observing a reward extremely low when taking only random actions. Sparse-reward environments are particularly challenging for reinforcement learning, as they require longer-term planning. Many exploration-based solutions have been proposed to address the challenges associated with reward sparsity. Among these exploration approaches are novelty search (Lehman & Stanley, 2008; 2011; Conti et al., 2018; Achiam & Sastry, 2017; Burda et al., 2018), intrinsic motivation (Schmidhuber, 1991b; Oudeyer & Kaplan, 2009; Barto, 2013), and curiosity-based rewards (Schmidhuber, 2006; 1991a; Pathak et al., 2017). For text-based games, exploration methods have been studied by Yuan et al. (2018), where the authors showed the effectiveness of the episodic discovery bonus (Gershman & Daw, 2017) in environments with sparse rewards. This exploration method can only be applied in games with very small action and state spaces, since its counting methods rely on the state in its explicit raw form. 2 METHODOLOGY. Go-Explore (Ecoffet et al., 2019) differs from the exploration-based algorithms discussed above in that it explicitly keeps track of under-explored areas of the state space and in that it utilizes the determinism of the simulator in order to return to those states, allowing it to explore sparse-reward environments in a sample-efficient way (see Ecoffet et al. (2019) as well as Section 4.1). For the experiments in this paper we mainly focus on the final performance of our policy, not how that policy is trained, thus making Go-Explore a suitable algorithm for our experiments. Go-Explore is composed of two phases.
In phase 1 (also referred to as the "exploration" phase) the algorithm explores the state space by keeping track of previously visited states in an archive. During this phase, instead of resuming exploration from scratch, the algorithm starts exploring from promising states in the archive to find high-performing trajectories. In phase 2 (also referred to as the "robustification" phase, which in our variant we call "generalization") the algorithm trains a policy using the trajectories found in phase 1. Following this framework, which is also shown in Figure 3 (Appendix A.2), we define the Go-Explore phases for text-based games. Let us first define text-based games using the same notation as Yuan et al. (2018). A text-based game can be framed as a discrete-time Partially Observable Markov Decision Process (POMDP) (Kaelbling et al., 1998) defined by $(S, T, A, \Omega, O, R)$, where: $S$ is the set of environment states; $T$ is the state transition function that defines the next-state probability, i.e., $T(s_{t+1} \mid a_t, s_t)\ \forall s_t \in S$; $A$ is the set of actions, which in our case is all possible sequences of tokens; $\Omega$ is the set of observations, i.e., the text observed by the agent every time it has to take an action in the game (i.e., a dialogue turn), which is controlled by the conditional observation probability $O$, i.e., $O(o_t \mid s_t, a_{t-1})$; and, finally, $R$ is the reward function, i.e., $r = R(s, a)$. Let us also define the observation $o_t \in \Omega$ and the action $a_t \in A$. Text-based games provide some information in plain text at each turn and, without loss of generality, we define an observation $o_t$ as the sequence of tokens $\{o_t^0, \cdots, o_t^n\}$ that form that text. Similarly, we define the tokens of an action $a_t$ as the sequence $\{a_t^0, \cdots, a_t^m\}$. Furthermore, we define the set of admissible actions $A_t \subseteq A$ as $A_t = \{a_0, \cdots, a_z\}$, where each $a_i$, which is a sequence of tokens, is grammatically correct and admissible with reference to the observation $o_t$. 2.1 PHASE 1: EXPLORATION. In phase 1, Go-Explore builds an archive of cells, where a cell is defined as a set of observations that are mapped to the same, discrete representation by some mapping function $f(x)$. Each cell is associated with meta-data including the trajectory towards that cell, the length of that trajectory, and the cumulative reward of that trajectory. New cells are added to the archive when they are encountered in the environment, and existing cells are updated with new meta-data when the trajectory towards that cell is higher scoring, or equal scoring but shorter. At each iteration the algorithm selects a cell from this archive based on the meta-data of the cell (e.g., the accumulated reward) and starts to randomly explore from the end of the trajectory associated with the selected cell. Phase 1 requires three components: the way observations are embedded into cell representations, the cell selection, and the way actions are randomly selected when exploring from a selected cell. In our variant of the algorithm, $f(x)$ is defined as follows: given an observation, we compute the word embedding for each token in this observation, sum these embeddings, and then concatenate this sum with the current cumulative reward to construct the cell representation. The resulting vectors are subsequently compressed and discretized by binning them in order to map similar observations to the same cell.
This way, the cell representation, which is the key of the archive, incorporates information about the current observation of the game. Adding the current cumulative reward to the cell representation is new to our Go-Explore variant, as the original algorithm only used down-scaled image pixels. It turned out to be very effective in increasing the speed at which high-reward trajectories are discovered. In phase 1, we restrict the action space to the set of admissible actions $A_t$ that are provided by the game at every step of the game.¹ This too is particularly important for the random search to find a high-reward trajectory faster. Finally, we denote the trajectory found in phase 1 for game $g$ as $\mathcal{T}_g = [(o_0, a_0, r_0), \cdots, (o_t, a_t, r_t)]$. ¹ Note that the final goal is to generalize to test environments where admissible actions are not available. The assumption that admissible actions are available at training time holds in cases where we build the training environment for an RL agent (e.g., a hand-crafted dialogue system), and a system trained in such an environment can be practically applied as long as the system does not rely on such information at test time. Thus, we assume that these admissible commands are not available at test time.
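A minimal sketch of the cell mapping and the phase-1 exploration loop described above is given below. It assumes a pre-computed word-embedding table and a deterministic environment interface (reset, replay, step, admissible_actions) that is not part of the paper; the bin width is an illustrative hyper-parameter, and the paper's meta-data-based cell selection is simplified to uniform sampling for brevity.

```python
import random
import numpy as np

def cell_key(observation_tokens, cumulative_reward, embed, bin_width=0.1):
    """Map an observation (list of token strings) plus the cumulative reward
    to a discrete, hashable cell key."""
    vec = np.sum([embed[t] for t in observation_tokens], axis=0)   # sum of word embeddings
    vec = np.concatenate([vec, [cumulative_reward]])               # append cumulative reward
    return tuple(np.floor(vec / bin_width).astype(int))            # bin and use as archive key

def exploration_phase(env, embed, n_iters=1000, rollout_len=30):
    archive = {}                                                   # cell key -> best meta-data
    obs = env.reset()
    archive[cell_key(obs, 0.0, embed)] = {"traj": [], "reward": 0.0, "length": 0}
    for _ in range(n_iters):
        start = random.choice(list(archive.values()))              # pick a cell to explore from
        obs, reward_sum, traj = env.replay(start["traj"])          # deterministic replay to that cell
        for _ in range(rollout_len):
            action = random.choice(env.admissible_actions(obs))    # restricted random exploration
            obs, r, done = env.step(action)
            reward_sum += r
            traj = traj + [(obs, action, r)]
            key = cell_key(obs, reward_sum, embed)
            best = archive.get(key)
            if (best is None or reward_sum > best["reward"]
                    or (reward_sum == best["reward"] and len(traj) < best["length"])):
                archive[key] = {"traj": traj, "reward": reward_sum, "length": len(traj)}
            if done:
                break
    return archive   # the highest-reward trajectory per game becomes T_g for phase 2
```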
This paper applies the Go-Explore algorithm to the domain of text-based games and shows significant performance gains on TextWorld's Coin Collector and Cooking sets of games. Additionally, the authors evaluate 3 different training paradigms: (1) training on single games, (2) training jointly on multiple games, and (3) training on a train set of games and testing on a held-out set of games. Results show that Go-Explore's policies outperform prior methods including DRRN and LSTM-DQN. In addition to better asymptotic performance, Go-Explore is also more efficient in terms of the number of environment interactions needed to reach a good policy.
SP:c578bd6652d1dcd0e280d587ffc973dddf3146c6
This paper considers the task of training an agent to play text-based computer games. One of the key challenges is the high-dimensional action space in these games, which poses a problem for many current methods. The authors propose to learn an LSTM-based decoder to output the action $a_t$ by greedily predicting one word at a time. They achieve this by training a sequence-to-sequence model on trajectories collected by running the game using a previously proposed exploration method (Go-Explore). While the results are promising, there might be limited novelty beyond training a sequence-to-sequence model on pre-collected trajectories. Further, the experiments are missing key elements in terms of proper comparison to baselines.
SP:c578bd6652d1dcd0e280d587ffc973dddf3146c6
Limitations for Learning from Point Clouds
In this paper we prove new universal approximation theorems for deep learning on point clouds that do not assume fixed cardinality. We do this by first generalizing the classical universal approximation theorem to general compact Hausdorff spaces and then applying this to the permutation-invariant architectures presented in PointNet (Qi et al.) and Deep Sets (Zaheer et al.). Moreover, though both architectures operate on the same domain, we show that the constant functions are the only functions they can mutually uniformly approximate. In particular, DeepSets architectures can uniformly approximate the center-of-mass function but not the diameter function, while it is the other way around for PointNet. Additionally, even when the point clouds are limited to at most k points, PointNet can not uniformly approximate center-of-mass, and we obtain explicit error bounds and a method to produce geometrically derived adversarial examples. 1 INTRODUCTION. Recently, architectures proposed in PointNet (Qi et al., 2017) and Deep Sets (Zaheer et al., 2017) have allowed for the direct processing of point clouds within a deep learning framework. These methods produce outputs that are permutation-invariant with respect to the member points and work for point clouds of arbitrarily large cardinality. A common source of such data is LIDAR measurements from autonomous vehicles. Zaheer et al. (2017) also present a permutation-equivariant architecture which we do not discuss here. Each of these works provides its own universal approximation theorem (UAT) to support the empirical success of its architecture. However, both results assume the cardinality of the point cloud is fixed to some size n. In this work we refine these results, remove the cardinality limitation, use weaker architecture assumptions, and arrive at three main results which can be summarized roughly as follows (assuming unrestricted finite cardinality for the input point clouds): 1) PointNet (DeepSets) architectures can uniformly approximate real-valued functions that are uniformly continuous with respect to the Hausdorff (Wasserstein) metric, and nothing else (Theorem 3.4). 2) Only the constant functions can be uniformly approximated by both architectures. In particular, PointNet architectures can uniformly approximate the diameter function but DeepSets architectures can not. Conversely, DeepSets architectures can uniformly approximate the center-of-mass function but PointNet architectures can not (Theorem 4.1). 3) We prove explicit error lower bounds and produce adversarial examples to show that even when limited to point clouds of size k, PointNet can not uniformly approximate center-of-mass (Theorem 4.2). To do this we extend the many universal approximation results for feed-forward networks (Cybenko, 1989; Hornik et al., 1989; Leshno et al., 1993; Stinchcombe, 1999) to the abstract setting of general compact Hausdorff spaces. We then find appropriate compact metric spaces over which PointNet and DeepSets architectures can be easily analyzed, and finally we observe the resulting consequences in the original setting of interest, i.e., point clouds. 2 PRELIMINARIES. 2.1 POINTNET AND DEEPSETS ARCHITECTURES. In practice, the implementations of the architectures presented in PointNet and Deep Sets can involve many additional tricks, but the essential ideas are quite simple. We do, however, make a small modification to the Deep Sets model.
For $A \subseteq \mathbb{R}^n$ of cardinality $|A| < \infty$, Qi et al. (2017) and Zaheer et al. (2017) suggest scalar-output neural networks of the form $F_{PN}(A) = \rho\left(\max_{a \in A} \varphi(a)\right)$ and $F_{DS}(A) = \rho\left(b + \frac{1}{|A|} \sum_{a \in A} \varphi(a)\right)$, respectively. Here $\varphi : \mathbb{R}^n \to \mathbb{R}^m$ creates features for each point in $A$, then a symmetric operation is applied, and then $\rho : \mathbb{R}^m \to \mathbb{R}$ combines these features into a scalar output (here max is the component-wise maximum). In practice, we need both $\rho$ and $\varphi$ to be neural networks. Note that because we use a symmetric operation before $\rho$, the output will not depend on the ordering of points in the point cloud, and because the max and sum operations scale to arbitrary finite cardinalities, the size of the point cloud is not an issue. The original model in Deep Sets did not have a bias term $b$ and used a sum instead of the averaging we use here. This change will help us later in our theoretical analysis. It will help to introduce some simplifying notation. Let $\mathcal{F}(\Omega)$ denote the set of all nonempty finite subsets of a set $\Omega$ (i.e., point clouds in $\Omega$), $\mathcal{F}_{\leq k}(\Omega)$ the set of nonempty subsets of size $\leq k$, and $\mathcal{F}_k(\Omega)$ the set of $k$-point subsets. Now consider $\Omega \subseteq \mathbb{R}^N$ and define $\mathrm{max}_f, \mathrm{ave}_{f,b} : \mathcal{F}(\Omega) \to \mathbb{R}$ given by $\mathrm{max}_f(A) = \max_{a \in A} f(a)$ and $\mathrm{ave}_{f,b}(A) = b + \frac{1}{|A|} \sum_{a \in A} f(a)$ respectively. We make sense of this in the natural way if we use vector-valued $f$ and $b$ by operating component-wise. We call these operations max neurons and biased-averaging neurons respectively. Once again letting $\rho$ and $\varphi$ be neural networks, $F_{PN} = \rho \circ \mathrm{max}_\varphi$ and $F_{DS} = \rho \circ \mathrm{ave}_{\varphi,b}$ will be the general form of what we call the PointNet and DeepSets architectures (resp.) in this paper. Some natural questions are: 1) is there a topology for $\mathcal{F}(\Omega)$ that makes these architectures continuous, 2) how expressive are these approaches, and 3) how deep is deep enough for function approximation? 2.2 FUNCTION SPACES AND UNIFORM APPROXIMATION. From now on, we only consider $\mathbb{R}$-valued functions unless otherwise stated. Let $B(A)$ be the set of bounded functions on a set $A$, let $C(X)$ and $C_b(X)$ be the sets of continuous and bounded continuous functions on a topological space $X$ (respectively), and let $U(M)$ and $U_b(M)$ be the uniformly continuous and bounded uniformly continuous functions on a metric space $(M, d)$ (respectively). We equip all of these with the uniform norm, i.e., $|||f|||_A = \sup_{a \in A} |f(a)|$ (we reserve $\lVert \cdot \rVert$ for the Euclidean norm). This makes them all normed spaces, with $B(A)$, $C_b(X)$ and $U_b(M)$ additionally being Banach spaces. Moreover, if $X$ is compact and $(M, d)$ has compact metric completion, then $C(X) = C_b(X)$ and $U(M) = U_b(M)$, and hence these are also Banach spaces. For background see Rudin (2006). If given an injective map $i : A \to X$, then we say that $\varphi : A \to \mathbb{R}$ (uniquely) continuously extends to $X$ if there is a (unique) $\tilde{\varphi} \in C(X)$ such that $\tilde{\varphi} \circ i = \varphi$. We say a family of functions $\mathcal{N}$ on $A$ (uniquely) continuously extends to $X$ if every $\varphi \in \mathcal{N}$ (uniquely) continuously extends to $X$. We will make use of the following lemma, which is proved in the appendix. Lemma 2.1. Let $\mathcal{N} \subseteq B(D)$ where $D$ is a dense subset of a compact metric space $(X, d)$. Suppose $\mathcal{N}$ has a continuous extension to $X$, denoted by $\mathcal{N}' \subseteq C(X)$, which is dense. Then the uniform closure of $\mathcal{N}$ in $B(D)$ is $r(C(X)) = U(D)$, where $r : C(X) \to C_b(D)$ is the domain restriction map.
Letting $D = \mathcal{F}(\Omega)$, this lemma suggests the following plan of attack: find a compact metric space $(X, d)$ in which we can realize $\mathcal{F}(\Omega)$ as a dense subset, and hope that our class of neural networks $\mathcal{N}$ continuously extends to a dense subset of $C(X)$. If we can do that, then we know the uniform closure of our class of neural networks is precisely the uniformly continuous functions on $\mathcal{F}(\Omega)$ with respect to the metric inherited from $X$. This motivates the next subsection. 2.3 METRICS ON THE SPACE OF POINT CLOUDS. From now on we will assume $(\Omega, d)$ is a compact metric space, and when $\Omega \subseteq \mathbb{R}^n$ it will be compact and equipped with the Euclidean metric. Let $\mathcal{K}(\Omega)$ denote the set of all compact subsets of $\Omega$ and $\mathcal{P}(\Omega)$ denote the set of all Borel probability measures on $\Omega$. The Hausdorff metric $d_H$ (Munkres, 2000) is a natural metric for $\mathcal{K}(\Omega)$, and the 1-Wasserstein metric $d_W$ (Villani, 2009) (also called the earth-mover distance) is a natural metric for $\mathcal{P}(\Omega)$. With these metrics, $\mathcal{K}(\Omega)$ and $\mathcal{P}(\Omega)$ become compact metric spaces of their own. From now on we will assume these two spaces are always equipped with the aforementioned metrics. We also briefly mention $\mathcal{M}(\Omega)$, the Banach space of finite signed regular Borel measures on $\Omega$. By the Riesz-Markov theorem it is the topological dual space of $C(\Omega)$. Of interest to us is that $\mathcal{P}(\Omega) \subseteq \mathcal{M}(\Omega)$ and that the weak-* topology on $\mathcal{P}(\Omega)$ coincides with the topology induced by $d_W$. This means that $d_W(\mu_n, \mu) \to 0$ iff $\int f \, d\mu_n \to \int f \, d\mu$ for all $f \in C(\Omega)$. Next, note that $\mathcal{F}(\Omega) \subseteq \mathcal{K}(\Omega)$ and let $i_K$ denote the natural inclusion map. We can also define an injective map $i_P : \mathcal{F}(\Omega) \to \mathcal{P}(\Omega)$ by mapping $A \in \mathcal{F}(\Omega)$ to its associated empirical measure $i_P(A) = \mu_A = \frac{1}{|A|} \sum_{a \in A} \delta_a \in \mathcal{P}(\Omega)$, where $\delta_a$ is the Dirac delta measure supported at $a$. The injective maps $i_K$ and $i_P$ allow us to induce the $d_H$ and $d_W$ metrics on $\mathcal{F}(\Omega)$. We will denote the metrized versions by $\mathcal{F}_H(\Omega)$ and $\mathcal{F}_W(\Omega)$ respectively, and use the same convention for $\mathcal{F}^k_H(\Omega)$ and $\mathcal{F}^k_W(\Omega)$. Another important fact is that $i_K$ and $i_P$ embed $\mathcal{F}(\Omega)$ as a dense subset of $\mathcal{K}(\Omega)$ and $\mathcal{P}(\Omega)$. The former follows from compactness of the members of $\mathcal{K}(\Omega)$; to see why the latter is true see Fournier & Guillin (2015); Villani (2009). For $f \in C(\Omega)$ and $b \in \mathbb{R}$, define $\mathrm{Max}_f : \mathcal{K}(\Omega) \to \mathbb{R}$ and $\mathrm{Ave}_{f,b} : \mathcal{P}(\Omega) \to \mathbb{R}$ as the functions $\mathrm{Max}_f(K) = \max_{x \in K} f(x)$ and $\mathrm{Ave}_{f,b}(\mu) = b + \int_\Omega f \, d\mu$. Lemma 2.2. Let $(\Omega, d)$ be compact, $f \in C(\Omega)$, and $b \in \mathbb{R}$. Then $\mathrm{Max}_f \in C(\mathcal{K}(\Omega))$ and $\mathrm{Ave}_{f,b} \in C(\mathcal{P}(\Omega))$, and $\mathrm{Max}_f \circ i_K = \mathrm{max}_f$ and $\mathrm{Ave}_{f,b} \circ i_P = \mathrm{ave}_{f,b}$. As a consequence, PointNet and DeepSets are uniformly continuous on $\mathcal{F}_H(\Omega)$ and $\mathcal{F}_W(\Omega)$ respectively. This lemma (proved in the appendix) tells us that the max neurons and biased-averaging neurons continuously extend to $\mathcal{K}(\Omega)$ and $\mathcal{P}(\Omega)$, and hence so do PointNet and DeepSets architectures (since we merely compose with the continuous $\rho$ afterwards). Thus, we will be able to analyze such architectures as continuous functions on compact metric spaces, which is mathematically a much nicer problem than studying them as set-theoretic functions on an un-metrized $\mathcal{F}(\Omega)$.
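For concreteness, a minimal PyTorch sketch of the two scalar-output architectures $F_{PN}$ and $F_{DS}$ defined above, with small MLPs standing in for $\varphi$ and $\rho$; the layer sizes are arbitrary choices and not tied to either paper.

```python
import torch
import torch.nn as nn

class SetNet(nn.Module):
    """rho(pool({phi(a) : a in A})) with pool = component-wise max (PointNet)
    or biased average (the modified DeepSets model used in this paper)."""
    def __init__(self, in_dim, feat_dim=64, pool="max"):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))
        self.rho = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, 1))
        self.bias = nn.Parameter(torch.zeros(feat_dim))   # the b in F_DS
        self.pool = pool

    def forward(self, A):                      # A: (num_points, in_dim), any num_points >= 1
        feats = self.phi(A)
        if self.pool == "max":                 # PointNet: max neuron
            pooled = feats.max(dim=0).values
        else:                                  # DeepSets: biased-averaging neuron
            pooled = self.bias + feats.mean(dim=0)
        return self.rho(pooled)
```

Because the pooling step is symmetric and independent of the number of rows, the output is unchanged under permutations of the points and is defined for any finite cardinality. The two metrics restricted to finite point clouds are also easy to compute directly; a small numpy sketch follows, where the 1-Wasserstein case is shown only for equal-size one-dimensional clouds (there it reduces to matching sorted points) and the example values are illustrative.

```python
import numpy as np

def hausdorff(A, B):
    """d_H between finite point clouds A (n, d) and B (m, d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())         # max of the two directed distances

def wasserstein_1d(A, B):
    """d_W between the empirical measures of two equal-size 1-D clouds."""
    return np.mean(np.abs(np.sort(A) - np.sort(B)))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 2.0]])
print(hausdorff(A, B))   # ~2.06, dominated by the single far point (0.5, 2.0)
```

The example hints at the separation the paper exploits: adding one far-away point moves a cloud a long way in $d_H$, while it moves the cloud's empirical measure only a little in $d_W$ because that point carries only $1/|A|$ of the mass.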
PointNet (Qi et al., 2017) and Deep Sets (Zaheer et al., 2017) have made it possible to use deep architectures that take point clouds as inputs, taking into account the invariance to the ordering of points. However, existing results on their approximation abilities are limited to fixed cardinalities. This paper removes the cardinality limitation and gives two kinds of results:
SP:048d4b0525787b7a697c5608f0dd20ef84ebe339
This work examines the fundamental properties of two popular architectures -- PointNet and DeepSets -- for processing point clouds (and other unordered sets). The authors provide a new universal approximation theorem on real-valued functions that doesn't require the assumption of a fixed cardinality of the input set. They further provide examples of functions that can't be mutually approximated by PointNets and DeepSets.
SP:048d4b0525787b7a697c5608f0dd20ef84ebe339
Extreme Values are Accurate and Robust in Deep Networks
1 INTRODUCTION . Convolutional neural networks (CNNs) have evolved very fast ever since AlexNet (Krizhevsky & Hinton, 2012) made a breakthrough on the ImageNet image classification challenge (Deng et al., 2009) in 2012. Various network architectures have been proposed to further boost classification performance since then, including VGGNet (Simonyan & Zisserman, 2015), GoogleNet (Szegedy et al., 2015), ResNet (He et al., 2016), DenseNet (Huang et al., 2017) and SENet (Hu et al., 2018), etc. Recently, network architecture search has even been introduced to automatically learn better network architectures (Zoph & Le, 2017; Liu et al., 2018). However, state-of-the-art CNNs are challenged on their robustness, especially their vulnerability to adversarial attacks based on small, human-imperceptible modifications of the input (Szegedy et al., 2014; Goodfellow et al., 2015). Su et al. (2018) thoroughly study the robustness of 18 well-known ImageNet models using multiple metrics, and reveal that adversarial examples are widely existent. Many methods have been proposed to improve network robustness, which can be roughly categorized into three perspectives: (1) modifying the input or intermediate features by transformation (Guo et al., 2018), denoising (Liao et al., 2018; Jia et al., 2019), or generative models (Samangouei et al., 2018; Song et al., 2018); (2) modifying training by changing loss functions (Wong & Kolter, 2018; Elsayed et al., 2018; Zhang et al., 2019), network distillation (Papernot et al., 2016), or adversarial training (Goodfellow et al., 2015; Tramer et al., 2018); (3) designing robust network architectures (Xie et al., 2019; Svoboda et al., 2019; Nayebi & Ganguli, 2017) and possible combinations of these basic categories. For more details of the current status, please refer to a recent survey (Akhtar & Mian, 2018). Although it is known that adversarial examples are widely existent (Su et al., 2018), some fundamental questions are still far from well studied, such as what causes adversarial vulnerability and how different factors impact performance. One of the interesting findings in (Su et al., 2018) is that model architecture is a more critical factor for network robustness than model size (e.g., number of layers). Some recent works have started to explore this deeper nature. For instance, both (Geirhos et al., 2019; Baker et al., 2018) show that CNNs are trained to be strongly biased towards textures, so that they do not distinguish object contours from other local or even noisy edges and thus perform poorly on shape-dominated object instances. In contrast, there is no statistical difference in human behavior between texture-rich objects and globally shape-dominated objects in psychophysical trials. Ilyas et al. (2019) further analyze and show that deep convolutional features can be categorized into robust and non-robust features, and that non-robust features may even account for good generalization. However, non-robust features are not expected to have good model interpretability. It is thus an interesting topic to disentangle robust and non-robust features with certain kinds of human priors in the network design or training process. In fact, human priors have been extensively used in hand-crafted robust visual features like SIFT (Lowe, 2004).
SIFT detects scale-space (Lindeberg, 1994) extrema from input images, and selects stable extrema to build robust descriptors with refined location and orientation, which achieved great success in many matching- and recognition-based vision tasks before CNNs were reborn in 2012 (Krizhevsky & Hinton, 2012). The scale-space extrema are efficiently implemented by using a difference-of-Gaussian (DoG) function to search over all scales and image locations, while the DoG operator is believed to biologically mimic the neural processing in the retina of the eye (Young, 1987). Unfortunately, there is (at least explicitly) no such scale-space extrema operation in existing CNNs. Our motivation is to study the possibility of leveraging the good properties of SIFT to renovate CNN architectures towards better accuracy and robustness. In this paper, we borrow the scale-space extrema idea from SIFT, and propose extreme value preserving networks (EVPNet) to separate robust features from non-robust ones, with three novel architecture components to model the extreme values: (1) parametric DoG (pDoG) to extract extreme values in scale-space for deep networks, (2) truncated ReLU (tReLU) to suppress noise or non-stable extrema, and (3) a projected normalization layer (PNL) to mimic PCA-SIFT (Ke et al., 2004) like feature normalization. pDoG and tReLU are combined into one block named EVPConv, which can be used to replace all k × k (k > 1) conv-layers in existing CNNs. We conduct comprehensive experiments and ablation studies to verify the effectiveness of each component and the proposed EVPNet. Figure 1 illustrates a comparison of responses for standard convolution + ReLU and EVPConv in ResNet-50 trained on ImageNet, and shows that the proposed EVPConv produces less noise and stronger responses around object boundaries than standard convolution + ReLU, which demonstrates the capability of EVPConv to separate robust features from non-robust ones. Our major contributions are: • To the best of our knowledge, we are the first to explicitly separate robust features from non-robust ones in deep neural networks from an architecture design perspective. • We propose three novel network architecture components to model extreme values in deep networks, including parametric DoG, truncated ReLU, and a projected normalization layer, and verify their effectiveness through comprehensive ablation studies. • We propose extreme value preserving networks (EVPNets) to combine these three novel components, which are demonstrated to be not only more accurate, but also more robust to a set of adversarial attacks (FGSM, PGD, etc.) even for clean models without adversarial training.
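As a small illustration of the scale-space DoG idea described above (and formalized as Eqs. (1)-(2) in the Preliminary section below), the following NumPy/SciPy sketch repeatedly blurs an image with the same Gaussian kernel and subtracts adjacent scales. The value of sigma and the number of scales are illustrative assumptions, and this is the fixed DoG of SIFT rather than the learnable pDoG proposed in this paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, sigma=1.6, num_scales=4):
    """Repeatedly blur with the same Gaussian kernel and subtract adjacent scales."""
    scales = [image]
    for _ in range(num_scales):
        scales.append(gaussian_filter(scales[-1], sigma))  # I_{i+1} = G(sigma) * I_i
    # D_i = I_{i+1} - I_i : difference of adjacent blurred images
    return [scales[i + 1] - scales[i] for i in range(num_scales)]

image = np.random.rand(64, 64).astype(np.float32)
dogs = dog_stack(image)
print(len(dogs), dogs[0].shape)  # 4 (64, 64)
```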
SIFT is one such typical robust visual feature, which consists of 4 major stages: (1) scale-space extrema detection with DoG operations; (2) keypoint localization by stability; (3) orientation and scale assignment based on the primary local gradient direction; (4) histogram-based keypoint description. We borrow the scale-space extrema idea from SIFT, and propose three novel and robust architecture components to mimic key stages of SIFT. Robust Network Architectures . Many research efforts have been devoted to network robustness, especially on defending against adversarial attacks, as summarized in Akhtar & Mian (2018). However, there are very limited works that tackle this problem from a network architecture design perspective. A major category of methods (Liao et al., 2018; Xie et al., 2019) focuses on designing new layers to perform denoising operations on the input image or the intermediate feature maps. Most of them are shown to be effective against black-box attacks, while still being vulnerable to white-box attacks. The non-local denoising layer proposed in Xie et al. (2019) is shown to improve robustness to white-box attacks to an extent with adversarial training (Madry et al., 2018). Peer sample information is introduced in Svoboda et al. (2019) with a graph convolution layer to improve network robustness. Biologically inspired protection (Nayebi & Ganguli, 2017) introduces a highly non-linear saturated activation layer to replace the ReLU layer, and demonstrates good robustness to adversarial attacks, while a similar higher-order principle is also used in Krotov & Hopfield (2018). However, these methods still lack a systematic architecture design guidance, and many (Svoboda et al., 2019; Nayebi & Ganguli, 2017) are not robust to iterative attack methods like PGD under the clean model setting. In this work, inspired by the robust visual feature SIFT, we are able to design a series of innovative architecture components systematically for improving both model accuracy and robustness. We should stress that extreme value theory is a different concept from scale-space extrema: it models the extremes of a data distribution, and has been used to design an attack-independent metric to measure the robustness of DNNs (Weng et al., 2018) by exploring the input data distribution. 3 PRELIMINARY . Difference-of-Gaussian . Given an input image $I$ and a Gaussian kernel $G(x, y, \sigma)$ as below
$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2+y^2)/2\sigma^2}, \quad (1)$$
where $\sigma$ denotes the scale (standard deviation) of the Gaussian, the difference of Gaussian (DoG) is defined as
$$D(x, y, \sigma) = G(x, y, \sigma) \otimes I_1 - G(x, y, \sigma) \otimes I_0, \quad (2)$$
where $\otimes$ is the convolution operation and $I_1 = G(x, y, \sigma) \otimes I_0$. Scale-space DoG repeatedly convolves input images with the same Gaussian kernel, and produces difference-of-Gaussian images by subtracting adjacent image scales. Scale-space extrema (maxima and minima) are detected in DoG images by comparing a pixel to its 26 neighbors in 3×3 grids at the current and two adjacent scales (Lowe, 2004). Adversarial Attacks . We use $h(\cdot)$ to denote the softmax output of a classification network, and $h^c(\cdot)$ to denote the prediction probability of class $c$. Given a classifier $h(x) = y$, the goal of an adversarial attack is to find $x_{adv}$ such that the output of the classifier deviates from the true label $y$, i.e., $\max_i h^i(x_{adv}) \neq y$, while staying close to the original input, i.e., $\|x - x_{adv}\| \leq \epsilon$. Here $\|\cdot\|$ refers to a norm operator, i.e., $L_2$ or $L_\infty$. Attack Method .
The simplest adversarial attack method is the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), a single-step method which takes the sign of the gradient on the input as the direction of the perturbation. Letting $L(\cdot, \cdot)$ denote the cross-entropy loss function, the formulation is as follows:
$$x_{adv} = x + \epsilon \cdot \mathrm{sign}\big(\nabla_x L(h(x), y)\big), \quad (3)$$
where $x$ is the clean input, $y$ is the label, and $\epsilon$ is the norm bound ($\|x - x_{adv}\| \leq \epsilon$, i.e., the $\epsilon$-ball) of the adversarial perturbation. Projected gradient descent (PGD) iteratively applies FGSM with a small step size $\alpha$ (Kurakin et al., 2017a; Madry et al., 2018), with the formulation below:
$$x^{adv}_{i+1} = \mathrm{Proj}\big(x^{adv}_{i} + \alpha \cdot \mathrm{sign}(\nabla_x L(h(x^{adv}_{i}), y))\big), \quad (4)$$
where $i$ is the iteration number and $\alpha = \epsilon/T$ with $T$ being the number of iterations. 'Proj' is the function that projects the image back into the $\epsilon$-ball at every step. More advanced and complex attacks are further introduced in DeepFool (Moosavi-Dezfooli et al., 2016), CW (Carlini & Wagner, 2017), and MI-FGSM (Dong et al., 2018). Adversarial Training aims to inject adversarial examples into the training procedure so that the trained network can learn to classify adversarial examples correctly. Specifically, adversarial training solves the following empirical risk minimization problem:
$$\arg\min_{h\in\mathcal{H}} \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{x^*\in A(x)} L(h(x^*), y) \Big], \quad (5)$$
where $A(x)$ denotes the area around $x$ bounded by the $L_\infty/L_2$ norm, and $\mathcal{H}$ is the hypothesis space. In this work, we employ both FGSM and PGD to generate adversarial examples for adversarial training.
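A minimal PyTorch sketch of the FGSM and PGD attacks in Eqs. (3)-(4); the epsilon and number of steps are illustrative, the loss is cross-entropy as above, and clipping to a valid pixel range is omitted.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step attack: x_adv = x + eps * sign(grad_x L(h(x), y))   (Eq. 3)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).detach()

def pgd(model, x, y, eps, steps=10):
    """Iterative FGSM with projection back onto the L-infinity eps-ball   (Eq. 4)."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # 'Proj': clip the perturbation back into the eps-ball around the clean input.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).detach()
    return x_adv
```

For adversarial training (Eq. 5), the inner maximization is approximated by one of these attacks and the outer minimization updates the network on the resulting adversarial examples.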
This paper presents a SIFT-feature-inspired modification to the standard convolutional neural network (CNN). Specifically, the authors propose three innovations: (1) a difference-of-Gaussians (DoG) convolutional filter; (2) a symmetric ReLU activation function (referred to as a truncated ReLU); and (3) a projected normalization layer. The paper makes the claim that the proposed CNN variant (referred to as the EVPNet) demonstrates superior performance as well as improved robustness to adversarial attacks.
SP:778ec97ea45befde6a8cba2e505f92c5706185e4
This paper proposes a network model named EVPNet, inspired by the idea of scale-space extreme values from SIFT, to improve network robustness to adversarial perturbations over textures. To achieve better robustness, EVPNet separates outliers (non-robust) from robust examples by extending DoG to parametric DoG, utilising a truncated ReLU, and then applying a projected normalisation layer to mimic PCA-SIFT-like feature normalisation, which are the three novelties that the authors claim in this paper. In the experiments, FGSM and PGD are used to provide adversarial attacks, and experiments conducted on CIFAR-10 and SVHN reveal that EVPNet enhances network robustness.
SP:778ec97ea45befde6a8cba2e505f92c5706185e4
Keyframing the Future: Discovering Temporal Hierarchy with Keyframe-Inpainter Prediction
To flexibly and efficiently reason about temporal sequences , abstract representations that compactly represent the important information in the sequence are needed . One way of constructing such representations is by focusing on the important events in a sequence . In this paper , we propose a model that learns both to discover such key events ( or keyframes ) as well as to represent the sequence in terms of them . We do so using a hierarchical Keyframe-Inpainter ( KEYIN ) model that first generates keyframes and their temporal placement and then inpaints the sequences between keyframes . We propose a fully differentiable formulation for efficiently learning the keyframe placement . We show that KEYIN finds informative keyframes in several datasets with diverse dynamics . When evaluated on a planning task , KEYIN outperforms other recent proposals for learning hierarchical representations . 1 INTRODUCTION . When thinking about the future , humans focus their thoughts on the important things that may happen ( When will the plane depart ? ) without fretting about the minor details that fill each intervening moment ( What is the last word I will say to the taxi driver ? ) . Because the vast majority of elements in a temporal sequence contains redundant information , a temporal abstraction can make reasoning and planning both easier and more efficient . How can we build such an abstraction ? Consider the example of a lead animator who wants to show what happens in the next scene of a cartoon . Before worrying about every low-level detail , the animator first sketches out the story by keyframing , drawing the moments in time when the important events occur . The scene can then be easily finished by other animators who fill in the rest of the sequence from the story laid out by the keyframes . In this paper , we argue that learning to discover such informative keyframes from raw sequences is an efficient and powerful way to learn to reason about the future . Our goal is to learn such an abstraction for future image prediction . In contrast , much of the work on future image prediction has focused on frame-by-frame synthesis ( Oh et al . ( 2015 ) ; Finn et al . ( 2016 ) ) . This strategy puts an equal emphasis on each frame , irrespective of the redundant content it may contain or its usefulness for reasoning relative to the other predicted frames . Other recent work has considered predictions that “ jump ” more than one step into the future , but these approaches either used fixed-offset jumps ( Buesing et al. , 2018 ) or used heuristics to select the predicted frames ( Neitz et al. , 2018 ; Jayaraman et al. , 2019 ; Gregor et al. , 2019 ) . In this work , we propose a method that selects the keyframes that are most informative about the full sequence , so as to allow us to reason about the sequence holistically while only using a small subset of the frames . We do so by ensuring that the full sequence can be recovered from the keyframes with an inpainting strategy , similar to how a supporting animator finishes the story keyframed by the lead . One possible application for a model that discovers informative keyframes is in long-horizon planning . Recently , predictive models have been employed for model-based planning and control ( Ebert et al . ( 2018 ) ) . However , they reason about every single future time step , limiting their applicability to short horizon tasks . 
In contrast , we show that a model that reasons about the future using a small set of informative keyframes enables visual predictive planning for horizons much greater than previously possible by using keyframes as subgoals in a hierarchical planning framework . To discover informative frames in raw sequence data , we formulate a hierarchical probabilistic model in which a sequence is represented by a subset of its frames ( see Fig . 1 ) . In this two-stage model , a keyframing module represents the keyframes as well as their temporal placement with stochastic latent variables . The images that occur at the timepoints between keyframes are then inferred by an inpainting module . We parametrize this model with a neural network and formulate a variational lower bound on the sequence log-likelihood . Optimizing the resulting objective leads to a model that discovers informative future keyframes that can be easily inpainted to predict the full future sequence . Our contributions are as follows . We formulate a hierarchical approach for the discovery of informative keyframes using joint keyframing and inpainting ( KEYIN ) . We propose a soft objective that allows us to train the model in a fully differentiable way . We first analyze our model on a simple dataset with stochastic dynamics in a controlled setting and show that it can reliably recover the underlying keyframe structure on visual data . We then show that our model discovers hierarchical temporal structure on more complex datasets of demonstrations : an egocentric gridworld environment and a simulated robotic pushing dataset , which is challenging for current approaches to visual planning . We demonstrate that the hierarchy discovered by KEYIN is useful for planning , and that the resulting approach outperforms other proposed hierarchical and non-hierarchical planning schemes on the pushing task . Specifically , we show that keyframes predicted by KEYIN can serve as useful subgoals that can be reached by a low-level planner , enabling long-horizon , hierarchical control . 2 RELATED WORK . Hierarchical temporal structure . Hierarchical neural models for efficiently modeling sequences were proposed in Liu et al . ( 2015 ) ; Buesing et al . ( 2018 ) . These approaches were further extended to predict with an adaptive step size so as to leverage the natural hierarchical structure in language data ( Chung et al. , 2016 ; Kádár et al. , 2018 ) . However , these models rely on autoregressive techniques for text generation and applying them to structured data , such as videos , might be impractical . The video processing community has used keyframe representations as early as 1991 in the MPEG codec ( Gall , 1991 ) . Wu et al . ( 2018 ) adapted this algorithm in the context of neural compression ; however , these approaches use constant offsets between keyframes and thus do not fully reflect the temporal structure of the data . Recently , several neural methods were proposed to leverage such temporal structure . Neitz et al . ( 2018 ) and Jayaraman et al . ( 2019 ) propose models that find and predict the least uncertain “ bottleneck ” frames . Gregor et al . ( 2019 ) construct a representation that can be used to predict any number of frames into the future . In contrast , we propose an approach for hierarchical video representation that discovers the keyframes that best describe a certain sequence . In parallel to our work , Kipf et al . ( 2019 ) propose a related method for video segmentation via generative modeling . Kipf et al . 
(2019) focus on using the discovered task boundaries for training hierarchical RL agents, while we show that our model can be used to perform efficient hierarchical planning by representing the sequence with only a small set of keyframes. Also concurrently, Kim et al. (2019) propose a similar method to KEYIN for learning temporal abstractions. While Kim et al. (2019) focus on learning hierarchical state-space models, we propose a model that operates directly in the observation space and performs joint keyframing and inpainting. Video modeling . Early approaches to probabilistic video modeling include autoregressive models that factorize the distribution by considering pixels sequentially (Kalchbrenner et al., 2017; Reed et al., 2017). To reason about the images in the video holistically, latent variable approaches were developed based on variational inference (Chung et al., 2015; Rezende et al., 2014; Kingma & Welling, 2014), including (Babaeizadeh et al., 2018; Denton & Fergus, 2018; Lee et al., 2018) and large-scale models such as (Castrejon et al., 2019; Villegas et al., 2019). Kumar et al. (2019) is a recently proposed approach that uses exact inference based on normalizing flows (Dinh et al., 2014; Rezende & Mohamed, 2015). We build on existing video modeling approaches and show how they can be used to learn temporal abstractions with a novel keyframe-based generative model. Visual planning and model predictive control . We build on recent work that explored applications of learned visual predictive models to planning and control. Several groups (Oh et al., 2015; Finn et al., 2016; Chiappa et al., 2017) have proposed models that predict the consequences of actions taken by an agent given its control output. Recent work (Byravan et al., 2017; Hafner et al., 2018; Ebert et al., 2018) has shown that visual model predictive control based on such models can be applied to a variety of different settings. In this work, we show that the hierarchical representation of a sequence in terms of keyframes improves planning performance in the hierarchical planning setting. 3 KEYFRAMING THE FUTURE . Our goal is to develop a model that generates sequences by first predicting key observations and the time steps when they occur and then filling in the remaining observations in between. To achieve this goal, in the following we (i) define a probabilistic model for joint keyframing and inpainting, and (ii) show how a maximum likelihood objective leads to the discovery of keyframe structure. 3.1 A PROBABILISTIC MODEL FOR JOINT KEYFRAMING AND INPAINTING . We first describe a probabilistic model for joint keyframing and inpainting of a sequence $I_{1:T}$. The model consists of two parts: the keyframe predictor and the sequence inpainter (see Fig. 2). The keyframe predictor takes in $C$ conditioning frames $I_{co}$ and produces $N$ keyframes $K_{1:N}$ as well as the corresponding time indices $\tau_{1:N}$:
$$p(K_{1:N}, \tau_{1:N} \mid I_{co}) = \prod_n p(K_n, \tau_n \mid K_{1:n-1}, \tau_{1:n-1}, I_{co}). \quad (1)$$
From each pair of keyframes, the sequence inpainter generates the sequence of frames in between:
$$p(I_{\tau_n : \tau_{n+1}-1} \mid K_n, K_{n+1}, \tau_{n+1} - \tau_n) = \prod_t p(I_t \mid K_n, K_{n+1}, I_{\tau_n : t-1}, \tau_{n+1} - \tau_n), \quad (2)$$
which completes the generation of the full sequence. The inpainter additionally observes the number of frames it needs to generate, $\tau_{n+1} - \tau_n$.
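The factorization in Eqs. (1)-(2) corresponds to a simple two-stage sampling loop. The sketch below is purely schematic: `keyframe_predictor` and `inpainter` are hypothetical modules standing in for the networks described here, and their `sample` interfaces are assumptions made for illustration.

```python
def sample_sequence(keyframe_predictor, inpainter, cond_frames, num_keyframes):
    """Schematic two-stage generation: sample keyframes and time offsets, then inpaint."""
    keyframes, offsets = [], []
    for _ in range(num_keyframes):
        # Eq. (1): each keyframe K_n and its relative offset delta_n is sampled
        # conditioned on the conditioning frames and the keyframes generated so far.
        K, delta = keyframe_predictor.sample(cond_frames, keyframes, offsets)
        keyframes.append(K)
        offsets.append(delta)

    frames = list(cond_frames)
    for n in range(num_keyframes - 1):
        # Eq. (2): generate the frames lying between consecutive keyframes;
        # the inpainter also observes how many frames it must produce.
        segment = inpainter.sample(keyframes[n], keyframes[n + 1], offsets[n + 1])
        frames.extend(segment)
    return frames
```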
The temporal spacing of the most informative keyframes is data-dependent: shorter keyframe intervals might be required in cases of rapidly fluctuating motion, while longer intervals can be sufficient for steadier motion. Our model handles this by predicting the keyframe indices $\tau$ and inpainting $\tau_{n+1} - \tau_n$ frames between each pair of keyframes. We parametrize the prediction of $\tau_n$ in relative terms by predicting offsets $\delta_n$: $\tau_n = \tau_{n-1} + \delta_n$. 3.2 KEYFRAME DISCOVERY . To produce a complex multimodal distribution over $K$ we use a per-keyframe latent variable $z$ with prior distribution $p(z)$ and approximate posterior $q(z \mid I, I_{co})$ (for simplicity, the variable representing the full sequence is written without indices, i.e., $I$ is the same as $I_{1:T}$). We construct a variational lower bound on the likelihood of both $I$ and $K$ as follows:
$$\ln p(I, K \mid I_{co}) \;\geq\; \mathbb{E}_{q(z \mid I, I_{co})}\Bigg[ \sum_{n=1}^{N} \underbrace{\ln \mathbb{E}_{p(\tau_n, \tau_{n+1} \mid z_{1:n}, I_{co})}\big[ p(I_{\tau_n : \tau_{n+1}} \mid K_{n,n+1}, \tau_{n+1} - \tau_n) \big]}_{\text{inpainting}} + \underbrace{\ln p(K \mid z, I_{co})}_{\text{keyframing}} \Bigg] - \underbrace{D_{KL}\big( q(z \mid I, I_{co}) \,\|\, p(z) \big)}_{\text{regularization}}. \quad (3)$$
In practice, we use a weight $\beta$ on the KL-divergence term, as is common in amortized variational inference (Higgins et al., 2017; Alemi et al., 2018; Denton & Fergus, 2018). If a simple model is used for inpainting, most of the representational power of the model has to come from the keyframe predictor. We use a relatively powerful latent variable model for the keyframe predictor and a simpler Gaussian distribution produced with a neural network for inpainting. Because of this structure, the keyframe predictor has to predict keyframes that describe the underlying sequence well enough to allow a simpler inpainting process to maximize the likelihood. We will show that pairing a more flexible keyframe predictor with a simpler inpainter allows our model to discover semantically meaningful keyframes in video data.
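To make the $\beta$-weighted objective concrete, the sketch below assembles a simplified surrogate of Eq. (3) under assumptions that go beyond what is stated here: the inpainting and keyframe likelihoods are taken to be fixed-variance Gaussians (so their negative log-likelihoods reduce to squared errors), the approximate posterior is a diagonal Gaussian, and the expectation over keyframe placements $\tau$ is omitted.

```python
import torch

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * torch.sum(logvar.exp() + mu ** 2 - 1.0 - logvar, dim=-1)

def keyin_style_loss(pred_frames, true_frames, pred_keyframes, true_keyframes,
                     post_mu, post_logvar, beta=1.0):
    # Negative "inpainting" and "keyframing" terms of Eq. (3), assuming
    # fixed-variance Gaussian likelihoods, which reduce to mean squared errors.
    inpaint_nll = ((pred_frames - true_frames) ** 2).mean()
    keyframe_nll = ((pred_keyframes - true_keyframes) ** 2).mean()
    # "Regularization" term, weighted by beta as described above.
    kl = kl_diag_gaussian(post_mu, post_logvar).mean()
    return inpaint_nll + keyframe_nll + beta * kl
```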
The paper introduces a model trained for video prediction hierarchically: a series of significant frames, called "keyframes" in the paper, are first predicted, and then the intermediate frames between pairs of keyframes are generated. The training criterion is maximum likelihood with a variational approximation. Experiments are performed on 3 different video datasets and the evaluation covers 3 tasks: keyframe detection, frame prediction, and planning in robot videos.
SP:8ead93266a4847d000548d8b05896b522d51e5f6
The authors address the problem of discovering and predicting with hierarchical structure in data sequences of relevance to planning. Starting with the kinds of data that have been used recently in video prediction, the authors aim at learning a sequence of keyframes (i.e., subsets of frames forming the overall sequence) that in a suitable sense "summarize" the overall trace. As they rightly note, many alternate models struggle with making good long term predictions in part because they focus on all levels of prediction equally.
SP:8ead93266a4847d000548d8b05896b522d51e5f6
Generative Latent Flow
1 INTRODUCTION . Generative models have attracted much attention in the literature on deep learning . These models are used to formulate the distribution of complex data as a function of random noise passed through a network , so that rendering samples from the distribution is particularly easy . The most dominant generative models are Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) , as they have exhibited impressive performance in generating high quality images ( Radford et al. , 2015 ; Brock et al. , 2018 ) and in other vision tasks ( Zhu et al. , 2017 ; Ledig et al. , 2017 ) . Despite their success , training GANs can be challenging , partly because they are trained by solving a saddle point optimization problem formulated as an adversarial game . It is well known that training GANs is unstable and sensitive to hyper-parameter settings ( Salimans et al. , 2016 ; Arora et al. , 2017 ) , and sometimes training leads to mode collapse ( Goodfellow , 2016 ) . Although there have been multiple efforts to overcome the difficulties in training GANs ( Arjovsky et al. , 2017 ; Metz Luke & SohlDickstein , 2017 ; Srivastava et al. , 2017 ; Miyato et al. , 2018 ) , researchers are also actively studying non-adversarial methods that are known to be less affected by these issues . Some models explicitly define p ( x ) , the distribution of the data , and training is guided by maximizing the data likelihood . One approach is to express the data distribution in an auto-regressive pattern ( Papamakarios et al. , 2017 ; Oord et al. , 2016 ) ; another is to express it as an invertible transformation of a simple distribution using the change of variable formula , where the invertible transformation is defined using a normalizing flow network ( Dinh et al. , 2014 ; 2016 ; Kingma & Dhariwal , 2018 ) . While being mathematically clear and well defined , normalizing flows keep the dimensionality of the original data in order to maintain bijectivity . Consequently , they can not provide low-dimensional representations of the data and training is computationally expensive . Considering the prohibitively long training time and advanced hardware requirements in training large scale flow models such as ( Kingma & Dhariwal , 2018 ) , we believe that it is worth exploring the application of flows in the low dimensional representation spaces rather than for the original data . Another class of generative models employs an encoder-decoder structure and low dimensional latent variables to represent and generate the data . An encoder is used to produce estimates of the latent variables corresponding to a particular data point , and samples from a predefined prior distribution on the latent space are passed through a decoder to produce new samples from the data distribution . We call these auto-encoder ( AE ) based models , of which variational auto-encoders ( VAEs ) are perhaps the most influential ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) . VAEs use the encoder to produce approximations to the posterior distribution of the latent variable given the data , and the training objective is to maximize a variational lower bound of the data log likelihood . VAEs are easy to train , but their generation quality still lies far below that of GANs , as they tend to generate blurry images ( Dosovitskiy & Brox , 2016 ) . Whereas the original VAE uses a standard Gaussian prior , it can be extended by introducing a learnable parameterized prior distribution . 
There have been a number of studies in this direction ( see section 2 ) , some of which use a normalizing flow parameterization , where the prior is modeled as a trainable continuous bijective transformation of the standard Gaussian . We carefully study this method , and make the surprising novel observation that in order to produce high quality samples , it is necessary to significantly increase the weight on the reconstruction loss . This corresponds to decreasing the variance of the observational noise of the generative model at each pixel , where we are assuming the data distribution is factorial Gaussian conditioned on the output of the decoder , which yields the MSE as the reconstruction loss . It is important to note that increasing this weight alone without access to a trainable prior does not consistently improve generation quality . We show that as this weight increases , we approach a vanishing noise limit that corresponds to a deterministic auto-encoder . This leads to a new algorithm we call Generative Latent Flow ( GLF ) , which combines a deterministic auto-encoder that learns a mapping to and from a latent space , and a normalizing flow that matches the standard Gaussian to the distribution of latent variables of the training data produced by the encoder . Our contributions are summarized as follows : i ) we carefully study the effects of equipping VAEs with a normalizing flow prior on image generation quality as the weight of the reconstruction loss is increased . ii ) Based on this finding , introduce Generative Latent Flow , which uses auto-encoders instead of VAEs . iii ) Through standard evaluations , we show that our proposed model achieves state-of-the-art sample quality among competing AE based models , and has the additional advantage of faster convergence . 2 RELATED WORK . In general , in order for an AE based model with encoder-decoder structure to generate samples resembling the training data distribution , two criteria need to be ensured : ( a ) the decoder is able to produce a good reconstruction of a training image given its encoded latent variable z ; and ( b ) the empirical latent distribution q ( z ) of z ’ s returned by the encoder is close to the prior p ( z ) . In VAEs , the empirical latent distribution is often called aggregated or marginal posterior : q ( z ) = Ex∼pdata [ q ( z|x ) ] . While ( a ) is mainly driven by the reconstruction loss , satisfying criterion ( b ) is more complicated . Intuitively , criterion ( b ) can possibly be achieved by designing mechanisms that either modify the empirical latent distribution q ( z ) , or conversely modify the prior p ( z ) . There is plenty of previous work in both directions . Modifying the empirical latent distribution q ( z ) : In the classic VAE model , DKL ( q ( z|x ) ‖p ( z ) ) in the ELBO loss can be decomposed as DKL ( q ( z ) ‖p ( z ) ) plus a mutual information term as shown in ( Hoffman & Johnson , 2016 ) . Therefore , VAEs modify q ( z ) indirectly through regularizing the posterior distribution q ( z|x ) . Several modifications to VAE ’ s loss ( Chen et al. , 2018 ; Kim & Mnih , 2018 ) , which are designed for the task of unsupervised disentanglement , put a stronger penalty specifically on the mismatch between q ( z ) and p ( z ) . There are also attempts to incorporate normalizing flows into the encoder to provide more flexible approximate posteriors ( Rezende & Mohamed , 2015 ; Kingma et al. , 2016 ; Berg et al. , 2018 ) . 
However , empirical evaluation shows that VAEs with flow posteriors do not reduce the mismatch between q ( z ) and p ( z ) ( Rosca et al. , 2018 ) . Furthermore , as of yet , none of these modifications to VAEs has been shown to improve generation quality . Adversarial auto-encoders ( AAEs ) ( Makhzani et al. , 2015 ) and Wasserstein auto-encoders ( WAEs ) ( Tolstikhin et al. , 2017 ) use an adversarial regularizer or MMD regularizer ( Gretton et al. , 2012 ) to force q ( z ) to be close to p ( z ) . WAEs are shown to improve generation quality , as they generate sharper images than VAEs do . Modifying the prior distribution p ( z ) : An alternative to modifying the approximate posterior is using a trainable prior . ( Tomczak & Welling , 2017 ; Klushyn et al. , 2019 ; Bauer & Mnih , 2018 ) propose different ways to approximate q ( z ) using a sampled mixture of posteriors during training , and then use the approximated q ( z ) as the prior in the VAE . This is a natural way to let the prior match q ( z ) ; however , these methods have not been shown to improve generation quality . Two-stage VAE ( Dai & Wipf , 2019 ) introduces another VAE on the latent space defined by the first VAE to learn the distribution of its latent variables . VQ-VAE ( Oord et al. , 2017 ) first trains an auto-encoder with discrete latent variables , and then fits an auto-regressive prior on the latent space . GLANN ( Hoshen et al. , 2019 ) learns a latent representation by GLO ( Bojanowski et al. , 2017 ) and matches the densities of the latent variables with an implicit maximum likelihood estimator ( Li & Malik , 2018 ) . RAE+GMM ( Ghosh et al. , 2019 ) trains a regularized auto-encoder ( Alain & Bengio , 2014 ) and fits a mixture of Gaussians on the latent space . Note that all these methods involve two-stage training , which means that the prior distribution is fitted after training the variational or deterministic auto-encoder . They have been shown to improve the quality of the generated images . VAEs with a normalizing flow as a learnable prior ( Chen et al. , 2016b ; Huang et al. , 2017 ) also fall into this category . Since these are the main focus of this paper , we discuss them in detail in Section 3.2 . We note that modifications of VAEs with a normalizing flow posterior have been extensively studied . In contrast , VAEs with a flow prior have attracted much less attention . ( Huang et al. , 2017 ) briefly discusses this model as a way to solve the distribution mismatch in the latent space , and recently ( Xu et al. , 2019 ) shows the advantages of learning a flow prior over learning a flow posterior . However , these papers only focus on improvements of the data likelihood . Here we study the model from the perspective of the effects of the normalizing flow prior on sample generation quality , leading to some important and novel observations . 3 COMBINING NORMALIZING FLOW WITH AE BASED MODELS . In this section , we discuss the combination of normalizing flow priors with AE based models in detail . We first review normalizing flows in section 3.1 , then in section 3.2 we introduce VAEs with a normalizing flow prior and present some novel observations with respect to this model . Finally , in section 3.3 we propose Generative Latent Flow ( GLF ) to further simplify the model and improve performance . 3.1 REVIEW : NORMALIZING FLOWS . Normalizing flows are carefully-designed invertible networks that map the training data to a simple distribution .
Let z ∈ Z be an observation from an unknown target distribution z ∼ p ( z ) , and let p_ε be the unit Gaussian prior distribution on E . Given a bijection f_θ : Z → E , we define a probability model p_θ ( z ) with parameters θ on Z . The negative log-likelihood ( NLL ) of z is computed by the change of variables formula :

$$\mathcal{L}_{\mathrm{NLL}}(f_\theta(z)) \triangleq -\log p_\theta(z) = -\left( \log p_\epsilon(f_\theta(z)) + \log \left| \det\left( \frac{\partial f_\theta(z)}{\partial z} \right) \right| \right) , \qquad (1)$$

where ∂f_θ ( z ) / ∂z is the Jacobian matrix of f_θ . In order to learn the flow f_θ , the NLL objective of z is minimized , which is equivalent to maximizing the likelihood of z . Since the mapping is a bijection , sampling from the trained model p_θ ( z ) is trivial : simply sample ε ∼ p_ε and compute z = f_θ^{-1} ( ε ) . The key to designing a normalizing flow model is defining the transformation f_θ so that the inverse transformation and the determinant of the Jacobian matrix can be computed efficiently . Based on ( Dinh et al. , 2016 ) , we adopt the following layers to form the flows used in our model . Affine coupling layer : Given D-dimensional input data z and d < D , we partition the input into two vectors z_1 = z_{1:d} and z_2 = z_{d+1:D} . The output of one affine coupling layer is given by y_1 = z_1 , y_2 = z_2 ⊙ exp ( s ( z_1 ) ) + t ( z_1 ) , where s and t are functions from R^d → R^{D−d} and ⊙ is the element-wise product . The inverse of the transformation is explicitly given by z_1 = y_1 , z_2 = ( y_2 − t ( y_1 ) ) ⊙ exp ( −s ( y_1 ) ) . The determinant of the Jacobian matrix of this transformation is det ( ∂y / ∂z ) = ∏_{j=1}^{D−d} exp ( s ( z_1 )_j ) . Since computing both the inverse and the Jacobian of an affine coupling layer does not require computing the inverse and Jacobian of s and t , both functions can be arbitrarily complex . Combining coupling layers with random permutation : Affine coupling layers leave some components of the input data unchanged . In order to transform all the components , two coupling layers are combined in an alternating pattern to form a coupling block , so that the unchanged components in the first layer can be transformed in the second layer . In particular , we add a fixed random permutation of the coordinates of the input data at the end of each coupling block . See Figure 1b for an illustration of a coupling block used in our model .
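To make the coupling-layer construction and the NLL of Eq. (1) concrete, here is a minimal PyTorch-style sketch of an affine coupling layer, a coupling block with a fixed random permutation, and the resulting flow NLL. The two-layer MLPs used for s and t, the hidden width, and the flip used to alternate which half is updated are illustrative assumptions, not the exact architecture of Figure 1b.

```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.d = dim // 2
        # s and t map R^d -> R^(D-d); they may be arbitrarily complex because neither
        # the inverse nor the Jacobian of the coupling layer requires inverting them.
        self.net = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.d)))

    def forward(self, z):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(z1).chunk(2, dim=1)
        y2 = z2 * torch.exp(s) + t                       # y1 = z1 stays unchanged
        return torch.cat([z1, y2], dim=1), s.sum(dim=1)  # log|det| = sum_j s(z1)_j

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)

class CouplingBlock(nn.Module):
    """Two coupling layers in an alternating pattern, then a fixed random permutation."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.c1 = AffineCoupling(dim, hidden)
        self.c2 = AffineCoupling(dim, hidden)
        self.register_buffer("perm", torch.randperm(dim))

    def forward(self, z):
        y, ld1 = self.c1(z)
        y = torch.flip(y, dims=[1])        # swap halves so the second layer updates the other part
        y, ld2 = self.c2(y)
        return y[:, self.perm], ld1 + ld2  # permutations contribute |det| = 1

    def inverse(self, y):
        z = torch.empty_like(y)
        z[:, self.perm] = y                # undo the random permutation
        z = self.c2.inverse(z)
        z = torch.flip(z, dims=[1])
        return self.c1.inverse(z)

def flow_nll(blocks, z):
    """Per-sample -log p_theta(z) from Eq. (1) with a standard Gaussian base density."""
    log_det = torch.zeros(z.shape[0], device=z.device)
    eps = z
    for b in blocks:                       # blocks: a list (or nn.ModuleList) of CouplingBlock
        eps, ld = b(eps)
        log_det = log_det + ld
    log_p_eps = -0.5 * (eps ** 2 + math.log(2 * math.pi)).sum(dim=1)
    return -(log_p_eps + log_det)

def sample(blocks, n, dim, device="cpu"):
    eps = torch.randn(n, dim, device=device)
    for b in reversed(blocks):
        eps = b.inverse(eps)
    return eps
```

In GLF, `flow_nll` would be applied to the latent codes produced by the deterministic encoder and added to the (heavily weighted) reconstruction loss; new images would then be generated by decoding the output of `sample(...)`.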
The authors propose a model that combines a simple Auto-Encoder (AE) with a Normalizing Flow (NF) model in order to derive a generative model. In particular, the AE is used to learn a low-dimensional representation of the given data in a latent space. Then, an NF model learns, under a maximum likelihood principle, the distribution of these latent codes by applying an invertible transformation to an easy-to-sample distribution.
SP:55367291f235b256ca2f583722106e6507accd05
Generative Latent Flow
The paper proposes a new model combining an auto-encoder (AE) and a normalising flow (NF). The model, Generative Latent Flow (GLF), uses the AE to map the inputs to a latent space, which is then transformed using the NF. The approach is intuitively beneficial in that the AE can reduce the dimensionality of the inputs such that the NF mapping becomes much faster, computationally. The proposed method is compared to related methods that use a variational AE (VAE) in combination with an NF, and the similarities are pointed out and studied empirically.
SP:55367291f235b256ca2f583722106e6507accd05
Reject Illegal Inputs: Scaling Generative Classifiers with Supervised Deep Infomax
1 INTRODUCTION . The non-robustness of neural network models has emerged as a pressing concern since they were observed to be vulnerable to adversarial examples ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ) . Many attack methods have been developed to find imperceptible perturbations that fool the target classifiers ( Moosavi-Dezfooli et al. , 2016 ; Carlini & Wagner , 2017 ; Brendel et al. , 2017 ) . Meanwhile , many defense schemes have also been proposed to improve the robustness of the target models ( Goodfellow et al. , 2014 ; Tramèr et al. , 2017 ; Madry et al. , 2017 ; Samangouei et al. , 2018 ) . An important fact about these works is that they focus on discriminative classifiers , which directly model the conditional probabilities of labels given samples . Another promising direction , which has been almost neglected so far , is to explore the robustness of generative classifiers ( Ng & Jordan , 2002 ) . A generative classifier explicitly models the conditional distributions of inputs given the class labels . During inference , it evaluates all the class conditional likelihoods of the test input , and outputs the class label corresponding to the maximum . Conditional generative models are powerful and natural choices to model the class conditional distributions , but they suffer from two big problems : ( 1 ) it is hard to scale generative classifiers to high-dimensional tasks , like natural image classification , with performance comparable to their discriminative counterparts . Though generative classifiers have shown promising results in terms of adversarial robustness , they hardly achieve acceptable classification performance even on CIFAR10 ( Li et al. , 2018 ; Schott et al. , 2018 ; Fetaya et al. , 2019 ) . ( 2 ) The behaviors of likelihood-based generative models can be counter-intuitive and brittle . They may assign surprisingly higher likelihoods to out-of-distribution ( OoD ) samples ( Nalisnick et al. , 2018 ; Choi & Jang , 2018 ) . Fetaya et al . ( 2019 ) discuss the issues of likelihood as a metric for density modeling , which may be the reason for non-robust classification , e.g . OoD sample detection . In this paper , we propose supervised deep infomax ( SDIM ) by introducing supervised statistical constraints into deep infomax ( DIM , Hjelm et al . ( 2018 ) ) , an unsupervised learning framework that maximizes the mutual information between representations and data . SDIM is trained by optimizing two objectives : ( 1 ) maximizing the mutual information ( MI ) between the inputs and the high-level data representations from the encoder ; ( 2 ) ensuring that the representations satisfy the supervised statistical constraints . The supervised statistical constraints can be interpreted as a generative classifier on high-level data representations that gives up the full generative process . Unlike full generative models , which make implicit manifold assumptions , the supervised statistical constraints of SDIM serve as an explicit enforcement of the manifold assumption : data representations ( low-dimensional ) are trained to form clusters corresponding to their class labels . With SDIM , we can perform classification with rejection ( Nalisnick et al. , 2019 ; Geifman & El-Yaniv , 2017 ) . SDIMs reject illegal inputs based on the off-manifold conjecture ( Samangouei et al. , 2018 ; Gu & Rigazio , 2014 ) , whereby illegal inputs , e.g . adversarial examples , lie far away from the data manifold .
Samples whose class conditionals are smaller than the pre-chosen thresholds will be deemed off-manifold , and prediction requests on them will be rejected . The contributions of this paper are : • We propose Supervised Deep Infomax ( SDIM ) , an end-to-end framework whose probabilistic constraints are equivalent to a generative classifier . SDIMs can achieve classification performance comparable to similar discriminative counterparts at the cost of a small over-parameterization . • We propose a simple but novel rejection policy based on the off-manifold conjecture : SDIM outputs a class label only if the test sample ’ s largest class conditional surpasses the pre-chosen class threshold ; otherwise it outputs a rejection . The choice of thresholds relies only on the training set , and requires no additional computation . • Experiments show that SDIM with the rejection policy can effectively reject illegal inputs , including OoD samples and adversarial examples generated by a comprehensive group of adversarial attacks . 2 BACKGROUND : DEEP INFOMAX . Deep InfoMax ( DIM , Hjelm et al . ( 2018 ) ) is an unsupervised representation learning framework that maximizes the mutual information ( MI ) between the inputs and outputs of an encoder . The computation of MI takes only input-output pairs , using the deep-neural-network-based estimator MINE ( Belghazi et al. , 2018 ) . Let E_φ be an encoder parameterized by φ , working on the training set X = { x_i }_{i=1}^{N} and generating the output set Y = { E_φ ( x_i ) }_{i=1}^{N} . DIM is trained to find the set of parameters φ such that : ( 1 ) the mutual information I ( X , Y ) is maximized over sample sets X and Y ; ( 2 ) the representations , depending on the potential downstream tasks , match some prior distribution . Denote by J and M the joint distribution and the product of marginals of the random variables X , Y , respectively . MINE estimates a lower bound of MI with the Donsker-Varadhan ( Donsker & Varadhan , 1983 ) representation of the KL-divergence :

$$I(X, Y) = D_{KL}(J \,\|\, M) \ge \mathbb{E}_{J}[T_\omega(x, y)] - \log \mathbb{E}_{M}[e^{T_\omega(x, y)}] , \qquad (1)$$

where T_ω ( x , y ) ∈ R is a family of functions with parameters ω represented by a neural network . Since in representation learning we are more interested in maximizing MI than in its exact value , non-KL divergences are also favorable candidates . We can get a family of variational lower bounds using f-divergence representations ( Nguyen et al. , 2010 ) :

$$I_f(X, Y) \ge \mathbb{E}_{J}[T_\omega(x, y)] - \mathbb{E}_{M}[f^*(T_\omega(x, y))] , \qquad (2)$$

where f^* is the Fenchel conjugate of a specific divergence f . For the KL-divergence , f^* ( t ) = e^{t−1} . A full list of f^* is provided in Tab . 6 of Nowozin et al . ( 2016 ) . Noise-Contrastive Estimation ( Gutmann & Hyvärinen , 2010 ) can also be used as a lower bound of MI , as in “ infoNCE ” ( Oord et al. , 2018 ) . 3 SUPERVISED DEEP INFOMAX . All the components of the SDIM framework are summarized in Fig . 1 . The focus of Supervised Deep InfoMax ( SDIM ) is on introducing supervision into the probabilistic constraints of DIM for ( generative ) classification . We choose to maximize the local MI , which has been shown to be more effective in classification tasks than maximizing the global MI ( Hjelm et al. , 2018 ) . Equivalently , we minimize J_MI :

$$J_{MI} = -\frac{1}{M^2} \sum_{i=1}^{M^2} \tilde{I}\left( L_\phi^{(i)}(x) , E_\phi(x) \right) , \qquad (3)$$

where L_φ ( x ) is a local M × M feature map of x extracted from some intermediate layer of the encoder E_φ , and Ĩ can be any of the possible MI lower bounds . 3.1 EXPLICIT ENFORCEMENT OF MANIFOLD ASSUMPTION .
By adopting a generative approach p ( x , y ) = p ( y ) p ( x|y ) , we assume that the data follow the manifold assumption : the ( high-dimensional ) data lie on low-dimensional manifolds corresponding to their class labels . Denote by x̃ the compact representation generated by the encoder , x̃ = E_φ ( x ) . In order to explicitly enforce the manifold assumption , we admit the existence of a data manifold in the representation space . Assume that y is a discrete random variable representing class labels , and p ( x̃|y ) is the real class conditional distribution of the data manifold given y . Let p_θ ( x̃|y ) be the class conditionals we model , parameterized by θ . We approximate p ( x̃|y ) by minimizing the KL-divergence between p ( x̃|y ) and our model p_θ ( x̃|y ) , which is given by :

$$D_{KL}\left( p(\tilde{x}|y) \,\|\, p_\theta(\tilde{x}|y) \right) = \mathbb{E}_{\tilde{x}, y \sim p(\tilde{x}, y)}\left[ \log p(\tilde{x}|y) - \log p_\theta(\tilde{x}|y) \right] = \mathbb{E}_{\tilde{x}, y \sim p(\tilde{x}, y)}\left[ \log p(\tilde{x}|y) \right] - \mathbb{E}_{\tilde{x}, y \sim p(\tilde{x}, y)}\left[ \log p_\theta(\tilde{x}|y) \right] , \qquad (4)$$

where the first term on the RHS is a constant independent of the model parameters θ . Minimizing Eq . 4 is thus equivalent to maximizing the expectation E_{x̃ , y ∼ p ( x̃ , y )} [ log p_θ ( x̃|y ) ] . In practice , we minimize the following loss J_NLL , which empirically maximizes the above expectation over { x̃_i = E_φ ( x_i ) , y_i }_{i=1}^{N} :

$$J_{NLL} = -\mathbb{E}_{\tilde{x}, y \sim p(\tilde{x}, y)}\left[ \log p_\theta(\tilde{x}|y) \right] \approx -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta(\tilde{x}_i|y_i) . \qquad (5)$$

Besides the introduction of supervision , SDIM differs from DIM in its way of enforcing the statistical constraints : DIM uses adversarial learning ( Makhzani et al. , 2015 ) to push the representations towards the desired priors , while SDIM directly maximizes the parameterized class conditional probability . Maximize Likelihood Margins : At inference , a generative classifier decides which class a test input x belongs to according to its class conditional probabilities . On the one hand , we maximize samples ’ true class conditional probabilities ( the classes they belong to ) using J_NLL ; on the other hand , we also want samples ’ false class conditional probabilities ( the classes they do not belong to ) to be minimized . This is encouraged by the following likelihood margin loss J_LM :

$$J_{LM} = \frac{1}{N} \cdot \frac{1}{C-1} \sum_{i=1}^{N} \sum_{c=1, c \neq y_i}^{C} \max\left( \log p(\tilde{x}_i|y=c) + K - \log p(\tilde{x}_i|y=y_i) ,\; 0 \right)^2 , \qquad (6)$$

where K is a positive constant that controls the margin . For each encoder output x̃_i , the C − 1 true-false class conditional gaps are squared , which quadratically increases the penalty as the margin violation grows , and then averaged . Putting all these together , the complete loss function we minimize is :

$$J_{SDIM} = \alpha \cdot J_{MI} + \beta \cdot J_{NLL} + \gamma \cdot J_{LM} . \qquad (7)$$

Parameterization of Class Conditional Probability : Each class conditional distribution is represented as an isotropic Gaussian . The generative classifier is thus simply an embedding layer with C entries , where each entry contains the trainable mean and variance of a Gaussian . This minimal parameterization encourages the encoder to learn simple and stable low-dimensional representations that can be easily explained even by unimodal distributions . Considering that we maximize the true class conditional probability and minimize the false class conditional probabilities at the same time , we do not choose conditional normalizing flows , since their parameters are shared across class labels and the training can be very difficult . In Schott et al . ( 2018 ) , each class conditional probability is represented with a VAE , thus scaling to complex datasets with a huge number of classes , e.g .
ImageNet , is almost impossible . 3.2 DECISION FUNCTION WITH REJECTION . A generative approach models the class-conditional distributions p ( x|y ) as well as the class priors p ( y ) . For classification , we compute the posterior probabilities p ( y|x ) through Bayes ’ rule :

$$p(y|x) = \frac{p(x|y)\, p(y)}{p(x)} \propto p(x|y)\, p(y) .$$

The prior p ( y ) can be computed from the training set , or we simply use a uniform class prior over all class labels by default . Then the prediction for a test sample x^* from the posteriors is :

$$y^* = \arg\max_{c \in \{1, \dots, C\}} \log p(x^*|y=c) . \qquad (8)$$

The drawback of the above decision function is that it always gives a prediction , even for illegal inputs . Instead of simply outputting the class label that maximizes the class conditional probability of x^* , we set a threshold for each class conditional probability , and define our decision function with rejection to be :

$$\begin{cases} y^* , & \text{if } \log p(x^*|y^*) \ge \delta_{y^*} \\ \text{Rejection} , & \text{otherwise} \end{cases} \qquad (9)$$

The model gives a rejection when log p ( x^*|y^* ) is smaller than the threshold δ_{y^*} . Note that here we can use p ( x^*|y^* ) and p ( x̃^*|y^* ) interchangeably . This is also known as selective classification ( Geifman & El-Yaniv , 2017 ) or classification with a reject option ( Nalisnick et al. , 2019 ) ( see Supp . A ) .
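To ground Eqs. (5)-(9), the following is a minimal PyTorch-style sketch of the isotropic-Gaussian class conditionals, the NLL and likelihood-margin losses, and the thresholded decision rule. The encoder and the local-MI term J_MI are omitted, and the margin K, the loss weights, and the percentile used to pick the per-class thresholds δ_c are illustrative assumptions rather than the paper's reported settings.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianClassConditionals(nn.Module):
    """One trainable isotropic Gaussian per class on the representation space."""
    def __init__(self, n_classes, rep_dim):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(n_classes, rep_dim))
        self.log_var = nn.Parameter(torch.zeros(n_classes, rep_dim))

    def log_prob(self, rep):
        # rep: (B, D) -> (B, C) matrix of log p_theta(rep | y = c)
        diff = rep.unsqueeze(1) - self.mu.unsqueeze(0)                 # (B, C, D)
        return -0.5 * ((diff ** 2) / self.log_var.exp()
                       + self.log_var + math.log(2 * math.pi)).sum(-1)

def sdim_losses(log_probs, labels, margin_K=10.0):
    """J_NLL (Eq. 5) and J_LM (Eq. 6) computed from the per-class log-likelihoods."""
    true_lp = log_probs.gather(1, labels.unsqueeze(1))                 # (B, 1)
    j_nll = -true_lp.mean()
    mask = torch.ones_like(log_probs).scatter_(1, labels.unsqueeze(1), 0.0)
    hinge = F.relu(log_probs + margin_K - true_lp) * mask              # false classes only
    j_lm = (hinge ** 2).sum(1).mean() / (log_probs.size(1) - 1)
    return j_nll, j_lm

@torch.no_grad()
def class_thresholds(train_log_probs, train_labels, n_classes, percentile=1.0):
    """Pick delta_c as a low percentile of true-class log-likelihoods on the training set."""
    return torch.stack([
        torch.quantile(train_log_probs[train_labels == c, c], percentile / 100.0)
        for c in range(n_classes)])

@torch.no_grad()
def predict_with_rejection(log_probs, thresholds):
    """Eqs. (8)-(9): predict the argmax class, or reject if below its per-class threshold."""
    best_lp, y_hat = log_probs.max(dim=1)
    reject = best_lp < thresholds[y_hat]
    return y_hat, reject
```

The total training loss would then be formed as in Eq. (7), α·J_MI + β·j_nll + γ·j_lm, with J_MI supplied by whichever MI lower bound from Eqs. (1)-(3) is chosen.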
This paper studies classification problems with a reject option. A reject option could be useful in prediction problems to handle out-of-distribution examples. The classification procedure studied in this paper builds on three components: 1. an auto-encoder that obtains a latent low-dimensional representation of the data point, 2. a generative model that models the class-conditional probabilities, and 3. a margin-based loss function that learns a classifier that assigns a large probability mass to the class-conditional distribution corresponding to the correct class. The final decision procedure is to reject an input if the best class conditional probability is small, and to use the class corresponding to the best class conditional probability otherwise.
SP:0612639384f7b7766e8838d47a3ac973a6df0e1e
Reject Illegal Inputs: Scaling Generative Classifiers with Supervised Deep Infomax
The paper proposes a scalable approach to train generative classifiers using information-maximizing representation learning, with the motivation that generative classifiers could be more robust to adversarial attacks than discriminative classifiers. An off-the-shelf mutual information maximizer (MINE, DIM) is used to learn low-dimensional representations of images. Then, class-conditioned generative models of the representations are learned, avoiding full generative modeling of the images. An additional loss, which maximizes likelihood margins, is used to train the generative classifier. Finally, percentile-based thresholds on the class log-probabilities are proposed to reject classification for out-of-manifold inputs.
SP:0612639384f7b7766e8838d47a3ac973a6df0e1e
Ecological Reinforcement Learning
1 INTRODUCTION . A central goal in current AI research , especially in reinforcement learning ( RL ) , is to develop algorithms that are general , in the sense that the same method can be used to train an effective model for a wide variety of tasks , problems , and domains . In RL , this means designing algorithms that can solve any Markov decision process ( MDP ) . However , natural intelligence – e.g. , humans and animals – exists in the context of a natural environment . People and animals cannot be understood separately from the environments that they inhabit any more than brains can be understood separately from the bodies they control . In the same way , perhaps a complete understanding of artificial intelligence can also only be obtained in the context of an environment , or at least a set of assumptions on that environment . There has been comparatively little study in the field of reinforcement learning to understand how properties of the environment impact the learning process for complex RL agents . Many of the environments used in modern reinforcement learning research differ in fundamental ways from the real world . First , standard RL benchmarks , such as the arcade learning environment ( ALE ) ( Bellemare et al. , 2013 ) and Gym ( Brockman et al. , 2016 ) , are episodic , while natural environments are continual and lack a “ reset ” mechanism , requiring an agent to learn through continual interaction . Second , most of these environments include detailed reward functions that not only correspond to overall task success , but also provide an intermediate learning signal , thus shaping the learning process . These signals can aid in learning , but they can also bias the learning process . Third , the environments are typically static , in the sense that only the agent ’ s own actions substantively impact the world . In contrast , natural environments are stochastic and dynamic : an agent that does nothing will still experience many different states , due to the behavior of natural processes and other creatures . In this paper , we aim to study how these properties affect the learning process . At the core of our work is the concept of ecological reinforcement learning : the idea that the behavior and learning dynamics of an agent , like those of an animal , must be understood in the context of the environment in which it is situated . We therefore study how particular properties of the environment can facilitate or harm the emergence of complex behaviors . We focus our attention on the three properties outlined above : ( 1 ) continual , non-episodic environments , where the agent must learn over the course of one “ lifetime ” ; ( 2 ) environments that lack detailed reward shaping , but instead provide a reward signal based on a simple “ fundamental drive ” ; ( 3 ) environments that are inherently dynamic , evolving on their own around the agent even if the agent does not take meaningful or useful actions . We study how each of these properties affects the learning process . Although on the surface these properties would seem to make the learning process harder , we observe that in some cases , they can actually make reinforcement learning easier . The degree to which these properties make learning easier is highly dependent on the degree of scaffolding that is provided by an environment . For example , an agent tasked with collecting and making food pellets might struggle to learn if it must first complete a complex sequence of actions .
However , if food pellets are initially plentiful , the agent can first learn that food pellets are rewarding , and then gradually learn to make them out of raw ingredients as the initial supply becomes scarce . This provides a natural scaffolding and curriculum without requiring manual reward engineering . More generally , “ environment shaping ” can be used as a way to craft the agent ’ s curriculum without modifying its reward function . This benefit is counter-balanced by the fact that non-episodic learning is inherently harder – the resets in episodic tasks provide a more stationary learning problem , preventing the agent from getting “ stuck ” due to a bad initial policy . However , natural environments can also counteract this difficulty : a dynamic environment that gradually changes on its own can provide a sort of “ soft ” reset that can mitigate the difficulties of reset-free learning , and we observe this empirically in our experiments . We illustrate some of these ideas in Figure 1 . The contribution of this work is an empirical study of how the properties of environments – particularly properties that we believe reflect realistic environments – impact reinforcement learning . We study the effect of ( 1 ) continual , non-episodic learning , ( 2 ) learning with and without reward shaping , and ( 3 ) learning in dynamic environments that evolve on their own . We find that , though each of these properties can make learning harder , they can also be combined in realistic ways to actually make learning easier . We also provide an open-source environment for future experiments studying “ ecological ” reinforcement learning , and we hope that our experimental conclusions will encourage future research that studies how the nature of the environment in which the RL agent is situated can facilitate learning and the emergence of complex skills . This exercise helps us determine which types of algorithmic challenges we should focus our development efforts towards in order to solve natural environments that agents might encounter . 2 RELATED WORK . Solving general RL problems can be extremely hard in general ( Kakade & Langford , 2002 ) . Reward shaping is a common technique to guide learning ( Ng et al. , 1999 ; Devlin & Kudenko , 2012 ; Brys et al. , 2015 ) but is usually hand crafted and must be carefully designed by human experts ( Griffith et al. , 2013 ) . Shaping the reward may also lead to suboptimal solutions , as it alters the objective of the learning problem . Curriculum learning can be used to first provide the agent with easier tasks , followed by more challenging tasks ( Bengio et al. , 2009 ; Graves et al. , 2017 ; Randløv & Alstrøm , 1998 ; Wang et al. , 2019a ; Yu et al. , 2018 ; Heess et al. , 2017 ) . Curriculum learning can also be viewed in the context of multiple learning agents in an adversarial or cooperative setting ( Silver et al. , 2016 ; Al-Shedivat et al. , 2017 ; Sukhbaatar et al. , 2017 ; Omidshafiei et al. , 2018 ) or where the curriculum is automatically generated ( Florensa et al. , 2017b ; a ; Riedmiller et al. , 2018 ; Wang et al. , 2019b ) . The “ environment shaping ” that we study in our experiments can be viewed as a kind of curriculum learning , and we argue – and show empirically – that this environment shaping approach can in some cases be more effective than more commonly used reward shaping . Improved exploration methods are a possible solution to solving sparse reward tasks . 
Prior work has used approximate state-visitation counts ( Tang et al. , 2016 ; Bellemare et al. , 2013 ) , information gain or prediction error ( Houthooft et al. , 2016 ; Pathak et al. , 2017 ) , or model ensemble uncertainty ( Osband et al. , 2016 ) . A recent work ( Ecoffet et al. , 2019 ) maintains a set of novel states and first returns to the novel states before exploring from this frontier . Our work could be combined with an exploration method ; however , this work indicates that sparse reward tasks can be solved with an appropriately shaped environment . Prior work on RL without resets has focused on safe exploration ( Moldovan & Abbeel , 2012 ; Chatzilygeroudis et al. , 2018 ) or learning a policy to reset the environment ( Eysenbach et al. , 2017 ; Han et al. , 2015 ) . Even-Dar et al . ( 2005 ) studies reset-free RL in POMDPs and implements a homing strategy which approximately resets the agent . Rather than trying to convert the reset-free problem to one that looks more like a scenario with resets , our experiments study under which conditions reset-free learning can actually be easier , and show that dynamic environments – which we argue better reflect the real world – actually make learning without resets easier . Learning in non-episodic settings has been studied from the perspective of continual learning ( Ring , 1997 ) , where a number of tasks are learned in sequence . These algorithms typically consider the problem of “ catastrophic forgetting ” ( Mccloskey , 1989 ; French , 1999 ) , where previously learned tasks are forgotten while learning new tasks . To solve this problem , algorithms use methods such as explicit memorization ( Rusu et al. , 2016 ; Schwarz et al. , 2018 ) , generative replay ( Shin et al. , 2017 ) and explicit weight regularization ( Kirkpatrick et al. , 2016 ; Kaplanis et al. , 2018 ) . These works assume that resets and task boundaries are available , whereas we assume that the agent is unable to reset . There has also been work on building more complex tasks in large diverse worlds with Mujoco ( Todorov et al. , 2012 ; Singh et al. , 2019 ; Yu et al. , 2019 ) , Malmo ( Johnson et al. , 2016 ; Guss et al. , 2019 ) , DeepMind Lab ( Beattie et al. , 2016 ) , and many others ; however , again , these environments are studied in the context of episodic learning . 3 PROPERTIES OF NATURAL ENVIRONMENTS . In contrast to most simulated environments that are used for reinforcement learning experiments ( Brockman et al. , 2016 ) , agents learning in natural environments experience a continual stream of experience , without episode boundaries . The typical reward function engineering that is often employed in reinforcement learning experiments is also generally unavailable in the real world , where agents must rely on their own low-level perception to understand the world . Finally , natural environments change on their own , even when the agent does not follow a coordinated or intelligent course of action . This dynamism can create additional challenges , but can also facilitate learning , mitigating some of the issues due to non-episodic and non-resettable learning settings . In this paper , our aim is to study how these aspects of the environment impact the performance of reinforcement learning agents .
We term this approach ecological reinforcement learning , in that it deals specifically with the relationship between properties of the environment and the reinforcement learning agent , rather than studying reinforcement learning algorithms in the general case , regardless of the particular properties of the learning environment . We believe that the properties outlined above are broadly reflected in real-world settings , and are often absent in simulated reinforcement learning benchmarks . In this section , we discuss each of these properties , and formulate our hypotheses about how these properties might influence learning . Continual non-episodic learning . In the real world , all learning must at some level be nonepisodic : though we may instrument environments to make them appear episodic , there is always a single underlying temporal process . In general , this makes the learning problem harder : when the agent is not reset to randomly chosen initial states , mistakes early on in training can put it into undesirable situations , from which it might be harder to recover and – more importantly – harder to learn . A non-episodic learning process is non-stationary , and the agent can become trapped in difficult regions of the state space . Hypothesis 1 : Non-episodic learning is more difficult than episodic learning because the agent must handle a non-stationary learning problem , and can become trapped in difficult states . We will study this hypothesis in our experiments , and show how some of the other properties of natural environments can help alleviate this difficulty . Sparse rewards and environment shaping . While in principle RL algorithms can handle relatively uninformative rewards , in practice reward shaping is often an essential tool for getting RL methods to acquire effective policies . For example , an agent that must learn a policy to collect resources to make an axe ( see Figure 2 ) might make use of a reward function that specifies the distance to the nearest resource , or at least provides a small reward for each resource obtained , as opposed to a reward given only for obtaining the final goal . However , well-shaped rewards are generally not available and difficult to provide in the real world , since they require knowledge of privileged state variables ( e.g. , positions of objects ) or the process by which the task must be completed ( e.g. , required resources ) , both of which should in principle be learned automatically by the agent . Furthermore , reward shaping might introduce bias , since the optimal policy for a shaped reward may not in fact be optimal for the original task reward . On the other hand , agents in the real world do not learn in a vacuum : even for humans and animals , it is reasonable to assume a reasonably cooperative environment that has been set up so as to facilitate learning . For humans , this kind of “ scaffolding ” is often provided by other agents ( e.g. , parents and teachers ) . But even without other agents , natural environments might provide automatic scaffolding – e.g. , an animal might find apples that fell from a tree , and thereby learn that apples are a source of food . Once the fallen apples are exhausted , the animal might use its knowledge of the value of apples to learn to climb the tree to obtain the apples on its own . This kind of “ environment shaping ” could serve as a tool for guiding the learning process , without the bias or manual engineering inherent in reward shaping . 
Hypothesis 2 : Environment shaping can enable agents to learn even with simple sparse rewards , and can in fact result in more proficient policies if applied correctly , as opposed to reward shaping . Dynamic environments . Standard reinforcement learning benchmark tasks are typically situated in static environments ( Brockman et al. , 2016 ; Bellemare et al. , 2013 ) , in the sense that the environment does not change substantially unless the agent takes a coordinated course of action . On the other hand , real-world settings are typically dynamic , in the sense that the environment changes even if the agent does not follow any coordinated course of action : animals will move around , times of day will change , seasons will change , etc . Dynamic environments present their own challenges , but they can also facilitate learning , by automatically exposing the agent to a wide variety of situations . Hypothesis 3 : While dynamic environments could make learning more difficult , in fact they can alleviate some of the challenges associated with non-episodic learning , by providing the agent with a variety of learning conditions even in the absence of coordinated and intelligent behavior ( as is the case , e.g. , early on in training ) .
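A minimal sketch of the kind of dynamics Hypothesis 3 describes is below: a wrapper in which part of the world changes at every step regardless of the agent's action, acting as a "soft reset". The Gym-style step signature and the randomize_exogenous_state helper are assumptions for illustration.

import random

class DynamicWorldWrapper:
    """Makes a static environment 'dynamic': exogenous state (other creatures,
    time of day, regrowing resources) drifts on its own each step, independently
    of whatever the agent does."""

    def __init__(self, env, drift_prob=0.05):
        self.env = env
        self.drift_prob = drift_prob

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if random.random() < self.drift_prob:
            # exogenous change; over time this acts like a soft reset of the state
            self.env.randomize_exogenous_state()   # assumed hook on the base env
        return obs, reward, done, info

    def reset(self):
        return self.env.reset()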
This paper discusses the value of creating more challenging environments for training reinforcement learning agents. Specifically, the paper focuses on three characteristics of the environment that the paper argues are important for developing intelligent agents. The first of these properties is stochasticity in the environment transitions, specifically stochasticity that is independent of the actions taken by the agent. The second is sparsity of rewards, i.e., forgoing reward shaping as the way to define the desired behavior. Finally, the paper argues that environments should not be episodic: natural environments are continuing tasks, so research should focus on solving such continuing tasks.
SP:ff58b8e7f4f0ae436627d7039f8c883a63f15101
Ecological Reinforcement Learning
1 INTRODUCTION . A central goal in current AI research , especially in reinforcement learning ( RL ) , is to develop algorithms that are general , in the sense that the same method can be used to train an effective model for a wide variety of tasks , problems , and domains . In RL , this means designing algorithms that can solve any Markov decision process ( MDP ) . However , natural intelligence – e.g. , humans and animals – exists in the context of a natural environment . People and animals can not be understood separately from the environments that they inhabit any more than brains can be understood separately from the bodies they control . In the same way , perhaps a complete understanding of artificial intelligence can also only be obtained in the context of an environment , or at least a set of assumptions on that environment . There has been comparatively little study in the field of reinforcement learning to understand how properties of the environment impact the learning process for complex RL agents . Many of the environments used in modern reinforcement learning research differ in fundamental ways from the real world . First , standard RL benchmarks , such as the arcade learning environment ( ALE ) ( Bellemare et al. , 2013 ) and Gym ( Brockman et al. , 2016 ) are episodic , while natural environments are continual and lack a “ reset ” mechanism , requiring an agent to learn through continual interaction . Second , most of these environments include detailed reward functions that not only correspond to overall task success , but also provide intermediate learning signal , thus shaping the learning process . These signals can aid in learning , but they can also bias the learning process . Third , the environments are typically static , in the sense that only the agent ’ s own actions substantively impact the world . In contrast , natural environments are stochastic and dynamic : an agent that does nothing will still experience many different states , due to the behavior of natural processes and other creatures . In this paper , we aim to study how these properties affect the learning process . At the core of our work is the concept of ecological reinforcement learning : the idea that the behavior and learning dynamics of an agent , like that of an animal , must be understood in the context of the environment in which it is situated . We therefore study how particular properties of the environment can facilitate or harm the emergence of complex behaviors . We focus our attention on the three properties outlined above : ( 1 ) continual , non-episodic environments where the agent must learn over the course of one “ lifetime , ” ( 2 ) environments that lack detailed reward shaping , but instead provides a reward signal based on a simple “ fundamental drive , ” ( 3 ) environments that are inherently dynamic , evolving on their own around the agent even if the agent does not take meaningful or useful actions . We study how each of these properties affects the learning process . Although on the surface these properties would seem to make the learning process harder , we observe that in some cases , they can actually make reinforcement learning easier . The degree to which these properties make learning easier is highly dependent on the degree of scaffolding that is provided by an environment . For example , an agent tasked with collecting and making food pellets might struggle to learn if it must first complete a complex sequence of actions . 
However , if food pellets are initially plentiful , the agent can first learn that food pellets are rewarding , and then gradually learn to make them out of raw ingredients as the initial supply becomes scarce . This provides a natural scaffolding and curriculum without requiring manual reward engineering . More generally , “ environment shaping ” can be used as a way to craft the agent ’ s curriculum without modifying its reward function . This benefit is counter-balanced by the fact that non-episodic learning is inherently harder – the resets in episodic tasks provide a more stationary learning problem , preventing the agent from getting “ stuck ” due to a bad initial policy . However , natural environments can also counteract this difficulty : a dynamic environment that gradually changes on its own can provide a sort of “ soft ” reset that can mitigate the difficulties of reset-free learning , and we observe this empirically in our experiments . We illustrate some of these ideas in Figure 1 . The contribution of this work is an empirical study of how the properties of environments – particularly properties that we believe reflect realistic environments – impact reinforcement learning . We study the effect of ( 1 ) continual , non-episodic learning , ( 2 ) learning with and without reward shaping , and ( 3 ) learning in dynamic environments that evolve on their own . We find that , though each of these properties can make learning harder , they can also be combined in realistic ways to actually make learning easier . We also provide an open-source environment for future experiments studying “ ecological ” reinforcement learning , and we hope that our experimental conclusions will encourage future research that studies how the nature of the environment in which the RL agent is situated can facilitate learning and the emergence of complex skills . This exercise helps us determine which types of algorithmic challenges we should focus our development efforts towards in order to solve natural environments that agents might encounter . 2 RELATED WORK . Solving general RL problems can be extremely hard in general ( Kakade & Langford , 2002 ) . Reward shaping is a common technique to guide learning ( Ng et al. , 1999 ; Devlin & Kudenko , 2012 ; Brys et al. , 2015 ) but is usually hand crafted and must be carefully designed by human experts ( Griffith et al. , 2013 ) . Shaping the reward may also lead to suboptimal solutions , as it alters the objective of the learning problem . Curriculum learning can be used to first provide the agent with easier tasks , followed by more challenging tasks ( Bengio et al. , 2009 ; Graves et al. , 2017 ; Randløv & Alstrøm , 1998 ; Wang et al. , 2019a ; Yu et al. , 2018 ; Heess et al. , 2017 ) . Curriculum learning can also be viewed in the context of multiple learning agents in an adversarial or cooperative setting ( Silver et al. , 2016 ; Al-Shedivat et al. , 2017 ; Sukhbaatar et al. , 2017 ; Omidshafiei et al. , 2018 ) or where the curriculum is automatically generated ( Florensa et al. , 2017b ; a ; Riedmiller et al. , 2018 ; Wang et al. , 2019b ) . The “ environment shaping ” that we study in our experiments can be viewed as a kind of curriculum learning , and we argue – and show empirically – that this environment shaping approach can in some cases be more effective than more commonly used reward shaping . Improved exploration methods are a possible solution to solving sparse reward tasks . 
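For contrast with environment shaping, the snippet below sketches the standard potential-based reward shaping of Ng et al. ( 1999 ) discussed above, which preserves the optimal policy; the distance-based potential function is an illustrative assumption.

def potential(state):
    # Illustrative potential: negative distance to the nearest needed resource.
    # A real potential like this requires privileged state information.
    return -state["distance_to_nearest_resource"]

def shaped_reward(env_reward, state, next_state, gamma=0.99):
    """Potential-based shaping: adding F(s, s') = gamma * phi(s') - phi(s)
    to the reward leaves the optimal policy of the original task unchanged."""
    return env_reward + gamma * potential(next_state) - potential(state)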
The authors study what they refer to as ecological reinforcement learning, defined as the interaction between properties of the environment and the reinforcement learning agent. They introduce environments with characteristics that reflect natural environments: non-episodic learning, uninformative reward signals, and natural dynamics that cause the environment to change. These factors are shown to significantly affect the learning progress of RL agents and, unexpectedly, the agents can sometimes learn more efficiently in these more challenging conditions.
SP:ff58b8e7f4f0ae436627d7039f8c883a63f15101
Alleviating Privacy Attacks via Causal Learning
1 INTRODUCTION . Machine learning algorithms , especially deep neural networks ( DNNs ) , have found diverse applications in various fields such as healthcare ( Esteva et al. , 2019 ) , gaming ( Mnih et al. , 2013 ) , and finance ( Tsantekidis et al. , 2017 ; Fischer & Krauss , 2018 ) . However , a line of recent research has shown that deep learning algorithms are susceptible to privacy attacks that leak information about the training dataset ( Fredrikson et al. , 2015 ; Rahman et al. , 2018 ; Song & Shmatikov , 2018 ; Hayes et al. , 2017 ) . Particularly , one such attack called membership inference reveals whether a particular data sample was present in the training dataset ( Shokri et al. , 2017 ) . The privacy risks due to membership inference are elevated when the DNNs are trained on sensitive data such as in healthcare applications . For example , patients providing medical records to build a model that detects HIV would not want to reveal their participation in the training dataset . Membership inference attacks are shown to exploit overfitting of the model on the training dataset ( Yeom et al. , 2018 ) . Existing defenses propose the use of generalization techniques such as adding learning rate decay , dropout or using adversarial regularization techniques ( Nasr et al. , 2018b ; Salem et al. , 2018 ) . All these approaches assume that the test data is from the same distribution as the training dataset . In practice , a model trained using data from one distribution is often used on a ( slightly ) different distribution . For example , hospitals in one region may train a model to detect HIV and share it with hospitals in different regions . However , generalizing to a new context is a challenge for any machine learning model . We extend the scope of membership privacy to different distributions and show that the risk from membership attacks increases further on DNNs as the test distribution is changed . That is , the ability of an adversary to distinguish a member from a non-member improves with changes in the test distribution . To alleviate privacy attacks , we propose using models that depend on the causal relationship between input features and the output . Causal learning has been extensively used to guarantee fairness and explainability properties of the predicted output ( Kusner et al. , 2017 ; Nabi & Shpitser , 2018 ; Datta et al. , 2016 ) . However , the connection of causal learning to privacy is as yet unexplored . To the best of our knowledge , we provide the first analysis of the privacy benefits of causal models . By definition , causal relationships are invariant across input distributions ( Peters et al. , 2016 ) , and hence make the predictions of causal models independent of the observed data distribution , let alone the observed dataset . Hence , causal models generalize better even when the distribution changes . In this paper , we show that the generalizability property of causal models directly ensures better privacy guarantees for the input data . Concretely , we prove that under reasonable assumptions , a causal model always provides stronger ( i.e. , smaller ε value ) differential privacy guarantees than a corresponding associational model trained on the same features and with the same amount of noise added to the training dataset . Consequently , we show that membership inference attacks are ineffective ( equivalent to a random guess ) on causal models trained on infinite samples . Empirical attack accuracies on four different datasets confirm our theoretical claims .
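For readers unfamiliar with the attack, the snippet below sketches the simple loss-threshold form of membership inference in the spirit of Yeom et al. ( 2018 ): an example is guessed to be a training member when the model's loss on it is unusually small. The predict_proba interface and the threshold are assumptions for illustration, not the attack pipeline evaluated in this paper.

import numpy as np

def loss_threshold_membership_attack(model, x, y, threshold):
    """Guess 'member' when the per-example cross-entropy loss is below a
    threshold, exploiting the fact that overfit models fit training points
    more closely than unseen points."""
    probs = model.predict_proba(x)                       # shape (n, num_classes)
    per_example_loss = -np.log(probs[np.arange(len(y)), y] + 1e-12)
    return per_example_loss < threshold                  # boolean membership guesses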
We find that 60K training samples are sufficient to reduce the attack accuracy of a causal model to a random guess . In contrast , membership attack accuracy for neural network-based associational models increases as test distributions are changed . The attack accuracy reaches nearly 80 % when the target associational model is trained on 60K training samples and used to predict test data that belong to a different distribution than the training data . Our results show that causal learning approaches are a promising direction for training models on sensitive data . Section 2 describes the properties of causal models . Section 3 proves the connections of causality to differential privacy and robustness to membership attacks . Section 4 provides empirical results . To summarize , we make the following contributions : • For the same amount of added noise , models learned using causal structure provide stronger ε-differential privacy guarantees than corresponding associational models . • Causal models are provably more robust to membership inference attacks than typical associational models such as neural networks . • We simulate practical settings where the test distribution may not be the same as the training distribution and find that the membership inference attack accuracy of causal models is close to a “ random guess ” ( i.e. , 50 % ) while associational models exhibit up to 80 % attack accuracy . 2 PROPERTIES OF CAUSAL MODELS . Causal models are shown to generalize well since the output of these models depends only on the causal relationship between the input features and the outcomes instead of the associations between them . From prior work , we know that the causal relationship between the features is invariant to their distribution ( Peters et al. , 2016 ) . Using this property , we study its effects on the privacy of data . 2.1 BACKGROUND : CAUSAL MODEL . Intuitively , a causal model identifies a subset of features that have a causal relationship with the outcome and learns a function from that subset to the outcome . To construct a causal model , one may use a structural causal graph based on domain knowledge that defines causal features as parents of the outcome under the graph . Alternatively , one may exploit the strong relevance property from Pellet & Elisseeff ( 2008 ) , use score-based learning algorithms ( Scutari , 2009 ) or recent methods for learning invariant relationships from training datasets from different distributions ( Peters et al. , 2016 ; Bengio et al. , 2019 ) , or learn based on a combination of randomized experiments and observed data . Note that this is different from training probabilistic graphical models , wherein an edge conveys an associational relationship . Further details on causal models are in Pearl ( 2009 ) ; Peters et al . ( 2017 ) . For ease of exposition , we assume the structural causal graph framework throughout . Consider data from a distribution ( X , Y ) ∼ P where X is a k-dimensional vector and Y ∈ { 0 , 1 } . Our goal is to learn a function h ( X ) that predicts Y . Figure 1 shows causal graphs that denote the different relationships between X and Y . Nodes of the graph represent variables and a directed edge represents a direct causal relationship from a source to a target node . Denote by Xpa ⊆ X the parents of Y in the causal graph . Figure 1a shows the scenario where X contains variables XS0 that are correlated with Xpa in P , but not necessarily connected to either Xpa or Y .
These correlations may change in the future , thus a generalizable model should not include these features . Similarly , Figure 1b shows parents and children of Xpa . The d-separation principle states that a node is independent of its ancestors conditioned on all its parents ( Pearl , 2009 ) . Thus , Y is independent of XS1 and XS2 conditional on Xpa . Therefore , including them in a model does not add predictive value ( and further , avoids problems when the relationships between XS1 and XS2 may also change ) . Finally , for completeness , the exhaustive set of variables to include is known as the causal Markov Blanket1 , XC , which includes Y ' s parents ( Xpa ) , children ( Ych ) , and parents of children . Conditioned on its Markov blanket ( Figure 1c ) , Y is independent of all other variables in the causal graph . When Y has no descendants in the graph , then the effective Markov blanket includes only its parents , Xpa . The key insight is that building a model for predicting Y using the Markov Blanket XC ensures that the model generalizes to other distributions of X , and also to changes in other causal relationships between X , as long as the causal relationship of XC to Y is stable . We call such a model a causal model , the features in XC the causal features , and assume that all the causal features for Y are observed . In contrast , we call a model that uses all available features an associational model . 2.2 GENERALIZATION TO NEW DISTRIBUTIONS . We state the generalization property of causal models and show how it results in a stronger differential privacy guarantee . We first define In-distribution and Out-of-distribution generalization error . Throughout , L ( · , · ) refers to the loss on a single input and L_P ( · , · ) = E_P L ( · , · ) refers to the expected value of the loss over a distribution P ( X , Y ) . We refer to f : X → Y as the ground-truth labeling function and h : X → Y as the hypothesis function or simply the model . Then , L ( h , h′ ) is any loss function quantifying the difference between two models h and h′ . Definition 1 . In-Distribution Generalization Error ( IDE ) . Consider a dataset S ∼ P ( X , Y ) . Then for a model h : X → Y trained on S , the in-distribution generalization error is given by : IDE_P ( h , f ) = L_P ( h , f ) − L_{S∼P} ( h , f ) ( 1 ) Definition 2 . Out-of-Distribution Generalization Error ( ODE ) . Consider a dataset S sampled from a distribution P ( X , Y ) . Then for a model h : X → Y trained on S , the out-of-distribution generalization error with respect to another distribution P∗ ( X , Y ) is given by : ODE_{P , P∗} ( h , f ) = L_{P∗} ( h , f ) − L_{S∼P} ( h , f ) ( 2 ) Definition 3 . Discrepancy Distance ( disc_L ) ( Def . 4 in Mansour et al . ( 2009 ) ) . Let H be a set of hypotheses , h : X → Y . Let L : Y × Y → R+ define a loss function over Y for any such hypothesis h. Then the discrepancy distance disc_L over any two distributions P ( X , Y ) and P∗ ( X , Y ) is given by : disc_L ( P , P∗ ) = max_{h , h′ ∈ H} | L_P ( h , h′ ) − L_{P∗} ( h , h′ ) | ( 3 ) Intuitively , the term disc_L ( P , P∗ ) denotes the distance between the two distributions . The higher the distance , the higher the chance of an error when transferring h from one distribution to another . Now , we state the theorem on the generalization property of causal models . Theorem 1 . Consider a structural causal graph G that connects X to Y , and causal features XC where XC is a Markov Blanket of Y under G.
Let P ( X , Y ) and P∗ ( X , Y ) be two distributions with arbitrary P ( X ) and P∗ ( X ) such that the causal relationship between XC and Y is preserved , which implies that P ( Y|XC ) = P∗ ( Y|XC ) . Let f : XC → Y be the resultant invariant labelling function such that y = f ( XC ) . Further , assume that HC represents the set of causal models hc : XC → Y that use all causal features and HA represents the set of associational models ha : X → Y that may use all available features , such that HC ⊆ HA and f ∈ HC . Then , for any symmetric loss function L that obeys the triangle inequality , the upper bound of ODE from a dataset S ∼ P ( X , Y ) to P∗ ( called ODE-Bound ) for a causal model hc ∈ HC is less than or equal to the upper bound ODE-Bound of an associational model ha ∈ HA , with probability at least ( 1 − δ )² . ODE-Bound_{P , P∗} ( hc , f ; δ ) ≤ ODE-Bound_{P , P∗} ( ha , f ; δ ) ( 4 ) Footnote 1 : We call it the causal Markov Blanket since it is based on the structural causal graph , to distinguish it from the associational Markov Blanket that is based on the conditional probability distribution of a Bayesian network . Proof . As an example , consider a colored MNIST data distribution P where the classification task is to detect whether a digit is greater than 5 , and where all digits above 5 are colored with the same color . Then , under a suitably expressive class of models , the loss-minimizing associational model may use only the color feature to obtain zero error , while the loss-minimizing causal model will still use the shape ( causal ) features to obtain zero error . On any new P∗ that does not follow the same correlation of digits with color , we expect that the loss-minimizing associational model will have higher error than the loss-minimizing causal model . Formally , since P ( Y|XC ) = P∗ ( Y|XC ) , the optimal causal model that minimizes loss over P is the same as the loss-minimizing model over P∗ . That is , h^{OPT}_{c , P} = h^{OPT}_{c , P∗} . However , for some associational models , h^{OPT}_{a , P} ≠ h^{OPT}_{a , P∗} , and thus there is an additional loss term when generalizing to data from P∗ . The rest of the proof follows from the triangle inequality of the loss function and the standard bounds for IDE from past work . The detailed proof is in Appendix Section A.1 . Corollary 1 . Consider a causal model hc : XC → Y and an associational model ha : X → Y trained on a dataset S ∼ P . Let ( x , y ) ∈ S and ( x′ , y′ ) ∉ S be two input instances such that they share the same true labelling function , y = f ( x_c ) and y′ = f ( x′_c ) . Then , the worst-case generalization error for a causal model on any such x′ is less than or equal to that for an associational model . [ Proof in Appendix Section A.2 ] max_{x ∈ S , x′ : y′ = f ( x′_c )} [ L_{x′} ( hc , f ) − L_x ( hc , f ) ] ≤ max_{x ∈ S , x′ : y′ = f ( x′_c )} [ L_{x′} ( ha , f ) − L_x ( ha , f ) ] ( 5 )
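The colored-digit argument in the proof sketch can be mimicked with a tiny synthetic experiment; the data-generating process and the logistic-regression models below are illustrative assumptions, meant only to show why a model restricted to the causal feature transfers across the distribution shift while one that also uses the spuriously correlated feature does not.

import numpy as np
from sklearn.linear_model import LogisticRegression

def sample(n, color_corr, rng):
    shape = rng.normal(size=n)                               # causal feature ("digit shape")
    y = (shape + 0.8 * rng.normal(size=n) > 0).astype(int)   # noisy causal mechanism P(Y|shape)
    color = np.where(rng.random(n) < color_corr, y, 1 - y)   # spurious feature ("color")
    return np.c_[shape, color.astype(float)], y

rng = np.random.default_rng(0)
X_tr, y_tr = sample(5000, color_corr=0.95, rng=rng)   # train: color agrees with y 95% of the time
X_te, y_te = sample(5000, color_corr=0.05, rng=rng)   # test: the correlation is reversed

assoc = LogisticRegression().fit(X_tr, y_tr)          # may lean on the spurious color feature
causal = LogisticRegression().fit(X_tr[:, :1], y_tr)  # restricted to the causal feature only

print("associational accuracy under shift:", assoc.score(X_te, y_te))
print("causal accuracy under shift:       ", causal.score(X_te[:, :1], y_te))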
The authors consider a transfer learning problem where the source distribution is P(X,Y), the target distribution is P*(X,Y), and the classifier is trained on data from the source distribution. They also assume that the causal graph generating the data (X and Y) is identical while the conditional probabilities (mechanisms) could change between the source and the target. Further, they assume that if X_C is the Markov Blanket for variable Y in P and P*, then P(Y|X_C) = P*(Y|X_C). Therefore the best predictor in terms of cross-entropy loss for both distributions is identical if it focuses on the variables in the Markov Blanket. The authors define a "causal hypothesis" as one that uses only the variables in the Markov Blanket (X_C) to predict Y.
SP:6260d6cfb07fe0981539d9a1e4a47d21479316ad
Alleviating Privacy Attacks via Causal Learning
Overview: This paper discusses the risk of membership inference attacks that deep neural networks might face when deployed in practice on real-world datasets. Membership inference attacks can result in privacy breaches, a significant concern for the many fields that stand to benefit from using deep learning in applications. The authors demonstrate how attack accuracy increases when the model is trained on one distribution and tested on data from a different one. They propose the use of causal learning approaches in order to mitigate the risk of membership inference attacks. Causal models can handle distribution shifts across datasets because they learn using a causal structure.
SP:6260d6cfb07fe0981539d9a1e4a47d21479316ad
Learning Effective Exploration Strategies For Contextual Bandits
1 INTRODUCTION . In a contextual bandit problem , an agent attempts to optimize its behavior over a sequence of rounds based on limited feedback ( Kaelbling , 1994 ; Auer , 2003 ; Langford & Zhang , 2008 ) . In each round , the agent chooses an action based on a context ( features ) for that round , and observes a reward for that action but no others ( §2 ) . Contextual bandit problems arise in many real-world settings like online recommendations and personalized medicine . As in reinforcement learning , the agent must learn to balance exploitation ( taking actions that , based on past experience , it believes will lead to high instantaneous reward ) and exploration ( trying actions that it knows less about ) . In this paper , we present a meta-learning approach to automatically learn a good exploration mechanism from data . To achieve this , we use synthetic supervised learning data sets on which we can simulate contextual bandit tasks in an offline setting . Based on these simulations , our algorithm , MÊLÉE ( MEta LEarner for Exploration ) 1 , learns a good heuristic exploration strategy that should ideally generalize to future contextual bandit problems . MÊLÉE contrasts with more classical approaches to exploration ( like ε-greedy or LinUCB ; see §6 ) , in which exploration strategies are constructed by expert algorithm designers . These approaches often achieve provably good exploration strategies in the worst case , but are potentially overly pessimistic and are sometimes computationally intractable . At training time ( §3.2 ) , MÊLÉE simulates many contextual bandit problems from fully labeled synthetic data . Using this data , in each round , MÊLÉE is able to counterfactually simulate what would happen under all possible action choices . We can then use this information to compute regret estimates for each action , which can be optimized using the AggreVaTe imitation learning algorithm ( Ross & Bagnell , 2014 ) . Our imitation learning strategy mirrors that of the meta-learning approach of Bachman et al . ( 2017 ) in the active learning setting . We present a simplified , stylized analysis of the behavior of MÊLÉE to ensure that our cost function encourages good behavior ( §4 ) . Empirically , we use MÊLÉE to train an exploration policy on only synthetic datasets and evaluate this policy on both a contextual bandit task based on a natural learning-to-rank dataset as well as three hundred simulated contextual bandit tasks ( §5.2 ) . We compare the trained policy to a number of alternative exploration algorithms , and show the efficacy of our approach ( §5.3 ) . Footnote 1 : Code release : the code is available online at https://www.dropbox.com/sh/dc3v8po5cbu8zaw/AACu1f_4c4wIZxD1e7W0KVZ0a?dl=0 . 2 PRELIMINARIES : CONTEXTUAL BANDITS AND POLICY OPTIMIZATION . Contextual bandits is a model of interaction in which an agent chooses actions ( based on contexts ) and receives immediate rewards for that action alone . For example , in a simplified news personalization setting , at each time step t , a user arrives and the system must choose a news article to display to them . Each possible news article corresponds to an action a , and the user corresponds to a context x_t . After the system chooses an article a_t to display , it can observe , for instance , the amount of time that the user spends reading that article , which it can use as a reward r_t ( a_t ) . Formally , we largely follow the setup and notation of Agarwal et al . ( 2014 ) . Let X be an input space of contexts ( users ) and [ K ] = { 1 ,
. . . , K } be a finite action space ( articles ) . We consider the statistical setting in which there exists a fixed but unknown distribution D over pairs ( x , r ) ∈ X × [ 0 , 1 ]^K , where r is a vector of rewards ( for convenience , we assume all rewards are bounded in [ 0 , 1 ] ) . In this setting , the world operates iteratively over rounds t = 1 , 2 , . . . . Each round t : 1 . The world draws ( x_t , r_t ) ∼ D and reveals context x_t . 2 . The agent ( randomly ) chooses action a_t ∈ [ K ] based on x_t , and observes reward r_t ( a_t ) . The goal of an algorithm is to maximize the cumulative sum of rewards over time . Typically the primary quantity considered is the average regret of a sequence of actions a_1 , . . . , a_T relative to the behavior of the best possible function in a prespecified class F : λ ( a_1 , . . . , a_T ) = max_{f ∈ F} ( 1 / T ) ∑_{t=1}^{T} [ r_t ( f ( x_t ) ) − r_t ( a_t ) ] ( 1 ) An agent is called no-regret if its average regret is zero in the limit of large T . To produce a good agent for interacting with the world , we assume access to a function class F and to an oracle policy optimizer for that function class . For example , F may be a set of single-layer neural networks mapping user features x ∈ X to predicted rewards for actions a ∈ [ K ] . Formally , the observable record of interaction resulting from round t is the tuple ( x_t , a_t , r_t ( a_t ) , p_t ( a_t ) ) ∈ X × [ K ] × [ 0 , 1 ] × [ 0 , 1 ] , where p_t ( a_t ) is the probability that the agent chose action a_t , and the full history of interaction is h_t = 〈 ( x_i , a_i , r_i ( a_i ) , p_i ( a_i ) ) 〉_{i=1}^{t} . The oracle policy optimizer , POLOPT , takes as input a history of user interactions and outputs an f ∈ F with low expected regret . A standard example of a policy optimizer is to combine inverse propensity scaling ( IPS ) with a regression algorithm ( Dudik et al. , 2011 ) . Here , given a history h , each tuple ( x , a , r , p ) in that history is mapped to a multiple-output regression example . The input for this regression example is the same x ; the output is a vector of K costs , all of which are zero except the a-th component , which takes value r/p . This mapping is done for all tuples in the history , and a supervised learning algorithm on the function class F is used to produce a low-regret regressor f . This is the function returned by the policy optimizer . IPS , and other estimators that have lower variance than IPS ( such as the doubly-robust estimator ) , have the property of being unbiased . In experiments , we use the direct method ( Dudik et al. , 2011 ) largely for its simplicity ; however , MÊLÉE is agnostic to the type of estimator used by the policy optimizer . 3 APPROACH : LEARNING AN EFFECTIVE EXPLORATION STRATEGY . In order to have an effective approach to the contextual bandit problem , one must be able both to optimize a policy based on historic data and to make decisions about how to explore . The exploration/exploitation dilemma is fundamentally about long-term payoffs : is it worth trying something potentially suboptimal now in order to learn how to behave better in the future ? A particularly simple and effective form of exploration is ε-greedy : given a function f output by POLOPT , act according to f ( x ) with probability ( 1 − ε ) and act uniformly at random with probability ε . Intuitively , one would hope to improve on such a strategy by taking more ( any ! ) information into account ; for instance , basing the probability of exploration on f ' s uncertainty .
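The IPS regression reduction described above can be written out in a few lines; the ridge-regression learner below is an illustrative stand-in for the function class F, and the history format (context array, action index, reward, logging probability) follows the tuple defined in the text.

import numpy as np
from sklearn.linear_model import Ridge

def polopt_ips(history, num_actions):
    """Policy optimizer via inverse propensity scaling: each logged tuple
    (x, a, r, p) becomes a K-output regression example whose target vector is
    zero everywhere except value r / p at index a."""
    X = np.array([x for (x, a, r, p) in history])
    targets = np.zeros((len(history), num_actions))
    for i, (_, a, r, p) in enumerate(history):
        targets[i, a] = r / p                        # unbiased estimate of the reward
    f = Ridge().fit(X, targets)                      # stand-in for learning over F
    return lambda x: int(np.argmax(f.predict(np.asarray(x).reshape(1, -1))))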
In this section , we describe MÊLÉE , first by describing how it operates at test time when applied to a new contextual bandit problem ( §3.1 ) , and then by describing how to train it using synthetic simulated contextual bandit problems ( §3.2 ) . 3.1 TEST TIME BEHAVIOR OF MÊLÉE . Our goal in this paper is to learn how to explore from experience . The training procedure for MÊLÉE will use offline supervised learning problems to learn an exploration policy π , which takes two inputs : a function f ∈ F and a context x , and outputs an action . In our example , f will be the output of the policy optimizer on all historic data , and x will be the current user . This is used to produce an agent which interacts with the world , maintaining an initially empty history buffer h , as : 1 . The world draws ( x_t , r_t ) ∼ D and reveals context x_t . 2 . The agent computes f_t ← POLOPT ( h ) and a greedy action ã_t = π ( f_t , x_t ) . 3 . The agent plays a_t = ã_t with probability ( 1 − µ ) , and a_t uniformly at random otherwise . 4 . The agent observes r_t ( a_t ) and appends ( x_t , a_t , r_t ( a_t ) , p_t ) to the history h , where p_t = µ/K if a_t ≠ ã_t ; and p_t = 1 − µ + µ/K if a_t = ã_t . Here , f_t is the function optimized on the historical data , and π uses it and x_t to choose an action . Intuitively , π might choose to use the prediction f_t ( x_t ) most of the time , unless f_t is quite uncertain on this example , in which case π might choose to return the second ( or third ) most likely action according to f_t . The agent then performs a small amount of additional µ-greedy-style exploration : most of the time it acts according to π but occasionally it explores some more . In practice ( §5 ) , we find that setting µ = 0 is optimal in aggregate , but non-zero µ is necessary for our theory ( §4 ) . Importantly , we wish to train π using one set of tasks ( for which we have fully supervised data on which to run simulations ) and apply it to wholly different tasks ( for which we only have bandit feedback ) . To achieve this , we allow π to depend representationally on f_t in arbitrary ways : for instance , it might use features that capture f_t ' s uncertainty on the current example ( see §5.1 for details ) . We additionally allow π to depend in a task-independent manner on the history ( for instance , which actions have not yet been tried ) : it can use features of the actions , rewards and probabilities in the history but not depend directly on the contexts x . This is to ensure that π only learns to explore and not also to solve the underlying task-dependent classification problem . Because π needs to learn to be task independent , we found that if f_t ' s predictions were uncalibrated , it was very difficult for π to generalize well to unseen tasks . Therefore , we additionally allow π to depend on a very small amount of fully labeled data from the task at hand , which we use to allow π to calibrate f_t ' s predictions . In our experiments we use only 30 fully labeled examples , but alternative approaches to calibrating f_t that do not require this data would be preferable .
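The four-step protocol above can be restated as a short loop; polopt, pi, and draw_context_and_rewards below are placeholder callables standing in for POLOPT, the learned exploration policy π, and the world, so this is a schematic re-statement rather than the released implementation.

import random

def run_melee_agent(polopt, pi, draw_context_and_rewards, num_actions, horizon, mu=0.0):
    """Schematic test-time loop: refit f on the history, let the learned
    exploration policy pick a greedy action, then mix in mu-greedy exploration
    and log the action probability needed by the policy optimizer."""
    history = []
    for t in range(horizon):
        x_t, r_t = draw_context_and_rewards()        # world reveals context x_t
        f_t = polopt(history)                        # step 2: optimize on history h
        a_greedy = pi(f_t, x_t)
        if random.random() < mu:
            a_t = random.randrange(num_actions)      # step 3: uniform exploration
        else:
            a_t = a_greedy
        p_t = 1 - mu + mu / num_actions if a_t == a_greedy else mu / num_actions
        history.append((x_t, a_t, r_t[a_t], p_t))    # step 4: record and continue
    return history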
This paper proposes a meta-learning algorithm to solve the problem of exploration in a contextual bandit task using prior knowledge. This is analogous to how exploration strategies have been learned from past data in meta-learning for RL (for example, [1]). Their algorithm simulates contextual bandit problems from fully labeled data and uses them to learn an exploration policy that works well on tasks with bandit feedback. The training step for the exploration policy builds upon the AggreVaTe algorithm, where policy optimization is performed on the history-augmented data by using a separate roll-out policy to estimate the advantage of a particular action for a particular context, from the point of view of regret minimization. In terms of theory, they show that by using specific algorithms (e.g., Banditron) as the inner policy optimization procedure, a no-regret algorithm can be obtained.
SP:bc0d62459bcb00a581f2273b5f4aabe3ead1d0c1
Learning Effective Exploration Strategies For Contextual Bandits
1 INTRODUCTION . In a contextual bandit problem , an agent attempts to optimize its behavior over a sequence of rounds based on limited feedback ( Kaelbling , 1994 ; Auer , 2003 ; Langford & Zhang , 2008 ) . In each round , the agent chooses an action based on a context ( features ) for that round , and observes a reward for that action but no others ( §2 ) . Contextual bandit problems arise in many real-world settings like online recommendations and personalized medicine . As in reinforcement learning , the agent must learn to balance exploitation ( taking actions that , based on past experience , it believes will lead to high instantaneous reward ) and exploration ( trying actions that it knows less about ) . In this paper , we present a meta-learning approach to automatically learn a good exploration mechanism from data . To achieve this , we use synthetic supervised learning data sets on which we can simulate contextual bandit tasks in an offline setting . Based on these simulations , our algorithm , MÊLÉE ( MEta LEarner for Exploration ) 1 , learns a good heuristic exploration strategy that should ideally generalize to future contextual bandit problems . MÊLÉE contrasts with more classical approaches to exploration ( like -greedy or LinUCB ; see §6 ) , in which exploration strategies are constructed by expert algorithm designers . These approaches often achieve provably good exploration strategies in the worst case , but are potentially overly pessimistic and are sometimes computationally intractable . At training time ( § 3.2 ) , MÊLÉE simulates many contextual bandit problems from fully labeled synthetic data . Using this data , in each round , MÊLÉE is able to counterfactually simulate what would happen under all possible action choices . We can then use this information to compute regret estimates for each action , which can be optimized using the AggreVaTe imitation learning algorithm ( Ross & Bagnell , 2014 ) . Our imitation learning strategy mirrors that of the meta-learning approach of Bachman et al . ( 2017 ) in the active learning setting . We present a simplified , stylized analysis of the behavior of MÊLÉE to ensure that our cost function encourages good behavior ( §4 ) . Empirically , we use MÊLÉE to train an exploration policy on only synthetic datasets and evaluate this policy on both a contextual bandit task based on a natural learning to rank dataset as well as three hundred simulated contextual bandit tasks ( §5.2 ) . We compare the trained policy to a number of alternative exploration algorithms , and show the efficacy of our approach ( §5.3 ) . 1Code release : the code is available online https : //www.dropbox.com/sh/ dc3v8po5cbu8zaw/AACu1f_4c4wIZxD1e7W0KVZ0a ? dl=0 2 PRELIMINARIES : CONTEXTUAL BANDITS AND POLICY OPTIMIZATION . Contextual bandits is a model of interaction in which an agent chooses actions ( based on contexts ) and receives immediate rewards for that action alone . For example , in a simplified news personalization setting , at each time step t , a user arrives and the system must choose a news article to display to them . Each possible news article corresponds to an action a , and the user corresponds to a context xt . After the system chooses an article at to display , it can observe , for instance , the amount of time that the user spends reading that article , which it can use as a reward rt ( at ) . Formally , we largely follow the setup and notation of Agarwal et al . ( 2014 ) . Let X be an input space of contexts ( users ) and [ K ] = { 1 , . 
. . , K } be a finite action space ( articles ) . We consider the statistical setting in which there exists a fixed but unknown distribution D over pairs ( x , r ) ∈ X× [ 0 , 1 ] K , where r is a vector of rewards ( for convenience , we assume all rewards are bounded in [ 0 , 1 ] ) . In this setting , the world operates iteratively over rounds t = 1 , 2 , . . . . Each round t : 1 . The world draws ( xt , rt ) ∼ D and reveals context xt . 2 . The agent ( randomly ) chooses action at ∈ [ K ] based on xt , and observes reward rt ( at ) . The goal of an algorithm is to maximize the cumulative sum of rewards over time . Typically the primary quantity considered is the average regret of a sequence of actions a1 , . . . , aT to the behavior of the best possible function in a prespecified class F : λ ( a1 , . . . , aT ) = max f∈F 1 T T∑ t=1 [ rt ( f ( xt ) ) − rt ( at ) ] ( 1 ) An agent is call no-regret if its average regret is zero in the limit of large T . To produce a good agent for interacting with the world , we assume access to a function class F and to an oracle policy optimizer for that function class . For example , F may be a set of single layer neural networks mapping user features x ∈ X to predicted rewards for actions a ∈ [ K ] . Formally , the observable record of interaction resulting from round t is the tuple ( xt , at , rt ( at ) , pt ( at ) ) ∈ X× [ K ] × [ 0 , 1 ] × [ 0 , 1 ] , where pt ( at ) is the probability that the agent chose action at , and the full history of interaction is ht = 〈 ( xi , ai , ri ( ai ) , pi ( ai ) ) 〉ti=1 . The oracle policy optimizer , POLOPT , takes as input a history of user interactions and outputs an f ∈ F with low expected regret . A standard example of a policy optimizer is to combine inverse propensity scaling ( IPS ) with a regression algorithm ( Dudik et al. , 2011 ) . Here , given a history h , each tuple ( x , a , r , p ) in that history is mapped to a multiple-output regression example . The input for this regression example is the same x ; the output is a vector of K costs , all of which are zero except the ath component , which takes value r/p . This mapping is done for all tuples in the history , and a supervised learning algorithm on the function class F is used to produce a low-regret regressor f . This is the function returned by the policy optimizer . IPS , and other estimators that have lower-variance than IPS ( such as the doubly-robust estimator ) , have the property of being unbiased . In experiments , we use the direct method ( Dudik et al. , 2011 ) largely for its simplicity , however , MÊLÉE is agnostic to the type of the estimator used by the policy optimizer . 3 APPROACH : LEARNING AND EFFECTIVE EXPLORATION STRATEGY . In order to have an effective approach to the contextual bandit problem , one must be able to both optimize a policy based on historic data and make decisions about how to explore . The exploration/exploitation dilemma is fundamentally about long-term payoffs : is it worth trying something potentially suboptimal now in order to learn how to behave better in the future ? A particularly simple and effective form of exploration is -greedy : given a function f output by POLOPT , act according to f ( x ) with probability ( 1− ) and act uniformly at random with probability . Intuitively , one would hope to improve on such a strategy by taking more ( any ! ) information into account ; for instance , basing the probability of exploration on f ’ s uncertainty . 
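To make the policy-optimizer abstraction concrete, here is a hedged sketch of the IPS-based reduction described above: each logged tuple becomes a K-output regression example whose only non-zero target is r/p at the chosen action. The `fit_regressor` callable stands in for any supervised learner over the function class F and is an assumption of ours.

```python
import numpy as np

def ips_policy_optimizer(history, K, fit_regressor):
    """Turn bandit feedback into a multi-output regression problem via
    inverse propensity scaling, then fit a regressor f: X -> R^K.

    history: list of (x, a, r, p) tuples as logged by the agent.
    fit_regressor(X, Y): assumed supervised learner returning a predictor.
    """
    X, Y = [], []
    for x, a, r, p in history:
        y = np.zeros(K)
        y[a] = r / p          # unbiased IPS target for the chosen action, zero elsewhere
        X.append(x)
        Y.append(y)
    return fit_regressor(np.array(X), np.array(Y))   # greedy policy: argmax_a f(x)[a]
```

The direct method used in the paper's experiments would instead regress on the observed reward r directly rather than on r/p; the interface of the resulting POLOPT is the same either way.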
This paper introduces a meta-learning algorithm for the contextual bandit problem, MELEE, which learns an exploration policy from simulated, synthetic contextual bandit tasks. Training is divided into two steps. In step one, a policy optimizer is trained that maps features and actions to rewards; it can be used to reveal the most valuable action to take according to the modeled reward. Because ground-truth labels exist in the synthetic datasets, all possible actions and their corresponding values can be revealed to the policy optimizer, which then suggests which action to take. The algorithm acts in an ε-greedy fashion, i.e., with probability 1−ε it follows the suggestion and with probability ε it samples an action uniformly at random. The policy optimizer, the historical actions, and the actions taken are appended to the training set used for learning the exploration policy in the next step, and this step-one procedure is repeated over rounds. In step two, that training set is used to train the exploration policy. At test time, contexts are drawn from the real world: the policy optimizer first evaluates the whole history, and the exploration policy then generates actions from the policy optimizer's output and the context, again acting ε-greedily. The proposed algorithm is evaluated on a learning-to-rank dataset and 300 synthetic datasets, and shows better performance in most cases.
SP:bc0d62459bcb00a581f2273b5f4aabe3ead1d0c1
Differential Privacy in Adversarial Learning with Provable Robustness
1 INTRODUCTION . The pervasiveness of machine learning exposes new vulnerabilities in software systems , in which deployed machine learning models can be used ( a ) to reveal sensitive information in private training data ( Fredrikson et al. , 2015 ) , and/or ( b ) to make the models misclassify , such as adversarial examples ( Carlini & Wagner , 2017 ) . Efforts to prevent such attacks typically seek one of three solutions : ( 1 ) Models which preserve differential privacy ( DP ) ( Dwork et al. , 2006 ) , a rigorous formulation of privacy in probabilistic terms ; ( 2 ) Adversarial training algorithms , which augment training data to consist of benign examples and adversarial examples crafted during the training process , thereby empirically increasing the classification accuracy given adversarial examples ( Kardan & Stanley , 2017 ; Matyasko & Chau , 2017 ) ; and ( 3 ) Provable robustness , in which the model classification given adversarial examples is theoretically guaranteed to be consistent , i.e. , a small perturbation in the input does not change the predicted label ( Cisse et al. , 2017 ; Kolter & Wong , 2017 ) . On the one hand , private models , trained with existing privacy-preserving mechanisms ( Abadi et al. , 2016 ; Shokri & Shmatikov , 2015 ; Phan et al. , 2016 ; 2017b ; a ; Yu et al. , 2019 ; Lee & Kifer , 2018 ) , are unshielded under adversarial examples . On the other hand , robust models , trained with adversarial learning algorithms ( with or without provable robustness to adversarial examples ) , do not offer privacy protections to the training data . That one-sided approach poses serious risks to machine learning-based systems ; since adversaries can attack a deployed model by using both privacy inference attacks and adversarial examples . To be safe , a model must be i ) private to protect the training data , and ii ) robust to adversarial examples . Unfortunately , there has not yet been research on how to develop such a model , which thus remains a largely open challenge . Simply combining existing DP-preserving mechanisms and provable robustness conditions ( Cisse et al. , 2017 ; Kolter & Wong , 2017 ; Raghunathan et al. , 2018 ) can not solve the problem , for many reasons . ( a ) Existing sensitivity bounds ( Phan et al. , 2016 ; 2017b ; a ) and designs ( Yu et al. , 2019 ; Lee & Kifer , 2018 ) have not been developed to protect the training data in adversarial training . It is obvious that using adversarial examples crafted from the private training data to train our models introduces a previously unknown privacy risk , disclosing the participation of the benign examples ( Song et al. , 2019 ) . ( b ) There is an unrevealed interplay among DP preservation , adversarial learning , and robustness bounds . ( c ) Existing algorithms can not be readily applied to address the trade-off among model utility , privacy loss , and robustness . Therefore , theoretically bounding the robustness of a model ( which both protects the privacy and is robust against adversarial examples ) is nontrivial . Our Contributions . Motivated by this open problem , we propose to develop a novel differentially private adversarial learning ( DPAL ) mechanism to : 1 ) preserve DP of the training data , 2 ) be provably and practically robust to adversarial examples , and 3 ) retain high model utility . In our mech- anism , privacy-preserving noise is injected into inputs and hidden layers to achieve DP in learning private model parameters ( Theorem 1 ) . 
Then , we incorporate ensemble adversarial learning into our mechanism to improve the decision boundary under DP protections . To do this , we introduce a concept of DP adversarial examples crafted using benign examples in the private training data under DP guarantees ( Eq . 9 ) . To address the trade-off between model utility and privacy loss , we propose a new DP adversarial objective function to tighten the model ’ s global sensitivity ( Theorem 3 ) ; thus , we significantly reduce the amount of noise injected into our function , compared with existing works ( Phan et al. , 2016 ; 2017b ; a ) . In addition , ensemble DP adversarial examples with a dynamic perturbation size µa are introduced into the training process to further improve the robustness of our mechanism under different attack algorithms . An end-to-end privacy analysis shows that , by slitting the private training data into disjoint and fixed batches across epochs , the privacy budget in our DPAL is not accumulated across training steps ( Theorem 4 ) . After preserving DP in learning model parameters , we establish a solid connection among privacy preservation , adversarial learning , and provable robustness . Noise injected into different layers is considered as a sequence of randomizing mechanisms , providing different levels of robustness . By leveraging the sequential composition theory in DP ( Dwork & Roth , 2014 ) , we derive a novel generalized robustness bound , which essentially is a composition of these levels of robustness ( Theorem 5 and Proposition 1 ) . To our knowledge , our mechanism establishes the first connection between DP preservation and provable robustness against adversarial examples in adversarial learning . Rigorous experiments conducted on MNIST and CIFAR-10 datasets ( Lecun et al. , 1998 ; Krizhevsky & Hinton , 2009 ) show that our mechanism notably enhances the robustness of DP deep neural networks , compared with existing mechanisms . 2 BACKGROUND . In this section , we revisit adversarial learning , DP , and our problem definition . Let D be a database that contains N tuples , each of which contains data x ∈ [ −1 , 1 ] d and a ground-truth label y ∈ ZK , withK possible categorical outcomes . Each y is a one-hot vector ofK categories y = { y1 , . . . , yK } . A single true class label yx ∈ y given x ∈ D is assigned to only one of the K categories . On input x and parameters θ , a model outputs class scores f : Rd → RK that maps d-dimensional inputs x to a vector of scores f ( x ) = { f1 ( x ) , . . . , fK ( x ) } s.t . ∀k ∈ [ 1 , K ] : fk ( x ) ∈ [ 0 , 1 ] and∑K k=1 fk ( x ) = 1 . The class with the highest score value is selected as the predicted label for the data tuple , denoted as y ( x ) = maxk∈K fk ( x ) . A loss function L ( f ( x ) , y ) presents the penalty for mismatching between the predicted values f ( x ) and original values y . For the sake of clarity , the notations and terminologies frequently used in this paper are summarized in Table 1 ( Appendix A ) . Let us briefly revisit DP-preserving techniques in deep learning , starting with the definition of DP . Definition 1 ( , δ ) -DP ( Dwork et al. , 2006 ) . A randomized algorithm A fulfills ( , δ ) -DP , if for any two databases D and D′ differing at most one tuple , and for all O ⊆ Range ( A ) , we have : Pr [ A ( D ) = O ] ≤ e Pr [ A ( D′ ) = O ] + δ ( 1 ) A smaller enforces a stronger privacy guarantee . Here , controls the amount by which the distributions induced by D and D′ may differ , δ is a broken probability . 
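As a concrete (hedged) illustration of Definition 1, the classical Gaussian mechanism is one standard way to satisfy (ε, δ)-DP for a query with known l2-sensitivity. This sketch is generic background only; it is not the noise calibration used by DPAL itself, and the calibration below is the textbook one, valid for ε < 1.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, eps, delta):
    """Release a scalar or vector query answer with (eps, delta)-DP via the
    classical Gaussian mechanism (Dwork & Roth calibration, eps < 1)."""
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return value + np.random.normal(0.0, sigma, size=np.shape(value))
```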
DP also applies to general metrics ρ ( D , D′ ) ≤ 1 , where ρ can be lp-norms ( Chatzikokolakis et al. , 2013 ) . DP-preserving algorithms in deep learning can be categorized into two lines : 1 ) introducing noise into gradients of parameters ( Abadi et al. , 2016 ; Shokri & Shmatikov , 2015 ; Abadi et al. , 2017 ; Yu et al. , 2019 ; Lee & Kifer , 2018 ; Phan et al. , 2019 ) , 2 ) injecting noise into objective functions ( Phan et al. , 2016 ; 2017b ; a ) , and 3 ) injecting noise into labels ( Papernot et al. , 2018 ) . In Lemmas 2 and 4 , we will show that our mechanism achieves better sensitivity bounds compared with existing works ( Phan et al. , 2016 ; 2017b ; a ) . Adversarial Learning . For some target model f and inputs ( x , yx ) , the adversary ’ s goal is to find an adversarial example xadv = x + α , where α is the perturbation introduced by the attacker , such that : ( 1 ) xadv and x are close , and ( 2 ) the model misclassifies xadv , i.e. , y ( xadv ) 6= y ( x ) . In this paper , we consider well-known lp∈ { 1,2 , ∞ } -norm bounded attacks ( Goodfellow et al. , 2014b ) . Let lp ( µ ) = { α ∈ Rd : ‖α‖p ≤ µ } be the lp-norm ball of radius µ . One of the goals in adversarial learning is to minimize the risk over adversarial examples : θ∗ = arg minθ E ( x , ytrue ) ∼D [ max‖α‖p≤µ L ( f ( x + α , θ ) , yx ) ] , where an attack is used to approximate solutions to the inner maximization problem , and the outer minimization problem corresponds to training the model f with parameters θ over these adversarial examples xadv = x + α . There are two basic adversarial example attacks . The first one is a single-step algorithm , in which only a single gradient computation is required . For instance , FGSM algorithm ( Goodfellow et al. , 2014b ) finds adversarial examples by solving the inner maximization max‖α‖p≤µ L ( f ( x + α , θ ) , yx ) . The second one is an iterative algorithm , in which multiple gradients are computed and updated . For instance , in ( Kurakin et al. , 2016a ) , FGSM is applied multiple times with Tµ small steps , each of which has a size of µ/Tµ . To improve the robustness of models , prior work focused on two directions : 1 ) Producing correct predictions on adversarial examples , while not compromising the accuracy on legitimate inputs ( Kardan & Stanley , 2017 ; Matyasko & Chau , 2017 ; Wang et al. , 2016 ; Papernot et al. , 2016b ; a ; Gu & Rigazio , 2014 ; Papernot & McDaniel , 2017 ; Hosseini et al. , 2017 ) ; and 2 ) Detecting adversarial examples ( Metzen et al. , 2017 ; Grosse et al. , 2017 ; Xu et al. , 2017 ; Abbasi & Gagné , 2017 ; Gao et al. , 2017 ) . Among existing solutions , adversarial training appears to hold the greatest promise for learning robust models ( Tramèr et al. , 2017 ) . One of the well-known algorithms was proposed in ( Kurakin et al. , 2016b ) . At every training step , new adversarial examples are generated and injected into batches containing both benign and adversarial examples . The typical adversarial learning in ( Kurakin et al. , 2016b ) is presented in Alg . 2 ( Appendix B ) . DP and Provable Robustness . Recently , some algorithms ( Cisse et al. , 2017 ; Kolter & Wong , 2017 ; Raghunathan et al. , 2018 ; Cohen et al. , 2019 ; Li et al. , 2018 ) have been proposed to derive provable robustness , in which each prediction is guaranteed to be consistent under the perturbation α , if a robustness condition is held . 
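A minimal sketch of the single-step FGSM attack described above, written against a generic PyTorch-style model; the model, loss function, and the [-1, 1] input range (taken from the data definition earlier in this section) are the only assumptions.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, mu):
    """Single-step FGSM: x_adv = x + mu * sign(grad_x L(f(x), y)),
    an l_inf-bounded perturbation of radius mu."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + mu * x.grad.sign()
    return x_adv.clamp(-1.0, 1.0).detach()   # keep inputs in the paper's [-1, 1] range
```

The iterative variant mentioned above simply applies this step Tµ times with step size µ/Tµ, re-clamping after each step.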
Given a benign example x , we focus on achieving a robustness condition to attacks of lp ( µ ) -norm , as follows : ∀α ∈ lp ( µ ) : fk ( x+ α ) > max i : i 6=k fi ( x+ α ) ( 2 ) where k = y ( x ) , indicating that a small perturbation α in the input does not change the predicted label y ( x ) . To achieve the robustness condition in Eq . 2 , Lecuyer et al . ( Lecuyer et al. , 2018 ) introduce an algorithm , called PixelDP . By considering an input x ( e.g. , images ) as databases in DP parlance , and individual features ( e.g. , pixels ) as tuples , PixelDP shows that randomizing the scoring function f ( x ) to enforce DP on a small number of pixels in an image guarantees robustness of predictions against adversarial examples . To randomize f ( x ) , random noise σr is injected into either input x or an arbitrary hidden layer , resulting in the following ( r , δr ) -PixelDP condition : Lemma 1 ( r , δr ) -PixelDP ( Lecuyer et al. , 2018 ) . Given a randomized scoring function f ( x ) satisfying ( r , δr ) -PixelDP w.r.t . a lp-norm metric , we have : ∀k , ∀α ∈ lp ( 1 ) : Efk ( x ) ≤ e rEfk ( x+ α ) + δr ( 3 ) where Efk ( x ) is the expected value of fk ( x ) , r is a predefined budget , δr is a broken probability . At the prediction time , a certified robustness check is implemented for each prediction . A generalized robustness condition is proposed as follows : Êlbfk ( x ) > e2 r max i : i 6=k Êubfi ( x ) + ( 1 + e r ) δr ( 4 ) where Êlb and Êub are the lower and upper bounds of the expected value Êf ( x ) = 1n ∑ n f ( x ) n , derived from the Monte Carlo estimation with an η-confidence , given n is the number of invocations of f ( x ) with independent draws in the noise σr . Passing the check for a given input guarantees that no perturbation up to lp ( 1 ) -norm can change the model ’ s prediction . PixelDP does not preserve DP in learning private parameters θ to protect the training data . That is different from our goal .
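A hedged sketch of the certified-prediction check in Eq. 4: run n noisy forward passes, form Monte Carlo estimates of E f_k(x), widen them into lower/upper confidence bounds (here via a simple Hoeffding interval with a union bound over classes, one plausible choice for the η-confidence construction), and test the generalized robustness condition.

```python
import numpy as np

def certified_check(noisy_scores, eps_r, delta_r, eta=0.05):
    """noisy_scores: array of shape (n, K) holding the K class scores from n
    independent noisy invocations of f(x); scores assumed in [0, 1].

    Returns (predicted_class, certified), where certified=True means the check
    in Eq. 4 passes, i.e. no perturbation within the l_p(1) ball can change
    the prediction.
    """
    n, K = noisy_scores.shape
    e_hat = noisy_scores.mean(axis=0)
    # Hoeffding half-width for an eta-confidence bound on each expectation
    width = np.sqrt(np.log(2 * K / eta) / (2 * n))
    e_lb = np.clip(e_hat - width, 0.0, 1.0)
    e_ub = np.clip(e_hat + width, 0.0, 1.0)
    k = int(np.argmax(e_hat))
    runner_up = np.max(np.delete(e_ub, k))
    certified = e_lb[k] > np.exp(2 * eps_r) * runner_up + (1 + np.exp(eps_r)) * delta_r
    return k, certified
```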
This paper focuses on providing both differential privacy and adversarial robustness for machine learning models. The authors propose an algorithm called differentially private adversarial learning (DPAL) to achieve this goal. DPAL consists of two sub-models: (1) an autoencoder that extracts a feature representation; and (2) a classifier that takes the encoder's embedding and returns the predicted logits. The autoencoder is trained with a reconstruction loss, and the classifier with a loss similar to that used in adversarial learning.
SP:d26829f8a3a08c2935f10ab5871b847fd11c9887
Differential Privacy in Adversarial Learning with Provable Robustness
This paper proposes an algorithm with DP preservation for training adversarially robust neural networks. To preserve DP, a single-layer linear autoencoder with shared weights is learned to extract features from the training data, and its encoder is used to extract private features for training and inference of a deeper network. To enhance robustness against various attacks, adversarial examples crafted with such attacks are injected into the training set. Privacy-preservation guarantees are given for training the autoencoder and the inference network on both clean data and adversarial examples. Certified robustness of the smoothed classifier is also given, which depends on the privacy budget of each composed mechanism. Experimental evaluations of two small networks (2 and 3 convolutional layers) on the MNIST and CIFAR-10 datasets show improved accuracy on clean samples and under adversarial attacks, as well as higher certified accuracy, compared with four baseline privacy-preserving algorithms.
SP:d26829f8a3a08c2935f10ab5871b847fd11c9887
Maxmin Q-learning: Controlling the Estimation Bias of Q-learning
1 INTRODUCTION . Q-learning ( Watkins , 1989 ) is one of the most popular reinforcement learning algorithms . One of the reasons for this widespread adoption is the simplicity of the update . On each step , the agent updates its action value estimates towards the observed reward and the estimated value of the maximal action in the next state . This target represents the highest value the agent thinks it could obtain from the current state and action , given the observed reward . Unfortunately , this simple update rule has been shown to suffer from overestimation bias ( Thrun & Schwartz , 1993 ; van Hasselt , 2010 ) . The agent updates with the maximum over action values might be large because an action ’ s value actually is high , or it can be misleadingly high simply because of the stochasticity or errors in the estimator . With many actions , there is a higher probability that one of the estimates is large simply due to stochasticity and the agent will overestimate the value . This issue is particularly problematic under function approximation , and can significant impede the quality of the learned policy ( Thrun & Schwartz , 1993 ; Szita & Lőrincz , 2008 ; Strehl et al. , 2009 ) or even lead to failures of Q-learning ( Thrun & Schwartz , 1993 ) . More recently , experiments across several domains suggest that this overestimation problem is common ( Hado van Hasselt et al. , 2016 ) . Double Q-learning ( van Hasselt , 2010 ) is introduced to instead ensure underestimation bias . The idea is to maintain two unbiased independent estimators of the action values . The expected action value of estimator one is selected for the maximal action from estimator two , which is guaranteed not to overestimate the true maximum action value . Double DQN ( Hado van Hasselt et al. , 2016 ) , the extension of this idea to Q-learning with neural networks , has been shown to significantly improve performance over Q-learning . However , this is not a complete answer to this problem , because trading overestimation bias for underestimation bias is not always desirable , as we show in our experiments . 1Code is available at https : //github.com/qlan3/Explorer Several other methods have been introduced to reduce overestimation bias , without fully moving towards underestimation . Weighted Double Q-learning ( Zhang et al. , 2017 ) uses a weighted combination of the Double Q-learning estimate , which likely has underestimation bias , and the Q-learning estimate , which likely has overestimation bias . Bias-corrected Q-Learning ( Lee et al. , 2013 ) reduces the overestimation bias through a bias correction term . Ensemble Q-learning and Averaged Q-learning ( Anschel et al. , 2017 ) take averages of multiple action values , to both reduce the overestimation bias and the estimation variance . However , with a finite number of actionvalue functions , the average operation in these two algorithms will never completely remove the overestimation bias , as the average of several overestimation biases is always positive . Further , these strategies do not guide how strongly we should correct for overestimation bias , nor how to determine—or control—the level of bias . The overestimation bias also appears in the actor-critic setting ( Fujimoto et al. , 2018 ; Haarnoja et al. , 2018 ) . For example , Fujimoto et al . ( 2018 ) propose the Twin Delayed Deep Deterministic policy gradient algorithm ( TD3 ) which reduces the overestimation bias by taking the minimum value between two critics . 
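For reference, the update in Eq. 1 is only a few lines in the tabular setting; this sketch (with an assumed dictionary-of-arrays Q table) makes the max-over-actions target explicit.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    """One tabular Q-learning step for the sampled transition (s, a, r, s_next).
    Q is a dict mapping each state to an np.array of action values."""
    target = r + gamma * np.max(Q[s_next])      # Y_t^Q in Eq. 1
    Q[s][a] += alpha * (target - Q[s][a])
    return Q
```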
However , they do not provide a rigorous theoretical analysis for the effect of applying the minimum operator . There is also no theoretical guide for choosing the number of estimators such that the overestimation bias can be reduced to 0 . In this paper , we study the effects of overestimation and underestimation bias on learning performance , and use them to motivate a generalization of Q-learning called Maxmin Q-learning . Maxmin Q-learning directly mitigates the overestimation bias by using a minimization over multiple action-value estimates . Moreover , it is able to control the estimation bias varying from positive to negative which helps improve learning efficiency as we will show in next sections . We prove that , theoretically , with an appropriate number of action-value estimators , we are able to acquire an unbiased estimator with a lower approximation variance than Q-learning . We empirically verify our claims on several benchmarks . We study the convergence properties of our algorithm within a novel Generalized Q-learning framework , which is suitable for studying several of the recently proposed Q-learning variants . We also combine deep neural networks with Maxmin Q-learning ( Maxmin DQN ) and demonstrate its effectiveness in several benchmark domains . 2 PROBLEM SETTING . We formalize the problem as a Markov Decision Process ( MDP ) , ( S , A , P , r , γ ) , where S is the state space , A is the action space , P : S×A×S → [ 0 , 1 ] is the transition probabilities , r : S×A×S → R is the reward mapping , and γ ∈ [ 0 , 1 ] is the discount factor . At each time step t , the agent observes a state St ∈ S and takes an action At ∈ A and then transitions to a new state St+1 ∈ S according to the transition probabilities P and receives a scalar reward Rt+1 = r ( St , At , St+1 ) ∈ R. The goal of the agent is to find a policy π : S × A → [ 0 , 1 ] that maximizes the expected return starting from some initial state . Q-learning is an off-policy algorithm which attempts to learn the state-action valuesQ : S×A → R for the optimal policy . It tries to solve for Q∗ ( s , a ) = E [ Rt+1 + max a′∈A Q∗ ( St+1 , a ′ ) ∣∣∣ St = s , At = a ] The optimal policy is to act greedily with respect to these action values : from each s select a from arg maxa∈AQ ∗ ( s , a ) . The update rule for an approximation Q for a sampled transition st , at , rt+1 , st+1 is : Q ( st , at ) ← Q ( st , at ) + α ( Y Qt −Q ( st , at ) ) for Y Q t def = rt+1 + γ max a′∈A Q ( st+1 , a ′ ) ( 1 ) where α is the step-size . The transition can be generated off-policy , from any behaviour that sufficiently covers the state space . This algorithm is known to converge in the tabular setting ( Tsitsiklis , 1994 ) , with some limited results for the function approximation setting ( Melo & Ribeiro , 2007 ) . 3 UNDERSTANDING WHEN OVERESTIMATION BIAS HELPS AND HURTS . In this section , we briefly discuss the estimation bias issue , and empirically show that both overestimation and underestimation bias may improve learning performance , depending on the environment . This motivates our Maxmin Q-learning algorithm described in the next section , which allows us to flexibly control the estimation bias and reduce the estimation variance . The overestimation bias occurs since the target maxa′∈AQ ( st+1 , a′ ) is used in the Q-learning update . Because Q is an approximation , it is probable that the approximation is higher than the true value for one or more of the actions . 
The maximum over these estimators , then , is likely to be skewed towards an overestimate . For example , even unbiased estimates Q ( st+1 , a′ ) for all a′ , will vary due to stochasticity . Q ( st+1 , a′ ) = Q∗ ( st+1 , a′ ) + ea′ , and for some actions , ea′ will be positive . As a result , E [ maxa′∈AQ ( st+1 , a′ ) ] ≥ maxa′∈A E [ Q ( st+1 , a′ ) ] = maxa′∈AQ∗ ( st+1 , a′ ) . This overestimation bias , however , may not always be detrimental . And , further , in some cases , erring towards an underestimation bias can be harmful . Overestimation bias can help encourage exploration for overestimated actions , whereas underestimation bias might discourage exploration . In particular , we expect more overestimation bias in highly stochastic areas of the world ; if those highly stochastic areas correspond to high-value regions , then encouraging exploration there might be beneficial . An underestimation bias might actually prevent an agent from learning that a region is high-value . Alternatively , if highly stochastic areas also have low values , overestimation bias might cause an agent to over-explore a low-value region . We show this effect in the simple MDP , shown in Figure 1 . The MDP for state A has only two actions : Left and Right . It has a deterministic neutral reward for both the Left action and the Right action . The Left action transitions to state B where there are eight actions transitions to a terminate state with a highly stochastic reward . The mean of this stochastic reward is µ . By selecting µ > 0 , the stochastic region becomes high-value , and we expect overestimation bias to help and underestimation bias to hurt . By selecting µ < 0 , the stochastic region becomes low-value , and we expect overestimation bias to hurt and underestimation bias to help . We test Q-learning , Double Q-learning and our new algorithm Maxmin Q-learning in this environment . Maxmin Q-learning ( described fully in the next section ) uses N estimates of the action values in the targets . For N = 1 , it corresponds to Q-learning ; otherwise , it progresses from overestimation bias at N = 1 towards underestimation bias with increasing N . In the experiment , we used a discount factor γ = 1 ; a replay buffer with size 100 ; an -greedy behaviour with = 0.1 ; tabular action-values , initialized with a Gaussian distributionN ( 0 , 0.01 ) ; and a step-size of 0.01 for all algorithms . The results in Figure 2 verify our hypotheses for when overestimation and underestimation bias help and hurt . Double Q-learning underestimates too much for µ = +1 , and converges to a suboptimal policy . Q-learning learns the optimal policy the fastest , though for all values of N = 2 , 4 , 6 , 8 , Maxmin Q-learning does progress towards the optimal policy . All methods get to the optimal policy for µ = −1 , but now Double Q-learning reaches the optimal policy the fastest , and followed by Maxmin Q-learning with larger N . 4 MAXMIN Q-LEARNING . In this section , we develop Maxmin Q-learning , a simple generalization of Q-learning designed to control the estimation bias , as well as reduce the estimation variance of action values . The idea is to maintain N estimates of the action values , Qi , and use the minimum of these estimates in the Q-learning target : maxa′ mini∈ { 1 , ... , N } Qi ( s′ , a′ ) . For N = 1 , the update is simply Q-learning , and so likely has overestimation bias . 
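The inequality E[max_a' Q(s', a')] ≥ max_a' Q*(s', a') is easy to reproduce numerically. The following hedged sketch draws unbiased noisy estimates around true values of zero and compares the plain max-of-estimates target with the max-of-min-over-N target used by Maxmin Q-learning; for simplicity it ignores the data-splitting effect on τ discussed later in Corollary 1.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, trials, tau = 8, 4, 100_000, 1.0      # M actions, N estimators, U(-tau, tau) noise
true_q = np.zeros(M)                         # all true action values equal to 0

est = rng.uniform(-tau, tau, size=(trials, N, M)) + true_q   # unbiased estimates
q_single = est[:, 0, :]                      # one estimator per action (plain Q-learning)
q_min_n  = est.min(axis=1)                   # min over N estimators (Maxmin target)

print("true max value        :", true_q.max())                      # 0.0
print("E[max_a Q(a)]         :", q_single.max(axis=1).mean())       # > 0: overestimation
print("E[max_a min_i Q_i(a)] :", q_min_n.max(axis=1).mean())        # smaller, can be < 0
```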
As N increase , the overestimation decreases ; for some N > 1 , this maxmin estimator switches from an overestimate , in expectation , to an underestimate . We characterize the relationship between N and the expected estimation bias below in Theorem 1 . Note that Maxmin Q-learning uses a different mechanism to reduce overestimation bias than Double Qlearning ; Maxmin Q-learning with N = 2 is not Double Q-learning . The full algorithm is summarized in Algorithm 1 , and is a simple modification of Q-learning with experience replay . We use random subsamples of the observed data for each of the N estimators , to make them nearly independent . To do this training online , we keep a replay buffer . On each step , a random estimator i is chosen and updated using a mini-batch from the buffer . Multiple such updates can be performed on each step , just like in experience replay , meaning multiple estimators can be updated per step using different random mini-batches . In our experiments , to better match DQN , we simply do one update per step . Finally , it is also straightforward to incorporate target networks to get Maxmin DQN , by maintaining a target network for each estimator . We now characterize the relation between the number of action-value functions used in Maxmin Q-learning and the estimation bias of action values . For compactness , we write Qisa instead of Qi ( s , a ) . Each Qisa has random approximation error e i sa Qisa = Q ∗ sa + e i sa . We assume that eisa is a uniform random variable U ( −τ , τ ) for some τ > 0 . The uniform random assumption was used by Thrun & Schwartz ( 1993 ) to demonstrate bias in Q-learning , and reflects that non-negligible positive and negative eisa are possible . Notice that for N estimators with nsa samples , the τ will be proportional to some function of nsa/N , because the data will be shared amongst the N estimators . For the general theorem , we use a generic τ , and in the following corollary provide a specific form for τ in terms of N and nsa . Recall that M is the number of actions applicable at state s′ . Define the estimation bias ZMN for transition s , a , r , s′ to be ZMN def = ( r + γmax a′ Qmins′a′ ) − ( r + γmax a′ Q∗s′a′ ) = γ ( max a′ Qmins′a′ −max a′ Q∗s′a′ ) Algorithm 1 : Maxmin Q-learning Input : step-size α , exploration parameter > 0 , number of action-value functions N Initialize N action-value functions { Q1 , . . . , QN } randomly Initialize empty replay buffer D Observe initial state s while Agent is interacting with the Environment do Qmin ( s , a ) ← mink∈ { 1 , ... , N } Qk ( s , a ) , ∀a ∈ A Choose action a by -greedy based on Qmin Take action a , observe r , s′ Store transition ( s , a , r , s′ ) in D Select a subset S from { 1 , . . . , N } ( e.g. , randomly select one i to update ) for i ∈ S do Sample random mini-batch of transitions ( sD , aD , rD , s′D ) from D Get update target : YMQ ← rD + γmaxa′∈AQmin ( s′D , a′ ) Update action-value Qi : Qi ( sD , aD ) ← Qi ( sD , aD ) + α [ YMQ −Qi ( sD , aD ) ] end s← s′ end where Qminsa def = min i∈ { 1 , ... , N } Qisa = Q ∗ sa + min i∈ { 1 , ... , N } eisa We now show how the expected estimation bias E [ ZMN ] and the variance of Qminsa are related to the number of action-value functions N in Maxmin Q-learning . Theorem 1 Under the conditions stated above , ( i ) the expected estimation bias is E [ ZMN ] = γτ [ 1− 2tMN ] where tMN = M ( M − 1 ) · · · 1 ( M + 1N ) ( M − 1 + 1 N ) · · · ( 1 + 1 N ) . 
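A hedged Python sketch of Algorithm 1 for the tabular case follows. The environment interface (`env.reset()`, `env.step(a)` returning `(s_next, r, done)`), the single-transition "mini-batch", and the terminal-state handling are our own simplifications; the update itself mirrors the algorithm: ε-greedy behaviour on Q^min, one randomly chosen estimator updated per step toward the target r + γ max_a' Q^min(s', a').

```python
import random
import numpy as np
from collections import deque

def maxmin_q_learning(env, n_states, n_actions, N, alpha=0.01, gamma=0.99,
                      epsilon=0.1, buffer_size=100, steps=10_000):
    """Tabular Maxmin Q-learning (Algorithm 1): N action-value tables,
    with the elementwise minimum defining both behaviour and update target."""
    Q = [np.random.normal(0, 0.01, size=(n_states, n_actions)) for _ in range(N)]
    D = deque(maxlen=buffer_size)
    s = env.reset()
    for _ in range(steps):
        q_min = np.min(np.stack(Q), axis=0)               # Q^min(s, a) for all (s, a)
        if random.random() < epsilon:                     # eps-greedy on Q^min
            a = random.randrange(n_actions)
        else:
            a = int(np.argmax(q_min[s]))
        s_next, r, done = env.step(a)
        D.append((s, a, r, s_next, done))
        i = random.randrange(N)                           # update one randomly chosen estimator
        sb, ab, rb, snb, db = random.choice(D)            # sampled transition from the buffer
        q_min = np.min(np.stack(Q), axis=0)
        target = rb if db else rb + gamma * np.max(q_min[snb])
        Q[i][sb, ab] += alpha * (target - Q[i][sb, ab])
        s = env.reset() if done else s_next
    return Q
```

Maxmin DQN follows the same pattern, with each table replaced by a network (plus a target network) and the single sampled transition replaced by a mini-batch.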
E [ ZMN ] decreases as N increases : E [ ZM , N=1 ] = γτ M−1M+1 and E [ ZM , N→∞ ] = −γτ . ( ii ) V ar [ Qminsa ] = 4Nτ2 ( N + 1 ) 2 ( N + 2 ) . V ar [ Qminsa ] decreases as N increases : V ar [ Q min sa ] = τ2 3 for N=1 and V ar [ Q min sa ] = 0 for N →∞ . Theorem 1 is a generalization of the first lemma in Thrun & Schwartz ( 1993 ) ; we provide the proof in Appendix A as well as a visualization of the expected bias for varying M and N . This theorem shows that the average estimation bias E [ ZMN ] , decreases as N increases . Thus , we can control the bias by changing the number of estimators in Maxmin Q-learning . Specifically , the average estimation bias can be reduced from positive to negative as N increases . Notice that E [ ZMN ] = 0 when tMN = 12 . This suggests that by choosing N such that tMN ≈ 1 2 , we can reduce the bias to near 0 . Furthermore , V ar [ Qminsa ] decreases asN increases . This indicates that we can control the estimation variance of target action value throughN . We show just this in the following Corollary . The subtlety is that with increasing N , each estimator will receive less data . The fair comparison is to compare the variance of a single estimator that uses all of the data , as compared to the maxmin estimator which shares the samples across N estimators . We show that there is an N such that the variance is lower , which arises largely due to the fact that the variance of each estimator decreases linearly in n , but the τ parameter for each estimator only decreases at a square root rate in the number of samples . Corollary 1 Assuming the nsa samples are evenly allocated amongst the N estimators , then τ =√ 3σ2N/nsa where σ2 is the variance of samples for ( s , a ) and , for Qsa the estimator that uses all nsa samples for a single estimate , V ar [ Qminsa ] = 12N2 ( N + 1 ) 2 ( N + 2 ) V ar [ Qsa ] . Under this uniform random noise assumption , for N ≥ 8 , V ar [ Qminsa ] < V ar [ Qsa ] .
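To see how the bias of Theorem 1 moves from positive to negative with N, the closed-form t_{M,N} can be evaluated directly; the values γ = τ = 1 below are chosen only for illustration.

```python
import numpy as np

def t_mn(M, N):
    """t_{M,N} = prod_{m=1..M} m / (m + 1/N), as in Theorem 1."""
    m = np.arange(1, M + 1)
    return np.prod(m / (m + 1.0 / N))

def expected_bias(M, N, gamma=1.0, tau=1.0):
    return gamma * tau * (1.0 - 2.0 * t_mn(M, N))

M = 8
for N in range(1, 10):
    print(N, round(expected_bias(M, N), 3))
# The bias is positive for small N (overestimation), crosses zero near the N
# for which t_{M,N} is about 1/2, and approaches -gamma*tau as N grows
# (underestimation), matching E[Z_{M,N=1}] = gamma*tau*(M-1)/(M+1).
```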
This paper proposes a new Q-learning framework, Maxmin Q-learning, to address the overestimation bias of Q-learning. The main contributions are threefold: 1) it provides an illustrative example of overestimation/underestimation in Q-learning; 2) it generalizes Q-learning to Maxmin Q-learning by maintaining multiple independent Q estimators and combining them through a max-min operation in the update; and 3) it provides both theoretical and empirical analyses of the algorithm.
SP:e6e0533858e89d3cdf4265cb5a89ba6f4f9837bb
Maxmin Q-learning: Controlling the Estimation Bias of Q-learning
1 INTRODUCTION . Q-learning ( Watkins , 1989 ) is one of the most popular reinforcement learning algorithms . One of the reasons for this widespread adoption is the simplicity of the update . On each step , the agent updates its action value estimates towards the observed reward and the estimated value of the maximal action in the next state . This target represents the highest value the agent thinks it could obtain from the current state and action , given the observed reward . Unfortunately , this simple update rule has been shown to suffer from overestimation bias ( Thrun & Schwartz , 1993 ; van Hasselt , 2010 ) . The agent updates with the maximum over action values might be large because an action ’ s value actually is high , or it can be misleadingly high simply because of the stochasticity or errors in the estimator . With many actions , there is a higher probability that one of the estimates is large simply due to stochasticity and the agent will overestimate the value . This issue is particularly problematic under function approximation , and can significant impede the quality of the learned policy ( Thrun & Schwartz , 1993 ; Szita & Lőrincz , 2008 ; Strehl et al. , 2009 ) or even lead to failures of Q-learning ( Thrun & Schwartz , 1993 ) . More recently , experiments across several domains suggest that this overestimation problem is common ( Hado van Hasselt et al. , 2016 ) . Double Q-learning ( van Hasselt , 2010 ) is introduced to instead ensure underestimation bias . The idea is to maintain two unbiased independent estimators of the action values . The expected action value of estimator one is selected for the maximal action from estimator two , which is guaranteed not to overestimate the true maximum action value . Double DQN ( Hado van Hasselt et al. , 2016 ) , the extension of this idea to Q-learning with neural networks , has been shown to significantly improve performance over Q-learning . However , this is not a complete answer to this problem , because trading overestimation bias for underestimation bias is not always desirable , as we show in our experiments . 1Code is available at https : //github.com/qlan3/Explorer Several other methods have been introduced to reduce overestimation bias , without fully moving towards underestimation . Weighted Double Q-learning ( Zhang et al. , 2017 ) uses a weighted combination of the Double Q-learning estimate , which likely has underestimation bias , and the Q-learning estimate , which likely has overestimation bias . Bias-corrected Q-Learning ( Lee et al. , 2013 ) reduces the overestimation bias through a bias correction term . Ensemble Q-learning and Averaged Q-learning ( Anschel et al. , 2017 ) take averages of multiple action values , to both reduce the overestimation bias and the estimation variance . However , with a finite number of actionvalue functions , the average operation in these two algorithms will never completely remove the overestimation bias , as the average of several overestimation biases is always positive . Further , these strategies do not guide how strongly we should correct for overestimation bias , nor how to determine—or control—the level of bias . The overestimation bias also appears in the actor-critic setting ( Fujimoto et al. , 2018 ; Haarnoja et al. , 2018 ) . For example , Fujimoto et al . ( 2018 ) propose the Twin Delayed Deep Deterministic policy gradient algorithm ( TD3 ) which reduces the overestimation bias by taking the minimum value between two critics . 
However, they do not provide a rigorous theoretical analysis of the effect of applying the minimum operator. There is also no theoretical guide for choosing the number of estimators such that the overestimation bias can be reduced to zero. In this paper, we study the effects of overestimation and underestimation bias on learning performance, and use them to motivate a generalization of Q-learning called Maxmin Q-learning. Maxmin Q-learning directly mitigates the overestimation bias by using a minimization over multiple action-value estimates. Moreover, it is able to control the estimation bias, varying it from positive to negative, which helps improve learning efficiency as we show in the next sections. We prove that, theoretically, with an appropriate number of action-value estimators, we are able to acquire an unbiased estimator with a lower approximation variance than Q-learning. We empirically verify our claims on several benchmarks. We study the convergence properties of our algorithm within a novel Generalized Q-learning framework, which is suitable for studying several of the recently proposed Q-learning variants. We also combine deep neural networks with Maxmin Q-learning (Maxmin DQN) and demonstrate its effectiveness in several benchmark domains.

2 PROBLEM SETTING. We formalize the problem as a Markov Decision Process (MDP), $(\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $P : \mathcal{S}\times\mathcal{A}\times\mathcal{S} \to [0,1]$ gives the transition probabilities, $r : \mathcal{S}\times\mathcal{A}\times\mathcal{S} \to \mathbb{R}$ is the reward mapping, and $\gamma \in [0,1]$ is the discount factor. At each time step $t$, the agent observes a state $S_t \in \mathcal{S}$, takes an action $A_t \in \mathcal{A}$, and then transitions to a new state $S_{t+1} \in \mathcal{S}$ according to the transition probabilities $P$ and receives a scalar reward $R_{t+1} = r(S_t, A_t, S_{t+1}) \in \mathbb{R}$. The goal of the agent is to find a policy $\pi : \mathcal{S}\times\mathcal{A} \to [0,1]$ that maximizes the expected return starting from some initial state. Q-learning is an off-policy algorithm which attempts to learn the state-action values $Q : \mathcal{S}\times\mathcal{A} \to \mathbb{R}$ for the optimal policy. It tries to solve for
$$Q^*(s,a) = \mathbb{E}\Big[\, R_{t+1} + \gamma \max_{a'\in\mathcal{A}} Q^*(S_{t+1}, a') \;\Big|\; S_t = s, A_t = a \Big].$$
The optimal policy is to act greedily with respect to these action values: in each $s$ select $a$ from $\arg\max_{a\in\mathcal{A}} Q^*(s,a)$. The update rule for an approximation $Q$, for a sampled transition $(s_t, a_t, r_{t+1}, s_{t+1})$, is
$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\big(Y^Q_t - Q(s_t, a_t)\big), \qquad Y^Q_t \stackrel{\text{def}}{=} r_{t+1} + \gamma \max_{a'\in\mathcal{A}} Q(s_{t+1}, a'), \qquad (1)$$
where $\alpha$ is the step-size. The transition can be generated off-policy, from any behaviour that sufficiently covers the state space. This algorithm is known to converge in the tabular setting (Tsitsiklis, 1994), with some limited results for the function approximation setting (Melo & Ribeiro, 2007).

3 UNDERSTANDING WHEN OVERESTIMATION BIAS HELPS AND HURTS. In this section, we briefly discuss the estimation bias issue, and empirically show that both overestimation and underestimation bias may improve learning performance, depending on the environment. This motivates our Maxmin Q-learning algorithm described in the next section, which allows us to flexibly control the estimation bias and reduce the estimation variance. The overestimation bias occurs because the target $\max_{a'\in\mathcal{A}} Q(s_{t+1}, a')$ is used in the Q-learning update. Because $Q$ is an approximation, it is probable that the approximation is higher than the true value for one or more of the actions.
The maximum over these estimators, then, is likely to be skewed towards an overestimate. For example, even unbiased estimates $Q(s_{t+1}, a')$ for all $a'$ will vary due to stochasticity: $Q(s_{t+1}, a') = Q^*(s_{t+1}, a') + e_{a'}$, and for some actions $e_{a'}$ will be positive. As a result, $\mathbb{E}[\max_{a'\in\mathcal{A}} Q(s_{t+1}, a')] \ge \max_{a'\in\mathcal{A}} \mathbb{E}[Q(s_{t+1}, a')] = \max_{a'\in\mathcal{A}} Q^*(s_{t+1}, a')$. This overestimation bias, however, may not always be detrimental. And, further, in some cases, erring towards an underestimation bias can be harmful. Overestimation bias can help encourage exploration for overestimated actions, whereas underestimation bias might discourage it. In particular, we expect more overestimation bias in highly stochastic areas of the world; if those highly stochastic areas correspond to high-value regions, then encouraging exploration there might be beneficial. An underestimation bias might actually prevent an agent from learning that a region is high-value. Alternatively, if highly stochastic areas also have low values, overestimation bias might cause an agent to over-explore a low-value region. We show this effect in the simple MDP shown in Figure 1. State A has only two actions, Left and Right, both with a deterministic neutral reward. The Left action transitions to state B, which has eight actions, each transitioning to a terminal state with a highly stochastic reward whose mean is $\mu$. By selecting $\mu > 0$, the stochastic region becomes high-value, and we expect overestimation bias to help and underestimation bias to hurt. By selecting $\mu < 0$, the stochastic region becomes low-value, and we expect overestimation bias to hurt and underestimation bias to help. We test Q-learning, Double Q-learning and our new algorithm Maxmin Q-learning in this environment. Maxmin Q-learning (described fully in the next section) uses $N$ estimates of the action values in the targets. For $N = 1$, it corresponds to Q-learning; otherwise, it progresses from overestimation bias at $N = 1$ towards underestimation bias with increasing $N$. In the experiment, we used a discount factor $\gamma = 1$; a replay buffer of size 100; an $\epsilon$-greedy behaviour policy with $\epsilon = 0.1$; tabular action values, initialized from a Gaussian distribution $\mathcal{N}(0, 0.01)$; and a step-size of 0.01 for all algorithms. The results in Figure 2 verify our hypotheses for when overestimation and underestimation bias help and hurt. Double Q-learning underestimates too much for $\mu = +1$ and converges to a suboptimal policy. Q-learning learns the optimal policy the fastest, though for all values of $N = 2, 4, 6, 8$, Maxmin Q-learning does progress towards the optimal policy. All methods reach the optimal policy for $\mu = -1$, but now Double Q-learning gets there the fastest, followed by Maxmin Q-learning with larger $N$.

4 MAXMIN Q-LEARNING. In this section, we develop Maxmin Q-learning, a simple generalization of Q-learning designed to control the estimation bias, as well as reduce the estimation variance of action values. The idea is to maintain $N$ estimates of the action values, $Q^i$, and use the minimum of these estimates in the Q-learning target: $\max_{a'} \min_{i\in\{1,\dots,N\}} Q^i(s', a')$. For $N = 1$, the update is simply Q-learning, and so likely has overestimation bias.
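The following quick simulation (a sketch with an illustrative noise level and action count, not an experiment from the paper) makes this bias concrete: with all true action values equal to zero, the expected Q-learning target max_a Q(a) is clearly positive, and taking the min over N estimators before the max pushes it back down and eventually below zero.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8            # number of actions (as in state B of the example MDP)
sigma = 1.0      # std of the estimation noise (illustrative assumption)
trials = 100_000

def bias_of_target(N):
    """Average of max_a min_i Q_i(a) when every true action value is 0."""
    noise = rng.normal(0.0, sigma, size=(trials, N, M))   # trials x estimators x actions
    target = noise.min(axis=1).max(axis=1)                # min over estimators, max over actions
    return target.mean()

for N in [1, 2, 4, 8]:
    print(f"N={N}: estimated bias of the target = {bias_of_target(N):+.3f}")
# N=1 recovers the Q-learning target and shows a clear positive bias;
# the bias shrinks and eventually turns negative as N grows.
```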
As $N$ increases, the overestimation decreases; for some $N > 1$, this maxmin estimator switches from an overestimate, in expectation, to an underestimate. We characterize the relationship between $N$ and the expected estimation bias below in Theorem 1. Note that Maxmin Q-learning uses a different mechanism to reduce overestimation bias than Double Q-learning; Maxmin Q-learning with $N = 2$ is not Double Q-learning. The full algorithm is summarized in Algorithm 1, and is a simple modification of Q-learning with experience replay. We use random subsamples of the observed data for each of the $N$ estimators, to make them nearly independent. To do this training online, we keep a replay buffer. On each step, a random estimator $i$ is chosen and updated using a mini-batch from the buffer. Multiple such updates can be performed on each step, just like in experience replay, meaning multiple estimators can be updated per step using different random mini-batches. In our experiments, to better match DQN, we simply do one update per step. Finally, it is also straightforward to incorporate target networks to get Maxmin DQN, by maintaining a target network for each estimator.

Algorithm 1: Maxmin Q-learning
Input: step-size $\alpha$, exploration parameter $\epsilon > 0$, number of action-value functions $N$
Initialize $N$ action-value functions $\{Q^1, \dots, Q^N\}$ randomly
Initialize empty replay buffer $D$
Observe initial state $s$
while the agent is interacting with the environment do
    $Q^{\min}(s,a) \leftarrow \min_{k\in\{1,\dots,N\}} Q^k(s,a), \ \forall a \in \mathcal{A}$
    Choose action $a$ by $\epsilon$-greedy based on $Q^{\min}$
    Take action $a$, observe $r$, $s'$
    Store transition $(s, a, r, s')$ in $D$
    Select a subset $S$ from $\{1, \dots, N\}$ (e.g., randomly select one $i$ to update)
    for $i \in S$ do
        Sample a random mini-batch of transitions $(s_D, a_D, r_D, s'_D)$ from $D$
        Get update target: $Y^{MQ} \leftarrow r_D + \gamma \max_{a'\in\mathcal{A}} Q^{\min}(s'_D, a')$
        Update action value $Q^i$: $Q^i(s_D, a_D) \leftarrow Q^i(s_D, a_D) + \alpha\,[\,Y^{MQ} - Q^i(s_D, a_D)\,]$
    end for
    $s \leftarrow s'$
end while

We now characterize the relation between the number of action-value functions used in Maxmin Q-learning and the estimation bias of the action values. For compactness, we write $Q^i_{sa}$ instead of $Q^i(s,a)$. Each $Q^i_{sa}$ has a random approximation error $e^i_{sa}$: $Q^i_{sa} = Q^*_{sa} + e^i_{sa}$. We assume that $e^i_{sa}$ is a uniform random variable $U(-\tau, \tau)$ for some $\tau > 0$. The uniform random assumption was used by Thrun & Schwartz (1993) to demonstrate bias in Q-learning, and reflects that non-negligible positive and negative $e^i_{sa}$ are possible. Notice that for $N$ estimators with $n_{sa}$ samples, $\tau$ will be proportional to some function of $n_{sa}/N$, because the data will be shared amongst the $N$ estimators. For the general theorem, we use a generic $\tau$, and in the following corollary provide a specific form for $\tau$ in terms of $N$ and $n_{sa}$. Recall that $M$ is the number of actions applicable at state $s'$. Define the estimation bias $Z_{MN}$ for a transition $(s, a, r, s')$ to be
$$Z_{MN} \stackrel{\text{def}}{=} \Big(r + \gamma\max_{a'} Q^{\min}_{s'a'}\Big) - \Big(r + \gamma\max_{a'} Q^*_{s'a'}\Big) = \gamma\Big(\max_{a'} Q^{\min}_{s'a'} - \max_{a'} Q^*_{s'a'}\Big),$$
where $Q^{\min}_{sa} \stackrel{\text{def}}{=} \min_{i\in\{1,\dots,N\}} Q^i_{sa} = Q^*_{sa} + \min_{i\in\{1,\dots,N\}} e^i_{sa}$. We now show how the expected estimation bias $\mathbb{E}[Z_{MN}]$ and the variance of $Q^{\min}_{sa}$ are related to the number of action-value functions $N$ in Maxmin Q-learning. Theorem 1. Under the conditions stated above, (i) the expected estimation bias is $\mathbb{E}[Z_{MN}] = \gamma\tau\,[1 - 2t_{MN}]$, where
$$t_{MN} = \frac{M(M-1)\cdots 1}{\left(M+\tfrac{1}{N}\right)\left(M-1+\tfrac{1}{N}\right)\cdots\left(1+\tfrac{1}{N}\right)}.$$
$\mathbb{E}[Z_{MN}]$ decreases as $N$ increases: $\mathbb{E}[Z_{M,N=1}] = \gamma\tau\,\frac{M-1}{M+1}$ and $\mathbb{E}[Z_{M,N\to\infty}] = -\gamma\tau$. (ii) $\mathrm{Var}[Q^{\min}_{sa}] = \frac{4N\tau^2}{(N+1)^2(N+2)}$. $\mathrm{Var}[Q^{\min}_{sa}]$ decreases as $N$ increases: $\mathrm{Var}[Q^{\min}_{sa}] = \frac{\tau^2}{3}$ for $N = 1$ and $\mathrm{Var}[Q^{\min}_{sa}] \to 0$ for $N \to \infty$.

Theorem 1 is a generalization of the first lemma in Thrun & Schwartz (1993); we provide the proof in Appendix A, as well as a visualization of the expected bias for varying $M$ and $N$. This theorem shows that the average estimation bias $\mathbb{E}[Z_{MN}]$ decreases as $N$ increases. Thus, we can control the bias by changing the number of estimators in Maxmin Q-learning. Specifically, the average estimation bias can be reduced from positive to negative as $N$ increases. Notice that $\mathbb{E}[Z_{MN}] = 0$ when $t_{MN} = \frac{1}{2}$. This suggests that by choosing $N$ such that $t_{MN} \approx \frac{1}{2}$, we can reduce the bias to near zero. Furthermore, $\mathrm{Var}[Q^{\min}_{sa}]$ decreases as $N$ increases. This indicates that we can control the estimation variance of the target action value through $N$. We show just this in the following corollary. The subtlety is that with increasing $N$, each estimator will receive less data. The fair comparison is between the variance of a single estimator that uses all of the data and the maxmin estimator, which shares the samples across $N$ estimators. We show that there is an $N$ such that the variance is lower, which arises largely because the variance of each estimator decreases linearly in $n$, but the $\tau$ parameter for each estimator only decreases at a square-root rate in the number of samples. Corollary 1. Assuming the $n_{sa}$ samples are evenly allocated amongst the $N$ estimators, then $\tau = \sqrt{3\sigma^2 N / n_{sa}}$, where $\sigma^2$ is the variance of samples for $(s,a)$, and, for $Q_{sa}$ the estimator that uses all $n_{sa}$ samples for a single estimate,
$$\mathrm{Var}[Q^{\min}_{sa}] = \frac{12N^2}{(N+1)^2(N+2)}\,\mathrm{Var}[Q_{sa}].$$
Under this uniform random noise assumption, for $N \ge 8$, $\mathrm{Var}[Q^{\min}_{sa}] < \mathrm{Var}[Q_{sa}]$.
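Putting Algorithm 1 into code, the sketch below is a minimal tabular version. The environment interface (env.reset() / env.step(a) returning (next_state, reward, done)), the hyperparameter values, and the one-estimator-per-step schedule are assumptions for illustration; the paper's Maxmin DQN additionally uses neural networks with one target network per estimator.

```python
import random
from collections import deque

import numpy as np

def maxmin_q_learning(env, num_states, num_actions, N=4, alpha=0.01,
                      gamma=0.99, epsilon=0.1, buffer_size=100,
                      batch_size=8, num_steps=10_000):
    """Tabular Maxmin Q-learning with experience replay (Algorithm 1 sketch)."""
    Q = np.random.normal(0.0, 0.01, size=(N, num_states, num_actions))
    buffer = deque(maxlen=buffer_size)
    s = env.reset()                                  # assumed interface
    for _ in range(num_steps):
        q_min = Q.min(axis=0)                        # min over the N estimators
        if random.random() < epsilon:                # epsilon-greedy on Q^min
            a = random.randrange(num_actions)
        else:
            a = int(np.argmax(q_min[s]))
        s_next, r, done = env.step(a)                # assumed interface
        buffer.append((s, a, r, s_next, done))

        i = random.randrange(N)                      # pick one estimator to update
        batch = random.sample(list(buffer), min(batch_size, len(buffer)))
        for (sb, ab, rb, sb_next, db) in batch:
            q_min_next = Q.min(axis=0)[sb_next]
            target = rb + (0.0 if db else gamma * q_min_next.max())
            Q[i, sb, ab] += alpha * (target - Q[i, sb, ab])

        s = env.reset() if done else s_next
    return Q.min(axis=0)
```

With N = 1 this reduces to Q-learning with experience replay, matching the N = 1 case discussed above.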
The paper tackles the problem of bias in target Q-values when performing Q-learning. The paper proposes a technique for computing target Q-values, by first taking the min over an ensemble of learned Q-values and then taking the max over actions. The paper provides some theoretical properties of this technique: (1) the bias of the estimator can be somewhat controlled by the size of the ensemble; (2) performing Q-learning with these target values is convergent. Experimental results show that the proposed technique can provide performance improvement on a number of tasks.
SP:e6e0533858e89d3cdf4265cb5a89ba6f4f9837bb
Improved Generalization Bound of Permutation Invariant Deep Neural Networks
$\sqrt{n!}$, where $n$ is the number of permuting coordinates of the data. Moreover, we prove that the approximation power of invariant deep neural networks can achieve an optimal rate, even though the networks are restricted to be invariant. To achieve these results, we develop several new proof techniques, such as a correspondence with a fundamental domain and a scale-sensitive metric entropy.

1 INTRODUCTION. A learning task with permutation invariant data frequently appears in various situations in data analysis. A typical example is learning on sets such as a point cloud, namely, the data are given as a set of points and permuting the points in the data does not change the result of its prediction. Another example is learning with graphs which contain a huge number of edges and nodes. Such tasks are very common in various scientific fields (Ntampaka et al., 2016; Ravanbakhsh et al., 2016; Faber et al., 2016); hence, numerous deep neural networks have been developed to handle such data with invariance (Zaheer et al., 2017; Li et al., 2018a; Su et al., 2018; Li et al., 2018b; Yang et al., 2018; Xu et al., 2018). These methods show that their networks for invariance can greatly improve accuracy with a limited size of networks and data. An important question with invariant data is to understand the reason for this empirically high accuracy from a theoretical perspective. Since invariant data are high-dimensional in general, learning theory suggests that the high dimensionality reduces generalization performance. However, the methods for invariant data achieve better accuracy, which appears to contradict this theoretical principle. Though several theoretical studies (Maron et al. (2019) and Sannai et al. (2019)) prove a universal approximation property of neural networks for invariant data and guarantee that invariant deep neural networks have sufficient expressive power, the generalization power of invariant deep neural networks is left as an open question. In this paper, we prove a theoretical bound for the generalization of invariant deep neural networks. To give an overview of our result, we provide a simplified version as follows. We consider a supervised-learning problem with $m$ pairs of observations $(X_i, Y_i)$, where each $X_i$ is regarded as a $p$-dimensional vector that can be divided into $n$ coordinates, each of dimension $D = p/n$. Also, let $f_{S_n}$ denote a function given by a deep neural network which satisfies the invariance property that $f(x) = f(\sigma \cdot x)$ holds for any $x \in \mathbb{R}^{n\times D}$, where $\sigma$ is an arbitrary permutation of the $D$-dimensional coordinates in $x$. Also, we define $R_m(f) = m^{-1}\sum_{i=1}^{m} L(Y_i, f(X_i))$ and $R(f) = \mathbb{E}[L(Y, f(X))]$ as the empirical and expected values of the loss $L(Y, f(X))$. Then, we show the following: Theorem 1 (Informal version of Theorem 2). Let $f_{S_n}$ be a function given by a deep neural network which takes $p$-dimensional inputs and is invariant to any permutation of the $n$ coordinates. Then, for sufficiently small $\varepsilon > 0$, we obtain
$$R(f_{S_n}) \le R_m(f_{S_n}) + \sqrt{\frac{C}{n!\, m\, \varepsilon^{p}}} + O(\log(1/\varepsilon)),$$
with probability at least $1 - O(\varepsilon)$. Here, $C > 0$ is a constant independent of $m$ and $n$. As a consequence of Theorem 1, the generalization bound is improved by $\sqrt{n!}$ thanks to the invariance property. Since the number of coordinates $n$ is huge in practice, e.g., there are $n \ge 1{,}000$ points in the point cloud data in Zaheer et al. (2017) and hence $\sqrt{n!}$
$\ge 10^{1000}$ holds, we show that the derived generalization bound is greatly improved by invariance. Further, we also derive a rate of approximation for neural networks with invariance (Theorem 4) and its optimality; thus we show that the invariance property does not reduce the expressive power of deep neural networks. From a technical aspect, we develop mainly three proof techniques to obtain the improved bound in Theorem 1. Firstly, we introduce the notion of a fundamental domain to handle invariance of functions and evaluate the complexity of the domain (Lemma 1). Secondly, we show a one-to-one correspondence between a function given by an invariant deep neural network and a function on the fundamental domain (Proposition 2). Thirdly, we develop a scale-sensitive covering number to control the volume of the class of invariant functions given by neural networks (Proposition 5). Based on these techniques, we can connect the generalization analysis to the invariance of deep neural networks. We summarize the contributions of this paper as follows: • We investigate the generalization bound of deep neural networks which are invariant to permutations of $n$ coordinates, and show that the bound is improved by $\sqrt{n!}$. • We derive a rate of approximation of invariant deep neural networks. The result shows that the approximation rate is optimal. • We develop several proof techniques to achieve the bound, such as a complexity analysis of a fundamental domain and a scale-sensitive metric entropy.

1.1 NOTATION. For a vector $b \in \mathbb{R}^D$, its $d$-th element is denoted by $b_d$. Also, $b_{-d} := (b_1, \dots, b_{d-1}, b_{d+1}, \dots, b_D) \in \mathbb{R}^{D-1}$ is the vector without $b_d$. $\|b\|_q := (\sum_{d=1}^{D} b_d^q)^{1/q}$ is the $q$-norm for $q \in [0, \infty]$. For a tensor $A \in \mathbb{R}^{D_1\times D_2}$, the $(d_1, d_2)$-th element of $A$ is written as $A_{d_1, d_2}$. For a function $f : \Omega \to \mathbb{R}$ on a set $\Omega$, $\|f\|_{L^q} := (\int_\Omega |f(x)|^q\, dx)^{1/q}$ denotes the $L^q$-norm for $q \in [0, \infty]$. For a subset $\Lambda \subset \Omega$, $f|_\Lambda$ denotes the restriction of $f$ to $\Lambda$. For an integer $z$, $z! = \prod_{j=1}^{z} j$ denotes the factorial of $z$. For a set $\Omega$ with a norm $\|\cdot\|$, $N(\varepsilon, \Omega, \|\cdot\|) := \inf\{ N : \exists\, \{\omega_j\}_{j=1}^{N} \text{ s.t. } \bigcup_{j=1}^{N}\{\omega : \|\omega - \omega_j\| \le \varepsilon\} \supset \Omega \}$ is the covering number of $\Omega$ at scale $\varepsilon > 0$. For a set $\Omega$, $\mathrm{id}_\Omega$ or $\mathrm{id}$ denotes the identity map on $\Omega$, namely $\mathrm{id}_\Omega(x) = x$ for any $x \in \Omega$. For a subset $\Delta \subset \mathbb{R}^n$, $\mathrm{int}(\Delta)$ denotes the set of inner points of $\Delta$.

2 PROBLEM SETTING. 2.1 INVARIANT DEEP NEURAL NETWORK. We first fix the set of permutations $S_n$ considered in this paper. Consider $x \in \mathbb{R}^{n\times D}$, where $n$ is the number of coordinates in $x$ and $D$ is the dimension of each coordinate. Then, the action of $\sigma \in S_n$ on $x$ is defined as $(\sigma \cdot x)_{i,d} = x_{\sigma^{-1}(i),\, d}$ for $i = 1, \dots, n$ and $d = 1, \dots, D$; here, $\sigma$ is a permutation of the indexes $i$. Also, we define the invariance property for general functions. Definition 1 ($S_n$-Invariant/Equivariant Function). For a set $X \subset \mathbb{R}^{n\times D}$, we say that a map $f : X \to \mathbb{R}^M$ is • $S_n$-invariant (or simply invariant) if $f(\sigma \cdot x) = f(x)$ for any $\sigma \in S_n$ and any $x \in X$, • $S_n$-equivariant (or simply equivariant) if there is an $S_n$-action on $\mathbb{R}^M$ and $f(\sigma \cdot x) = \sigma \cdot f(x)$ for any $\sigma \in S_n$ and any $x \in X$. In this paper, we mainly treat fully connected deep neural networks with the ReLU activation function. The ReLU activation function is defined by $\mathrm{ReLU}(x) = \max(0, x)$. Deep neural networks are built by stacking blocks which consist of a linear map and a ReLU activation. More formally, each block is a function $Z_i : \mathbb{R}^{d_i} \to \mathbb{R}^{d_{i+1}}$ defined by $Z_i(x) = \mathrm{ReLU}(W_i x + b_i)$, where $W_i \in \mathbb{R}^{d_{i+1}\times d_i}$ and $b_i \in \mathbb{R}^{d_{i+1}}$ for $i = 1, \ldots$
, H . Here , H is a depth of the deep neural network and di is a width of the i-th layer . An output of deep neural networks is formulated as f ( x ) : = ZH ◦ ZH−1 . . . Z2 ◦ Z1 ( x ) . ( 1 ) Let FDNN be a set of functions by deep neural networks . We also consider an invariant deep neural network defined as follows : Definition 2 ( Invariant Deep Neural Network ) . f ∈ FDNN is a function by a Sn-invariant deep neural network , if f is a Sn-invariant function . Let FSnDNN ⊂ FDNN be a set of functions by Sn-invariant deep neural networks . The definition is a general notion and it contains several explicit invariant deep neural networks . We provide several representative examples as follow . Example 1 ( Deep Sets ) . Zaheer et al . ( 2017 ) develops an architecture for invariant deep neural networks by utilizing layer-wise equivariance . Their architecture consists of equivariant layers ` 1 , ... , ` j , an invariant linear layer h , and a fully-connected layer f ′ . For each ` i.i = 1 , .. , j , its parameter matrix is defined as Wi = λI + γ ( 11 > ) , λ , γ ∈ R,1 = [ 1 , ... , 1 ] > , which makes ` i as a layer-wise equivariant function . They show that f = f ′ ◦ h ◦ ` j ◦ · · · ` 1 is an invariant function . Its illustration is provided in Figure 1 . Example 2 ( Invariant Feature Extraction ) . Let e is a mapping for invariant feature extraction which will be explicitly constructed by deep neural networks in Proposition 2 . Then , a function f = g ◦ e where g is a function by deep neural networks with a restricted domain . Figure 2 provides its image . 2.2 LEARNING PROBLEM WITH INVARIANT NETWORK . Problem formulation : Let I = [ 0 , 1 ] n×D be an input space with dimension p = dD . Let Y be an output space . Also , let L : Y × Y → R be a loss function which satisfies supy , y′∈Y |L ( y , y′ ) | ≤ 1 and 1-Lipschitz continuous . Let P ∗ ( x , y ) be the true unknown distribution on I×Y , and for f : I → Y , R ( f ) = E ( X , Y ) ∼P∗ [ L ( f∗ ( X ) , Y ) ] be the expected risk of f . Also , suppose we observe a training datasetDm : = { ( X1 , Y1 ) , ... , ( Xm , Ym ) } of sizem . LetRm ( f ) : = m−1 ∑m i=1 L ( f ( Xi ) , Yi ) be the empirical risk of f . A goal of this study to investigate the expected lossR ( f ) with a function f from a set of functions as a hypothesis set . Learning with Invariant Network : We consider learning with a hypothesis set by invariant deep networks . Namely , we fix an architecture of deep neural networks preserves fSn ∈ FSnDNN to be an invariant function . Then , we evaluate the expected loss R ( fSn ) .
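As a concrete illustration of Example 1 above, the following sketch (a minimal NumPy version; the layer count, the scalar parameter values, and the final sum-pooling plus linear head are assumptions for illustration) builds a Deep Sets style network from equivariant layers with weights of the form W = λI + γ11ᵀ and checks numerically that its output is unchanged under a random permutation of the n coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

def equivariant_layer(x, lam, gamma):
    """Permutation-equivariant layer: ReLU((lam*I + gamma*11^T) x)."""
    # (lam*I + gamma*11^T) x  =  lam*x + gamma * (column sums broadcast over rows)
    return np.maximum(0.0, lam * x + gamma * x.sum(axis=0, keepdims=True))

def invariant_net(x, params):
    """Stack of equivariant layers, then sum pooling (invariant) and a linear head."""
    h = x
    for lam, gamma in params["equivariant"]:
        h = equivariant_layer(h, lam, gamma)
    pooled = h.sum(axis=0)          # invariant pooling over the n coordinates
    return float(params["w"] @ pooled + params["b"])

n, D = 5, 3
x = rng.normal(size=(n, D))
params = {"equivariant": [(0.7, 0.1), (1.2, -0.3)],
          "w": rng.normal(size=D), "b": 0.5}

perm = rng.permutation(n)
print(invariant_net(x, params), invariant_net(x[perm], params))  # equal up to float error
```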
This paper derives a generalization bound for permutation invariant networks. The main idea is to prove that the bound is inversely proportional to the square root of the number of possible permutations of the input. The key result is Theorem 3, which bounds the covering number of a neural network (defined under an approximation control bound, Thm 4) using the number of permutations. The paper proves the theorem by showing that the space of input permutations can be reduced to group actions over a fundamental domain, and by deriving a bound for the covering number of the fundamental domain (Lemma 1), which is then extended to the neural network setting. For the permutation invariance setting, the fundamental domain is obtained via the sorting operator.
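A small sketch of that last point (an illustrative assumption about how such a canonicalization could look, not code from the paper): sorting the n coordinates by a fixed key maps every permutation orbit {σ·x} to a single representative, so any function applied after the sort is automatically S_n-invariant.

```python
import numpy as np

def to_fundamental_domain(x):
    """Map x in R^{n x D} to a canonical representative of its permutation orbit
    by sorting the n rows lexicographically (first column as primary key)."""
    order = np.lexsort(x.T[::-1])
    return x[order]

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 3))
sigma = rng.permutation(4)
assert np.allclose(to_fundamental_domain(x), to_fundamental_domain(x[sigma]))
```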
SP:62d218e9619a8a076aa2ef20f64bb26eb8516591
Improved Generalization Bound of Permutation Invariant Deep Neural Networks
This paper presents a derivation of a generalization bound for neural networks designed specifically to deal with permutation invariant data (such as point clouds). The heart of the contribution is that the bound includes a 1/n! (i.e. 1 / (n-factorial)) factor in the major term, where n is the number of permutable elements in a data example (think: number of points in a point cloud). This term goes some way towards making the bound tight.
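For a sense of scale of that factor (a quick numerical check, not from the paper), sqrt(n!) can be evaluated through the log-gamma function to avoid overflow; for n = 1000 it already exceeds 10^1283, consistent with the paper's claim that sqrt(n!) ≥ 10^1000 for point clouds with n ≥ 1000 points.

```python
import math

def log10_sqrt_factorial(n):
    # log10( sqrt(n!) ) via the log-gamma function, since n! itself overflows
    return 0.5 * math.lgamma(n + 1) / math.log(10)

print(log10_sqrt_factorial(1000))   # ~1283.8, i.e. sqrt(1000!) ~ 10^1283
```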
SP:62d218e9619a8a076aa2ef20f64bb26eb8516591
Learning Representations in Reinforcement Learning: an Information Bottleneck Approach
1 Introduction . In training a reinforcement learning algorithm , an agent interacts with the environment , explores the ( possibly unknown ) state space , and learns a policy from the exploration sample data . In many cases , such samples are quite expensive to obtain ( e.g. , requires interactions with the physical environment ) . Hence , improving the sample efficiency of the learning algorithm is a key problem in RL and has been studied extensively in the literature . Popular techniques include experience reuse/replay , which leads to powerful off-policy algorithms ( e.g. , ( Mnih et al. , 2013 ; Silver et al. , 2014 ; Van Hasselt et al. , 2015 ; Nachum et al. , 2018a ; Espeholt et al. , 2018 ) ) , and model-based algorithms ( e.g. , ( Hafner et al. , 2018 ; Kaiser et al. , 2019 ) ) . Moreover , it is known that effective representations can greatly reduce the sample complexity in RL . This can be seen from the following motivating example : In the environment of a classical Atari game : Seaquest , it may take dozens of millions samples to converge to an optimal policy when the input states are raw images ( more than 28,000 dimensions ) , while it takes less samples when the inputs are 128-dimension pre-defined RAM data ( Sygnowski & Michalewski , 2016 ) . Clearly , the RAM data contain much less redundant information irrelevant to the learning process than the raw images . Thus , we argue that an efficient representation is extremely crucial to the sample efficiency . In this paper , we try to improve the sample efficiency in RL from the perspective of representation learning using the celebrated information bottleneck framework ( Tishby et al. , 2000 ) . In standard deep learning , the experiments in ( Shwartz-Ziv & Tishby , 2017 ) show that during the training process , the neural network first ” remembers ” the inputs by increasing the mutual information between the inputs and the representation variables , then compresses the inputs to efficient representation related to the learning task by discarding redundant information from inputs ( decreasing the mutual information between inputs and representation variables ) . We call this phenomena ” information extraction-compression process ” ( information E-C process ) . Our experiments shows that , similar to the results shown in ( Shwartz-Ziv & Tishby , 2017 ) , we first ( to the best of our knowledge ) observe the information extraction-compression phenomena in the context of deep RL ( we need to use MINE ( Belghazi et al. , 2018 ) for estimating the mutual information ) . This observation motivates us to adopt the information bottleneck ( IB ) framework in reinforcement learning , in order to accelerate the extraction-compression process . The IB framework is intended to explicitly enforce RL agents to learn an efficient representation , hence improving the sample efficiency , by discarding irrelevant information from raw input data . Our technical contributions can be summarized as follows : 1 . We observe that the ” information extraction-compression process ” also exists in the context of deep RL ( using MINE ( Belghazi et al. , 2018 ) to estimate the mutual information ) . 2 . We derive the optimization problem of our information bottleneck framework in RL . In order to solve the optimization problem , we construct a lower bound and use the Stein variational gradient method developed in ( Liu et al. , 2017 ) to optimize the lower bound . 3 . 
We show that our framework can accelerate the information extraction-compression process . Our experimental results also show that combining actor-critic algorithms ( such as A2C , PPO ) with our framework is more sample-efficient than their original versions . 4 . We analyze the relationship between our framework and MINE , through this relationship , we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound . Finally , we note that our IB method is orthogonal to other methods for improving the sample efficiency , and it is an interesting future work to incorporate it in other off-policy and model-based algorithms . 2 Related Work . Information bottleneck framework was first introduced in ( Tishby et al. , 2000 ) . They solve the framework by iterative Blahut Arimoto algorithm , which is infeasible to apply to deep neural networks . ( Shwartz-Ziv & Tishby , 2017 ) tries to open the black box of deep learning from the perspective of information bottleneck , though the method they use to compute the mutual information is not precise . ( Alemi et al. , 2016 ) derives a variational information bottleneck framework , yet apart from adding prior target distribution of the representation distribution P ( Z|X ) , they also assume that P ( Z|X ) itself must be a Gaussian distribution , which limits the capabilities of the representation function . ( Peng et al. , 2018 ) extends this framework to variational discriminator bottleneck to improve GANs ( Goodfellow et al. , 2014 ) , imitation learning and inverse RL . As for improving sample-efficiency , ( Mnih et al. , 2013 ; Van Hasselt et al. , 2015 ; Nachum et al. , 2018a ) mainly utilize the experience-reuse . Besides experience-reuse , ( Silver et al. , 2014 ; Fujimoto et al. , 2018 ) tries to learn a deterministic policy , ( Espeholt et al. , 2018 ) seeks to mitigate the delay of off-policy . ( Hafner et al. , 2018 ; Kaiser et al. , 2019 ) learn the environment model . Some other powerful techniques can be found in ( Botvinick et al. , 2019 ) . State representation learning has been studied extensively , readers can find some classic works in the overview ( Lesort et al. , 2018 ) . Apart from this overview , ( Nachum et al. , 2018b ) shows a theoretical foundation of maintaining the optimality of representation space . ( Bellemare et al. , 2019 ) proposes a new perspective on representation learning in RL based on geometric properties of the space of value function . ( Abel et al. , 2019 ) learns representation via information bottleneck ( IB ) in imitation/apprenticeship learning . To the best of our knowledge , there is no work that intends to directly use IB in basic RL algorithms . 3 Preliminaries . A Markov decision process ( MDP ) is a tuple , ( X , A , R , P , µ ) , where X is the set of states , A is the set of actions , R : X ×A×X → R is the reward function , P : X ×A×X → [ 0 , 1 ] is the transition probability function ( where P ( X ′ |X , a ) is the probability of transitioning to state X ′ given that the previous state is X and the agent took action a in X ) , and µ : X → [ 0 , 1 ] is the starting state distribution . A policy π : X → P ( A ) is a map from states to probability distributions over actions , with π ( a|X ) denoting the probability of choosing action a in state X . In reinforcement learning , we aim to select a policy π which maximizes K ( π ) = Eτ∼π [ ∑∞ t=0 γ tR ( Xt , at , Xt+1 ) ] , with a slight abuse of notation we denote R ( Xt , at , Xt+1 ) = rt . 
Here γ ∈ [ 0 , 1 ) is a discount factor , τ denotes a trajectory ( X0 , a0 , X1 , a1 , ... ) . Define the state value function as V π ( X ) = Eτ∼π [ ∑∞ t=0 γ trt|X0 = X ] , which is the expected return by policy π in state X . And the state-action value function Qπ ( X , a ) = Eτ∼π [ ∑∞ t=0 γ trt|X0 = X , a0 = a ] is the expected return by policy π after taking action a in state X. Actor-critic algorithms take the advantage of both policy gradient methods and valuefunction-based methods such as the well-known A2C ( Mnih et al. , 2016 ) . Specifically , in the case that policy π ( a|X ; θ ) is parameterized by θ , A2C uses the following equation to approximate the real policy gradient ∇θK ( π ) = ∇θĴ ( θ ) : ∇θĴ ( θ ) ≈ ∞∑ t=0 ∇θ [ log π ( at|Xt ; θ ) ( Rt − b ( Xt ) ) + α2H ( π ( ·|Xt ) ) ] = ∞∑ t=0 ∇θĴ ( Xt ; θ ) ( 1 ) where Rt = ∑∞ i=0 γ irt+i is the accumulated return from time step t , H ( p ) is the entropy of distribution p and b ( Xt ) is a baseline function , which is commonly replaced by V π ( Xt ) . A2C also includes the minimization of the mean square error between Rt and value function V π ( Xt ) . Thus in practice , the total objective function in A2C can be written as : J ( θ ) ≈ ∞∑ t=0 log π ( at|Xt ; θ ) ( Rt − V π ( Xt ) ) − α1 ∥Rt − V π ( Xt ) ∥2 + α2H ( π ( ·|Xt ) ) = ∞∑ t=0 J ( Xt ; θ ) ( 2 ) where α1 , α2 are two coefficients . In the context of representation learning in RL , J ( Xt ; θ ) ( including V π ( Xt ) and Qπ ( Xt , at ) ) can be replaced by J ( Zt ; θ ) where Zt is a learnable low-dimensional representation of state Xt . For example , given a representation function Z ∼ Pϕ ( ·|X ) with parameter ϕ , define V π ( Zt ; Xt , ϕ ) |Zt∼Pϕ ( ·|Xt ) = V π ( Xt ) . For simplicity , we write V π ( Zt ; Xt , ϕ ) |Zt∼Pϕ ( ·|Xt ) as V π ( Zt ) . 4 Framework . 4.1 Information Bottleneck in Reinforcement Learning The information bottleneck framework is an information theoretical framework for extracting relevant information , or yielding a representation , that an input X ∈ X contains about an output Y ∈ Y . An optimal representation of X would capture the relevant factors and compress X by diminishing the irrelevant parts which do not contribute to the prediction of Y . In a Markovian structure X → Z → Y where X is the input , Z is representation of X and Y is the label of X , IB seeks an embedding distribution P ⋆ ( Z|X ) such that : P ⋆ ( Z|X ) = arg max P ( Z|X ) I ( Y , Z ) − βI ( X , Z ) = arg max P ( Z|X ) H ( Y ) −H ( Y |Z ) − βI ( X , Z ) = arg max P ( Z|X ) −H ( Y |Z ) − βI ( X , Z ) ( 3 ) for every X ∈ X , which appears as the standard cross-entropy loss1 in supervised learning with a MI-regularizer , β is a coefficient that controls the magnitude of the regularizer . Next we derive an information bottleneck framework in reinforcement learning . Just like the label Y in the context of supervised learning as showed in ( 3 ) , we assume the supervising signal Y in RL to be the accurate value Rt of a specific state Xt for a fixed policy π , which can be approximated by an n-step bootstrapping function Yt = Rt = ∑n−2 i=0 γ irt+i + γn−1V π ( Zt+n−1 ) in practice . Let P ( Y |Z ) be the following distribution : P ( Yt|Zt ) ∝ exp ( −α ( Rt − V π ( Zt ) ) 2 ) ( 4 ) .This assumption is heuristic but reasonable : If we have an input Xt and its relative label Yt = Rt , we now have Xt ’ s representation Zt , naturally we want to train our decision function V π ( Zt ) to approximate the true label Yt . 
If we set our target distribution to be C · exp ( −α ( Rt − V π ( Zt ) ) 2 ) , the probability decreases as V π ( Zt ) gets far from Yt while increases as V π ( Zt ) gets close to Yt . For simplicity , we just write P ( R|Z ) instead of P ( Yt|Zt ) in the following context . With this assumption , equation ( 3 ) can be written as : P ⋆ ( Z|X ) = arg max P ( Z|X ) EX , R , Z∼P ( X , R , Z ) [ logP ( R|Z ) ] − βI ( X , Z ) = arg max P ( Z|X ) EX∼P ( X ) , Z∼P ( Z|X ) , R∼P ( R|Z ) [ −α ( R− V π ( Z ) ) 2 ] − βI ( X , Z ) ( 5 ) The first term looks familiar with classic mean squared error in supervisd learning . In a network with representation parameter ϕ and policy-value parameter θ , policy loss Ĵ ( Z ; θ ) in equation ( 1 ) and IB loss in ( 5 ) can be jointly written as : L ( θ , ϕ ) = EX∼P ( X ) , Z∼Pϕ ( Z|X ) [ Ĵ ( Z ; θ ) + ER [ −α ( R− V π ( Z ; θ ) ) 2 ] ︸ ︷︷ ︸ J ( Z ; θ ) ] − βI ( X , Z ; ϕ ) ( 6 ) where I ( X , Z ; ϕ ) denotes the MI between X and Z ∼ Pϕ ( ·|X ) . Notice that J ( Z ; θ ) itself is a standard loss function in RL as showed in ( 2 ) . Finally we get the ultimate formalization of IB framework in reinforcement learning : Pϕ∗ ( Z|X ) = arg max Pϕ ( Z|X ) EX∼P ( X ) , Z∼Pϕ ( Z|X ) [ J ( Z ; θ ) ] − βI ( X , Z ; ϕ ) ( 7 ) The following theorem shows that if the mutual information I ( X , Z ) of our framework and common RL framework are close , then our framework is near-optimality . Theorem 1 ( Near-optimality theorem ) . Policy πr = πθr , parameter ϕr , optimal policy π⋆ = πθ⋆ and its relevant representation parameter ϕ⋆ are defined as following : θr , ϕr = argmin θ , ϕ EPϕ ( X , Z ) [ log Pϕ ( Z|X ) Pϕ ( Z ) − 1 β J ( Z ; θ ) ] ( 8 ) θ⋆ , ϕ⋆ = argmin θ , ϕ EPϕ ( X , Z ) [ − 1 β J ( Z ; θ ) ] ( 9 ) Define Jπr as EPϕr ( X , Z ) [ J ( Z ; θr ) ] and Jπ ⋆ as EPϕ⋆ ( X , Z ) [ J ( Z ; θ⋆ ) ] . Assume that for any ϵ > 0 , |I ( X , Z ; ϕ⋆ ) − I ( X , Z ; ϕr ) | < ϵβ , we have |Jπ r − Jπ⋆ | < ϵ . 4.2 Target Distribution Derivation and Variational Lower Bound Construction In this section we first derive the target distribution in ( 7 ) and then seek to optimize it by constructing a variational lower bound . 1Mutual information I ( X , Y ) is defined as ∫ dXdZP ( X , Z ) log P ( X , Z ) P ( X ) P ( Z ) , conditional entropy H ( Y |Z ) is defined as − ∫ dY dZP ( Y , Z ) logP ( Y |Z ) . In a binary-classification problem , − logP ( Y |Z ) = − ( 1− Y ) log ( 1− Ŷ ( Z ) ) − Y log ( Ŷ ( Z ) ) . We would like to solve the optimization problem in ( 7 ) : max Pϕ ( Z|X ) EX∼P ( X ) , Z∼Pϕ ( Z|X ) [ J ( Z ; θ ) − β logPϕ ( Z|X ) ︸ ︷︷ ︸ L1 ( θ , ϕ ) +β logPϕ ( Z ) ︸ ︷︷ ︸ L2 ( θ , ϕ ) ] ( 10 ) Combining the derivative of L1 and L2 and setting their summation to 0 , we can get that Pϕ ( Z|X ) ∝ Pϕ ( Z ) exp ( 1 β J ( Z ; θ ) ) ( 11 ) We provide a rigorous derivation of ( 11 ) in the appendix ( A.2 ) . We note that though our derivation is over the representation space instead of the whole network parameter space , the optimization problem ( 10 ) and the resulting distribution ( 11 ) are quite similar to the one studied in ( Liu et al. , 2017 ) in the context of Bayesian inference . However , we stress that our formulation follows from the information bottleneck framework , and is mathematically different from that in ( Liu et al. , 2017 ) . In particular , the difference lies in the term L2 , which depends on the the distribution Pϕ ( Z | X ) we want to optimize ( while in ( Liu et al. , 2017 ) , the corresponding term is a fixed prior ) . 
The following theorem shows that the distribution in ( 11 ) is an optimal target distribution ( with respect to the IB objective L ) . The proof can be found in the appendix ( A.3 ) . Theorem 2 . ( Representation Improvement Theorem ) Consider the objective function L ( θ , ϕ ) = EX∼P ( X ) , Z∼Pϕ ( Z|X ) [ J ( Z ; θ ) ] − βI ( X , Z ; ϕ ) , given a fixed policy-value parameter θ , representation distribution Pϕ ( Z|X ) and state distribution P ( X ) . Define a new representation distribution : Pϕ̂ ( Z|X ) ∝ Pϕ ( Z ) exp ( 1 βJ ( Z ; θ ) ) . We have L ( θ , ϕ̂ ) ≥ L ( θ , ϕ ) . Though we have derived the optimal target distribution , it is still difficult to compute Pϕ ( Z ) . In order to resolve this problem , we construct a variational lower bound with a distribution U ( Z ) which is independent of ϕ . Notice that ∫ dZPϕ ( Z ) logPϕ ( Z ) ≥ ∫ dZPϕ ( Z ) logU ( Z ) . Now , we can derive a lower bound of L ( θ , ϕ ) in ( 6 ) as follows : L ( θ , ϕ ) = EX , Z [ J ( Z ; θ ) − β logPϕ ( Z|X ) ] + β ∫ dZPϕ ( Z ) logPϕ ( Z ) ≥ EX , Z [ J ( Z ; θ ) − β logPϕ ( Z|X ) ] + β ∫ dZPϕ ( Z ) logU ( Z ) = EX∼P ( X ) , Z∼Pϕ ( Z|X ) [ J ( Z ; θ ) − β logPϕ ( Z|X ) + β logU ( Z ) ] = L̂ ( θ , ϕ ) ( 12 ) Naturally the target distribution of maximizing the lower bound is : Pϕ ( Z|X ) ∝ U ( Z ) exp ( 1 β J ( Z ; θ ) ) ( 13 ) 4.3 Optimization by Stein Variational Gradient Descent Next we utilize the method in ( Liu & Wang , 2016 ) ( Liu et al. , 2017 ) ( Haarnoja et al. , 2017 ) to optimize the lower bound . Stein variational gradient descent ( SVGD ) is a non-parametric variational inference algorithm that leverages efficient deterministic dynamics to transport a set of particles { Zi } ni=1 to approximate given target distributions Q ( Z ) . We choose SVGD to optimize the lower bound because of its ability to handle unnormalized target distributions such as ( 13 ) . Briefly , SVGD iteratively updates the “ particles ” { Zi } ni=1 via a direction function Φ⋆ ( · ) in the unit ball of a reproducing kernel Hilbert space ( RKHS ) H : Zi ← Zi + ϵΦ⋆ ( Zi ) ( 14 ) where Φ∗ ( · ) is chosen as a direction to maximally decrease2 the KL divergence between the particles ’ distribution P ( Z ) and the target distribution Q ( Z ) = Q̂ ( Z ) C ( Q̂ is unnormalized 2In fact , Φ∗ is chosen to maximize the directional derivative of F ( P ) = −DKL ( P ||Q ) , which appears to be the ” gradient ” of F distribution , C is normalized coefficient ) in the sense that Φ⋆ ← argmax ϕ∈H { − d dϵ DKL ( P [ ϵϕ ] ||Q ) s.t . ∥Φ∥H ≤ 1 } ( 15 ) where P [ ϵΦ ] is the distribution of Z + ϵΦ ( Z ) and P is the distribution of Z . ( Liu & Wang , 2016 ) showed a closed form of this direction : Φ ( Zi ) = EZj∼P [ K ( Zj , Zi ) ∇Ẑ log Q̂ ( Ẑ ) |Ẑ=Zj +∇ẐK ( Ẑ , Zi ) |Ẑ=Zj ] ( 16 ) where K is a kernel function ( typically an RBF kernel function ) . Notice that C has been omitted . In our case , we seek to minimize DKL ( Pϕ ( ·|X ) || U ( · ) exp ( 1β J ( · ; θ ) ) C ) |C=∫ dZU ( Z ) exp ( 1β J ( Z ; θ ) ) , which is equivalent to maximize L̂ ( θ , ϕ ) , the greedy direction yields : Φ ( Zi ) = EZj∼Pϕ ( ·|X ) [ K ( Zj , Zi ) ∇Ẑ ( 1 β J ( Ẑ ; θ ) + logU ( Ẑ ) ) |Ẑ=Zj +∇ẐK ( Ẑ , Zi ) |Ẑ=Zj ] ( 17 ) In practice we replace logU ( Ẑ ) with ζ logU ( Ẑ ) where ζ is a coefficient that controls the magnitude of ∇Ẑ logU ( Ẑ ) . Notice that Φ ( Zi ) is the greedy direction that Zi moves towards L̂ ( θ , ϕ ) ’ s target distribution as showed in ( 13 ) ( distribution that maximizes L̂ ( θ , ϕ ) ) . 
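Before continuing the derivation, here is a small sketch of the direction in (16)/(17). The RBF kernel bandwidth, the particle count, and the grad_log_target callable are illustrative assumptions; in the paper this gradient of the unnormalized log target would be ∇_Z (J(Z; θ)/β + ζ log U(Z)). Each particle is moved by a kernel-weighted average of the other particles' target gradients plus a repulsive kernel-gradient term.

```python
import numpy as np

def rbf_kernel(zj, zi, h):
    return np.exp(-np.sum((zj - zi) ** 2) / (2 * h ** 2))

def svgd_direction(particles, grad_log_target, h=1.0):
    """Phi(Z_i) as in Eq. (16)/(17): kernel-smoothed gradient of the unnormalized
    log target plus a repulsive term from the kernel gradient."""
    n, _ = particles.shape
    grads = np.stack([grad_log_target(z) for z in particles])    # d/dZ log Q_hat at each Z_j
    phi = np.zeros_like(particles)
    for i in range(n):
        for j in range(n):
            k = rbf_kernel(particles[j], particles[i], h)
            grad_k = k * (particles[i] - particles[j]) / h ** 2  # d/dZ_j K(Z_j, Z_i)
            phi[i] += k * grads[j] + grad_k
    return phi / n

# Toy example: particles drift towards a standard normal target, log Q_hat(z) = -||z||^2 / 2.
rng = np.random.default_rng(0)
Z = rng.normal(loc=3.0, size=(50, 2))
for _ in range(200):
    Z = Z + 0.1 * svgd_direction(Z, lambda z: -z)
print(Z.mean(axis=0))   # close to the target mean (0, 0)
```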
This means Φ ( Zi ) is the gradient of L̂ ( Zi , θ , ϕ ) : ∂L̂ ( Zi , θ , ϕ ) ∂Zi ∝ Φ ( Zi ) . Since our ultimate purpose is to update ϕ , by the chain rule , ∂L̂ ( Zi , θ , ϕ ) ∂ϕ ∝ Φ ( Zi ) ∂Zi ∂ϕ . Then for L̂ ( θ , ϕ ) = EPϕ ( X , Z ) [ L̂ ( Z , θ , ϕ ) ] : ∂L̂ ( θ , ϕ ) ∂ϕ ∝ EX∼P ( X ) , Zi∼Pϕ ( ·|X ) [ Φ ( Zi ) ∂Zi ∂ϕ ] ( 18 ) Φ ( Zi ) is given in equation ( 17 ) . In practice we update the policy-value parameter θ by common policy gradient algorithm since : ∂L̂ ( θ , ϕ ) ∂θ = EPϕ ( X , Z ) [ ∂J ( Z ; θ ) ∂θ ] ( 19 ) and update representation parameter ϕ by ( 18 ) . 4.4 Verify the information E-C process with MINE This section we verify that the information E-C process exists in deep RL with MINE and our framework accelerates this process . Mutual information neural estimation ( MINE ) is an algorithm that can compute mutual information ( MI ) between two high dimensional random variables more accurately and efficiently . Specifically , for random variables X and Z , assume T to be a function of X and Z , the calculation of I ( X , Z ) can be transformed to the following optimization problem : I ( X , Z ) = max T EP ( X , Z ) [ T ] − log ( EP ( X ) ⊗P ( Z ) [ expT ] ) ( 20 ) The optimal function T ⋆ ( X , Z ) can be approximated by updating a neural network T ( X , Z ; η ) . With the aid of this powerful tool , we would like to visualize the mutual information between input state X and its relative representation Z : Every a few update steps , we sample a batch of inputs and their relevant representations { Xi , Zi } ni=1 and compute their MI with MINE , every time we train MINE ( update η ) we just shuffle { Zi } ni=1 and roughly assume the shuffled representations { Zshuffledi } ni=1 to be independent with { Xi } ni=1 : I ( X , Z ) ≈ max η 1 n n∑ i=1 [ T ( Xi , Zi ; η ) ] − log ( 1 n n∑ i=1 [ expT ( Xi , Z shuffled i ; η ) ] ) ( 21 ) Figure ( 1 ) is the tensorboard graph of mutual information estimation between X and Z in Atari game Pong , x-axis is update steps and y-axis is MI estimation . More details and results can be found in appendix ( A.6 ) and ( A.7 ) . As we can see , in both A2C with our framework and common A2C , the MI first increases to encode more information from inputs ( ” remember ” the inputs ) , then decreases to drop irrelevant information from inputs ( ” forget ” the useless information ) . And clearly , our framework extracts faster and compresses faster than common A2C as showed in figure ( 1 ) ( b ) . After completing the visualization of MI with MINE , we analyze the relationship between our framework and MINE . According to ( Belghazi et al. , 2018 ) , the optimal function T ∗ in ( 20 ) goes as follows : expT ∗ ( X , Z ; η ) = C Pϕ ( X , Z ) P ( X ) Pϕ ( Z ) s.t . C = EP ( X ) ⊗Pϕ ( Z ) [ exp T∗ ] ( 22 ) Combining the result with Theorem ( 2 ) , we get : expT ∗ ( X , Z ; η ) = C Pϕ ( Z|X ) Pϕ ( Z ) ∝ exp ( 1 β J ( Z ; θ ) ) ( 23 ) Through this relationship , we theoretically derive an algorithm that can directly optimize our framework without constructing the lower bound , we put this derivation in the appendix ( A.5 ) .
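As a concrete reference for the estimator in (20)-(21), here is a minimal PyTorch sketch (the statistics network architecture, its size, and the toy training loop are illustrative assumptions) of estimating I(X, Z) from a batch by shuffling the representations to approximate samples from the product of marginals.

```python
import math

import torch
import torch.nn as nn

class StatisticsNet(nn.Module):
    """T(x, z; eta) used by the MINE estimator."""
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def mine_lower_bound(T, x, z):
    """Eq. (21): mean of T on joint samples minus log of the mean of exp(T)
    on shuffled (approximately independent) pairs."""
    joint = T(x, z).mean()
    z_shuffled = z[torch.randperm(z.size(0))]
    marginal = torch.logsumexp(T(x, z_shuffled), dim=0) - math.log(z.size(0))
    return joint - marginal

# Toy training loop: maximize the bound w.r.t. the statistics network parameters eta.
x_dim, z_dim, batch = 16, 4, 256
T = StatisticsNet(x_dim, z_dim)
opt = torch.optim.Adam(T.parameters(), lr=1e-4)
x, z = torch.randn(batch, x_dim), torch.randn(batch, z_dim)
for _ in range(10):
    loss = -mine_lower_bound(T, x, z)
    opt.zero_grad(); loss.backward(); opt.step()
```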
This paper proposes a representation learning algorithm for RL based on the Information Bottleneck (IB) principle. This formulation leads to the observed state X being mapped to a latent variable Z ~ P(Z | X), in such a way that the standard loss function in actor-critic RL methods is augmented with a term minimizing the mutual information between X and Z (which can be seen as a form of regularization). This results in a loss that is difficult to optimize directly in the general case: the authors thus propose to approximate it through a variational bound, using Stein variational gradient descent (SVGD) for optimization, which is based on sampling multiple Z_i’s for a given state X, so as to compute an approximate gradient for the parameters of the function mapping X to Z. Experiments show that when augmenting the A2C algorithm with this technique, (1) the mutual information I(X, Z) decreases more quickly (better « compression » of the information), and (2) better sample efficiency is observed on 5 Atari games (with also encouraging results with PPO on 3 Atari games).
SP:0b4c5468fbb7f0caaf2645c6d5c0a2159aec311d
Learning Representations in Reinforcement Learning: an Information Bottleneck Approach
1 Introduction . In training a reinforcement learning algorithm , an agent interacts with the environment , explores the ( possibly unknown ) state space , and learns a policy from the exploration sample data . In many cases , such samples are quite expensive to obtain ( e.g. , requires interactions with the physical environment ) . Hence , improving the sample efficiency of the learning algorithm is a key problem in RL and has been studied extensively in the literature . Popular techniques include experience reuse/replay , which leads to powerful off-policy algorithms ( e.g. , ( Mnih et al. , 2013 ; Silver et al. , 2014 ; Van Hasselt et al. , 2015 ; Nachum et al. , 2018a ; Espeholt et al. , 2018 ) ) , and model-based algorithms ( e.g. , ( Hafner et al. , 2018 ; Kaiser et al. , 2019 ) ) . Moreover , it is known that effective representations can greatly reduce the sample complexity in RL . This can be seen from the following motivating example : In the environment of a classical Atari game : Seaquest , it may take dozens of millions samples to converge to an optimal policy when the input states are raw images ( more than 28,000 dimensions ) , while it takes less samples when the inputs are 128-dimension pre-defined RAM data ( Sygnowski & Michalewski , 2016 ) . Clearly , the RAM data contain much less redundant information irrelevant to the learning process than the raw images . Thus , we argue that an efficient representation is extremely crucial to the sample efficiency . In this paper , we try to improve the sample efficiency in RL from the perspective of representation learning using the celebrated information bottleneck framework ( Tishby et al. , 2000 ) . In standard deep learning , the experiments in ( Shwartz-Ziv & Tishby , 2017 ) show that during the training process , the neural network first ” remembers ” the inputs by increasing the mutual information between the inputs and the representation variables , then compresses the inputs to efficient representation related to the learning task by discarding redundant information from inputs ( decreasing the mutual information between inputs and representation variables ) . We call this phenomena ” information extraction-compression process ” ( information E-C process ) . Our experiments shows that , similar to the results shown in ( Shwartz-Ziv & Tishby , 2017 ) , we first ( to the best of our knowledge ) observe the information extraction-compression phenomena in the context of deep RL ( we need to use MINE ( Belghazi et al. , 2018 ) for estimating the mutual information ) . This observation motivates us to adopt the information bottleneck ( IB ) framework in reinforcement learning , in order to accelerate the extraction-compression process . The IB framework is intended to explicitly enforce RL agents to learn an efficient representation , hence improving the sample efficiency , by discarding irrelevant information from raw input data . Our technical contributions can be summarized as follows : 1 . We observe that the ” information extraction-compression process ” also exists in the context of deep RL ( using MINE ( Belghazi et al. , 2018 ) to estimate the mutual information ) . 2 . We derive the optimization problem of our information bottleneck framework in RL . In order to solve the optimization problem , we construct a lower bound and use the Stein variational gradient method developed in ( Liu et al. , 2017 ) to optimize the lower bound . 3 . 
3. We show that our framework can accelerate the information extraction-compression process. Our experimental results also show that combining actor-critic algorithms (such as A2C and PPO) with our framework is more sample-efficient than the original versions. 4. We analyze the relationship between our framework and MINE; through this relationship, we theoretically derive an algorithm to optimize our IB framework without constructing the lower bound. Finally, we note that our IB method is orthogonal to other methods for improving sample efficiency, and incorporating it into other off-policy and model-based algorithms is an interesting direction for future work. 2 Related Work. The information bottleneck framework was first introduced in (Tishby et al., 2000), where it is solved by the iterative Blahut-Arimoto algorithm, which is infeasible to apply to deep neural networks. (Shwartz-Ziv & Tishby, 2017) try to open the black box of deep learning from the information bottleneck perspective, though the method they use to compute the mutual information is imprecise. (Alemi et al., 2016) derive a variational information bottleneck framework; however, apart from imposing a prior target distribution on the representation distribution P(Z|X), they also assume that P(Z|X) itself must be a Gaussian distribution, which limits the capacity of the representation function. (Peng et al., 2018) extend this framework to a variational discriminator bottleneck to improve GANs (Goodfellow et al., 2014), imitation learning, and inverse RL. As for improving sample efficiency, (Mnih et al., 2013; Van Hasselt et al., 2015; Nachum et al., 2018a) mainly utilize experience reuse. Besides experience reuse, (Silver et al., 2014; Fujimoto et al., 2018) learn deterministic policies, and (Espeholt et al., 2018) seeks to mitigate the delay of off-policy learning. (Hafner et al., 2018; Kaiser et al., 2019) learn the environment model. Some other powerful techniques can be found in (Botvinick et al., 2019). State representation learning has been studied extensively; readers can find classic works in the overview (Lesort et al., 2018). Beyond this overview, (Nachum et al., 2018b) provides a theoretical foundation for maintaining the optimality of the representation space. (Bellemare et al., 2019) proposes a new perspective on representation learning in RL based on geometric properties of the space of value functions. (Abel et al., 2019) learns representations via the information bottleneck (IB) in imitation/apprenticeship learning. To the best of our knowledge, there is no work that directly uses IB in basic RL algorithms. 3 Preliminaries. A Markov decision process (MDP) is a tuple $(X, A, R, P, \mu)$, where $X$ is the set of states, $A$ is the set of actions, $R : X \times A \times X \to \mathbb{R}$ is the reward function, $P : X \times A \times X \to [0, 1]$ is the transition probability function (where $P(X'|X, a)$ is the probability of transitioning to state $X'$ given that the previous state is $X$ and the agent took action $a$ in $X$), and $\mu : X \to [0, 1]$ is the starting-state distribution. A policy $\pi : X \to P(A)$ is a map from states to probability distributions over actions, with $\pi(a|X)$ denoting the probability of choosing action $a$ in state $X$. In reinforcement learning, we aim to select a policy $\pi$ that maximizes $K(\pi) = \mathbb{E}_{\tau\sim\pi}\big[\sum_{t=0}^{\infty} \gamma^t R(X_t, a_t, X_{t+1})\big]$; with a slight abuse of notation, we write $R(X_t, a_t, X_{t+1}) = r_t$.
Here $\gamma \in [0, 1)$ is a discount factor and $\tau$ denotes a trajectory $(X_0, a_0, X_1, a_1, \ldots)$. Define the state value function as $V^\pi(X) = \mathbb{E}_{\tau\sim\pi}\big[\sum_{t=0}^{\infty} \gamma^t r_t \mid X_0 = X\big]$, the expected return of policy $\pi$ from state $X$, and the state-action value function as $Q^\pi(X, a) = \mathbb{E}_{\tau\sim\pi}\big[\sum_{t=0}^{\infty} \gamma^t r_t \mid X_0 = X, a_0 = a\big]$, the expected return of policy $\pi$ after taking action $a$ in state $X$. Actor-critic algorithms combine the advantages of policy gradient methods and value-function-based methods; a well-known example is A2C (Mnih et al., 2016). Specifically, when the policy $\pi(a|X; \theta)$ is parameterized by $\theta$, A2C uses the following equation to approximate the true policy gradient $\nabla_\theta K(\pi) = \nabla_\theta \hat{J}(\theta)$:
$$\nabla_\theta \hat{J}(\theta) \approx \sum_{t=0}^{\infty} \nabla_\theta \big[ \log \pi(a_t|X_t; \theta)\,(R_t - b(X_t)) + \alpha_2 H(\pi(\cdot|X_t)) \big] = \sum_{t=0}^{\infty} \nabla_\theta \hat{J}(X_t; \theta) \quad (1)$$
where $R_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$ is the accumulated return from time step $t$, $H(p)$ is the entropy of distribution $p$, and $b(X_t)$ is a baseline function, commonly replaced by $V^\pi(X_t)$. A2C also minimizes the mean squared error between $R_t$ and the value function $V^\pi(X_t)$. Thus, in practice, the total objective function of A2C can be written as:
$$J(\theta) \approx \sum_{t=0}^{\infty} \Big[ \log \pi(a_t|X_t; \theta)\,(R_t - V^\pi(X_t)) - \alpha_1 \|R_t - V^\pi(X_t)\|^2 + \alpha_2 H(\pi(\cdot|X_t)) \Big] = \sum_{t=0}^{\infty} J(X_t; \theta) \quad (2)$$
where $\alpha_1, \alpha_2$ are two coefficients. In the context of representation learning in RL, $J(X_t; \theta)$ (including $V^\pi(X_t)$ and $Q^\pi(X_t, a_t)$) can be replaced by $J(Z_t; \theta)$, where $Z_t$ is a learnable low-dimensional representation of state $X_t$. For example, given a representation function $Z \sim P_\phi(\cdot|X)$ with parameter $\phi$, define $V^\pi(Z_t; X_t, \phi)|_{Z_t \sim P_\phi(\cdot|X_t)} = V^\pi(X_t)$. For simplicity, we write $V^\pi(Z_t; X_t, \phi)|_{Z_t \sim P_\phi(\cdot|X_t)}$ as $V^\pi(Z_t)$. 4 Framework. 4.1 Information Bottleneck in Reinforcement Learning. The information bottleneck framework is an information-theoretic framework for extracting the relevant information, or yielding a representation, that an input $X \in \mathcal{X}$ contains about an output $Y \in \mathcal{Y}$. An optimal representation of $X$ would capture the relevant factors and compress $X$ by discarding the irrelevant parts that do not contribute to the prediction of $Y$. In a Markovian structure $X \to Z \to Y$, where $X$ is the input, $Z$ is the representation of $X$, and $Y$ is the label of $X$, IB seeks an embedding distribution $P^\star(Z|X)$ such that:
$$P^\star(Z|X) = \arg\max_{P(Z|X)} I(Y, Z) - \beta I(X, Z) = \arg\max_{P(Z|X)} H(Y) - H(Y|Z) - \beta I(X, Z) = \arg\max_{P(Z|X)} -H(Y|Z) - \beta I(X, Z) \quad (3)$$
for every $X \in \mathcal{X}$, which corresponds to the standard cross-entropy loss of supervised learning (see Footnote 1) with an MI regularizer; $\beta$ is a coefficient that controls the magnitude of the regularizer. Next we derive an information bottleneck framework in reinforcement learning. Analogous to the label $Y$ in supervised learning as shown in (3), we take the supervising signal $Y$ in RL to be the accurate value $R_t$ of a specific state $X_t$ under a fixed policy $\pi$, which in practice can be approximated by the n-step bootstrapping function $Y_t = R_t = \sum_{i=0}^{n-2} \gamma^i r_{t+i} + \gamma^{n-1} V^\pi(Z_{t+n-1})$. Let $P(Y|Z)$ be the following distribution:
$$P(Y_t|Z_t) \propto \exp\big(-\alpha (R_t - V^\pi(Z_t))^2\big) \quad (4)$$
This assumption is heuristic but reasonable: given an input $X_t$ with label $Y_t = R_t$ and its representation $Z_t$, we naturally want to train our decision function $V^\pi(Z_t)$ to approximate the true label $Y_t$.
If we set our target distribution to be $C \cdot \exp(-\alpha (R_t - V^\pi(Z_t))^2)$, the probability decreases as $V^\pi(Z_t)$ moves away from $Y_t$ and increases as $V^\pi(Z_t)$ approaches $Y_t$. For simplicity, we write $P(R|Z)$ instead of $P(Y_t|Z_t)$ in what follows. With this assumption, equation (3) can be written as:
$$P^\star(Z|X) = \arg\max_{P(Z|X)} \mathbb{E}_{X,R,Z \sim P(X,R,Z)}[\log P(R|Z)] - \beta I(X, Z) = \arg\max_{P(Z|X)} \mathbb{E}_{X \sim P(X),\, Z \sim P(Z|X),\, R \sim P(R|Z)}\big[-\alpha (R - V^\pi(Z))^2\big] - \beta I(X, Z) \quad (5)$$
The first term resembles the classic mean squared error of supervised learning. In a network with representation parameter $\phi$ and policy-value parameter $\theta$, the policy loss $\hat{J}(Z; \theta)$ in equation (1) and the IB loss in (5) can be jointly written as:
$$L(\theta, \phi) = \mathbb{E}_{X \sim P(X),\, Z \sim P_\phi(Z|X)}\Big[\underbrace{\hat{J}(Z; \theta) + \mathbb{E}_R\big[-\alpha (R - V^\pi(Z; \theta))^2\big]}_{J(Z; \theta)}\Big] - \beta I(X, Z; \phi) \quad (6)$$
where $I(X, Z; \phi)$ denotes the MI between $X$ and $Z \sim P_\phi(\cdot|X)$. Notice that $J(Z; \theta)$ itself is a standard loss function in RL, as shown in (2). Finally, we obtain the formulation of the IB framework in reinforcement learning:
$$P_{\phi^*}(Z|X) = \arg\max_{P_\phi(Z|X)} \mathbb{E}_{X \sim P(X),\, Z \sim P_\phi(Z|X)}[J(Z; \theta)] - \beta I(X, Z; \phi) \quad (7)$$
The following theorem shows that if the mutual information $I(X, Z)$ of our framework and that of the common RL framework are close, then our framework is near-optimal. Theorem 1 (Near-optimality theorem). The policy $\pi^r = \pi_{\theta^r}$ with parameter $\phi^r$, and the optimal policy $\pi^\star = \pi_{\theta^\star}$ with its representation parameter $\phi^\star$, are defined as follows:
$$\theta^r, \phi^r = \arg\min_{\theta, \phi} \mathbb{E}_{P_\phi(X,Z)}\Big[\log \frac{P_\phi(Z|X)}{P_\phi(Z)} - \frac{1}{\beta} J(Z; \theta)\Big] \quad (8)$$
$$\theta^\star, \phi^\star = \arg\min_{\theta, \phi} \mathbb{E}_{P_\phi(X,Z)}\Big[-\frac{1}{\beta} J(Z; \theta)\Big] \quad (9)$$
Define $J^{\pi^r}$ as $\mathbb{E}_{P_{\phi^r}(X,Z)}[J(Z; \theta^r)]$ and $J^{\pi^\star}$ as $\mathbb{E}_{P_{\phi^\star}(X,Z)}[J(Z; \theta^\star)]$. Assume that for any $\epsilon > 0$, $|I(X, Z; \phi^\star) - I(X, Z; \phi^r)| < \epsilon/\beta$; then $|J^{\pi^r} - J^{\pi^\star}| < \epsilon$. 4.2 Target Distribution Derivation and Variational Lower Bound Construction. In this section we first derive the target distribution in (7) and then seek to optimize it by constructing a variational lower bound. Footnote 1: Mutual information is defined as $I(X, Z) = \int dX\, dZ\, P(X, Z) \log \frac{P(X, Z)}{P(X) P(Z)}$, and conditional entropy as $H(Y|Z) = -\int dY\, dZ\, P(Y, Z) \log P(Y|Z)$. In a binary classification problem, $-\log P(Y|Z) = -(1 - Y) \log(1 - \hat{Y}(Z)) - Y \log(\hat{Y}(Z))$. We would like to solve the optimization problem in (7):
$$\max_{P_\phi(Z|X)} \mathbb{E}_{X \sim P(X),\, Z \sim P_\phi(Z|X)}\Big[\underbrace{J(Z; \theta) - \beta \log P_\phi(Z|X)}_{L_1(\theta, \phi)} + \underbrace{\beta \log P_\phi(Z)}_{L_2(\theta, \phi)}\Big] \quad (10)$$
Combining the derivatives of $L_1$ and $L_2$ and setting their sum to 0, we obtain
$$P_\phi(Z|X) \propto P_\phi(Z) \exp\big(\tfrac{1}{\beta} J(Z; \theta)\big) \quad (11)$$
We provide a rigorous derivation of (11) in the appendix (A.2). We note that although our derivation is over the representation space instead of the whole network parameter space, the optimization problem (10) and the resulting distribution (11) are quite similar to those studied in (Liu et al., 2017) in the context of Bayesian inference. However, we stress that our formulation follows from the information bottleneck framework and is mathematically different from that in (Liu et al., 2017). In particular, the difference lies in the term $L_2$, which depends on the distribution $P_\phi(Z|X)$ we want to optimize (while in (Liu et al., 2017), the corresponding term is a fixed prior).
The following theorem shows that the distribution in (11) is an optimal target distribution (with respect to the IB objective $L$). The proof can be found in the appendix (A.3). Theorem 2 (Representation Improvement Theorem). Consider the objective function $L(\theta, \phi) = \mathbb{E}_{X \sim P(X),\, Z \sim P_\phi(Z|X)}[J(Z; \theta)] - \beta I(X, Z; \phi)$, given a fixed policy-value parameter $\theta$, representation distribution $P_\phi(Z|X)$, and state distribution $P(X)$. Define a new representation distribution $P_{\hat{\phi}}(Z|X) \propto P_\phi(Z) \exp\big(\tfrac{1}{\beta} J(Z; \theta)\big)$. Then $L(\theta, \hat{\phi}) \geq L(\theta, \phi)$. Although we have derived the optimal target distribution, it is still difficult to compute $P_\phi(Z)$. To resolve this problem, we construct a variational lower bound with a distribution $U(Z)$ that is independent of $\phi$. Noticing that $\int dZ\, P_\phi(Z) \log P_\phi(Z) \geq \int dZ\, P_\phi(Z) \log U(Z)$, we can derive a lower bound of $L(\theta, \phi)$ in (6) as follows:
$$L(\theta, \phi) = \mathbb{E}_{X,Z}[J(Z; \theta) - \beta \log P_\phi(Z|X)] + \beta \int dZ\, P_\phi(Z) \log P_\phi(Z) \geq \mathbb{E}_{X,Z}[J(Z; \theta) - \beta \log P_\phi(Z|X)] + \beta \int dZ\, P_\phi(Z) \log U(Z) = \mathbb{E}_{X \sim P(X),\, Z \sim P_\phi(Z|X)}[J(Z; \theta) - \beta \log P_\phi(Z|X) + \beta \log U(Z)] = \hat{L}(\theta, \phi) \quad (12)$$
Naturally, the target distribution that maximizes the lower bound is:
$$P_\phi(Z|X) \propto U(Z) \exp\big(\tfrac{1}{\beta} J(Z; \theta)\big) \quad (13)$$
4.3 Optimization by Stein Variational Gradient Descent. Next we use the method of (Liu & Wang, 2016; Liu et al., 2017; Haarnoja et al., 2017) to optimize the lower bound. Stein variational gradient descent (SVGD) is a non-parametric variational inference algorithm that leverages efficient deterministic dynamics to transport a set of particles $\{Z_i\}_{i=1}^n$ to approximate a given target distribution $Q(Z)$. We choose SVGD to optimize the lower bound because of its ability to handle unnormalized target distributions such as (13). Briefly, SVGD iteratively updates the "particles" $\{Z_i\}_{i=1}^n$ via a direction function $\Phi^\star(\cdot)$ in the unit ball of a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$:
$$Z_i \leftarrow Z_i + \epsilon \Phi^\star(Z_i) \quad (14)$$
where $\Phi^\star(\cdot)$ is chosen as the direction that maximally decreases the KL divergence between the particles' distribution $P(Z)$ and the target distribution $Q(Z) = \hat{Q}(Z)/C$ ($\hat{Q}$ is an unnormalized distribution and $C$ is the normalizing constant), in the sense that
$$\Phi^\star \leftarrow \arg\max_{\Phi \in \mathcal{H}} \Big\{ -\frac{d}{d\epsilon} D_{KL}(P_{[\epsilon \Phi]} \,\|\, Q) \;\; \text{s.t.} \;\; \|\Phi\|_{\mathcal{H}} \leq 1 \Big\} \quad (15)$$
where $P_{[\epsilon \Phi]}$ is the distribution of $Z + \epsilon \Phi(Z)$ and $P$ is the distribution of $Z$. (Footnote 2: In fact, $\Phi^\star$ is chosen to maximize the directional derivative of $F(P) = -D_{KL}(P \| Q)$, which can be viewed as the "gradient" of $F$.) (Liu & Wang, 2016) gives a closed form for this direction:
$$\Phi(Z_i) = \mathbb{E}_{Z_j \sim P}\big[ K(Z_j, Z_i)\, \nabla_{\hat{Z}} \log \hat{Q}(\hat{Z})|_{\hat{Z}=Z_j} + \nabla_{\hat{Z}} K(\hat{Z}, Z_i)|_{\hat{Z}=Z_j} \big] \quad (16)$$
where $K$ is a kernel function (typically an RBF kernel); notice that $C$ has been omitted. In our case, we seek to minimize $D_{KL}\big(P_\phi(\cdot|X) \,\big\|\, \tfrac{1}{C}\, U(\cdot) \exp(\tfrac{1}{\beta} J(\cdot\,; \theta))\big)$ with $C = \int dZ\, U(Z) \exp(\tfrac{1}{\beta} J(Z; \theta))$, which is equivalent to maximizing $\hat{L}(\theta, \phi)$; the greedy direction becomes:
$$\Phi(Z_i) = \mathbb{E}_{Z_j \sim P_\phi(\cdot|X)}\big[ K(Z_j, Z_i)\, \nabla_{\hat{Z}} \big(\tfrac{1}{\beta} J(\hat{Z}; \theta) + \log U(\hat{Z})\big)|_{\hat{Z}=Z_j} + \nabla_{\hat{Z}} K(\hat{Z}, Z_i)|_{\hat{Z}=Z_j} \big] \quad (17)$$
In practice we replace $\log U(\hat{Z})$ with $\zeta \log U(\hat{Z})$, where $\zeta$ is a coefficient that controls the magnitude of $\nabla_{\hat{Z}} \log U(\hat{Z})$. Notice that $\Phi(Z_i)$ is the greedy direction along which $Z_i$ moves towards the target distribution of $\hat{L}(\theta, \phi)$ shown in (13) (the distribution that maximizes $\hat{L}(\theta, \phi)$); a small numerical sketch of this update direction is given below.
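To make the SVGD update direction in (16)-(17) concrete, here is a minimal NumPy sketch that computes $\Phi(Z_i)$ for a batch of particles with an RBF kernel. The function name, the fixed bandwidth, and the externally supplied per-particle gradient of $\tfrac{1}{\beta} J(\hat{Z}; \theta) + \zeta \log U(\hat{Z})$ are illustrative assumptions, not part of the paper's implementation.

    import numpy as np

    def svgd_direction(Z, grad_log_target, bandwidth=1.0):
        # Z: (n, d) particles Z_i sampled from P_phi(.|X)
        # grad_log_target: (n, d) gradient of (J(Z)/beta + zeta*log U(Z)) at each particle (assumed given)
        diffs = Z[:, None, :] - Z[None, :, :]                 # diffs[j, i] = Z_j - Z_i
        sq_dists = np.sum(diffs ** 2, axis=-1)
        K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))        # K[j, i] = k(Z_j, Z_i)
        grad_K = -diffs / (bandwidth ** 2) * K[..., None]     # gradient of k(Z_j, Z_i) w.r.t. Z_j
        # Eq. (16)/(17): kernel-weighted target gradient plus a repulsive term, averaged over Z_j
        phi = (K.T @ grad_log_target + grad_K.sum(axis=0)) / Z.shape[0]
        return phi                                            # phi[i] approximates Phi(Z_i)

The resulting direction is what enters the chain-rule update of the representation parameter described next.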
This means that $\Phi(Z_i)$ acts as the gradient of $\hat{L}(Z_i, \theta, \phi)$: $\frac{\partial \hat{L}(Z_i, \theta, \phi)}{\partial Z_i} \propto \Phi(Z_i)$. Since our ultimate purpose is to update $\phi$, by the chain rule, $\frac{\partial \hat{L}(Z_i, \theta, \phi)}{\partial \phi} \propto \Phi(Z_i) \frac{\partial Z_i}{\partial \phi}$. Then, for $\hat{L}(\theta, \phi) = \mathbb{E}_{P_\phi(X,Z)}[\hat{L}(Z, \theta, \phi)]$:
$$\frac{\partial \hat{L}(\theta, \phi)}{\partial \phi} \propto \mathbb{E}_{X \sim P(X),\, Z_i \sim P_\phi(\cdot|X)}\Big[\Phi(Z_i) \frac{\partial Z_i}{\partial \phi}\Big] \quad (18)$$
where $\Phi(Z_i)$ is given in equation (17). In practice, we update the policy-value parameter $\theta$ with a standard policy gradient algorithm, since
$$\frac{\partial \hat{L}(\theta, \phi)}{\partial \theta} = \mathbb{E}_{P_\phi(X,Z)}\Big[\frac{\partial J(Z; \theta)}{\partial \theta}\Big] \quad (19)$$
and update the representation parameter $\phi$ by (18). 4.4 Verifying the information E-C process with MINE. In this section we verify, using MINE, that the information E-C process exists in deep RL and that our framework accelerates this process. Mutual information neural estimation (MINE) is an algorithm that can estimate the mutual information (MI) between two high-dimensional random variables accurately and efficiently. Specifically, for random variables $X$ and $Z$, let $T$ be a function of $X$ and $Z$; the calculation of $I(X, Z)$ can then be transformed into the following optimization problem:
$$I(X, Z) = \max_T \mathbb{E}_{P(X,Z)}[T] - \log\big(\mathbb{E}_{P(X) \otimes P(Z)}[\exp T]\big) \quad (20)$$
The optimal function $T^\star(X, Z)$ can be approximated by training a neural network $T(X, Z; \eta)$. With the aid of this tool, we visualize the mutual information between the input state $X$ and its representation $Z$: every few update steps, we sample a batch of inputs and their representations $\{X_i, Z_i\}_{i=1}^n$ and estimate their MI with MINE. Each time we train MINE (i.e., update $\eta$), we simply shuffle $\{Z_i\}_{i=1}^n$ and roughly assume the shuffled representations $\{Z_i^{\mathrm{shuffled}}\}_{i=1}^n$ to be independent of $\{X_i\}_{i=1}^n$:
$$I(X, Z) \approx \max_\eta \frac{1}{n} \sum_{i=1}^n T(X_i, Z_i; \eta) - \log\Big(\frac{1}{n} \sum_{i=1}^n \exp T(X_i, Z_i^{\mathrm{shuffled}}; \eta)\Big) \quad (21)$$
Figure 1 shows the TensorBoard curve of the estimated mutual information between $X$ and $Z$ in the Atari game Pong; the x-axis is the number of update steps and the y-axis is the MI estimate. More details and results can be found in appendices (A.6) and (A.7). As we can see, for both A2C with our framework and plain A2C, the MI first increases to encode more information from the inputs ("remember" the inputs) and then decreases to drop irrelevant information from the inputs ("forget" the useless information). Clearly, our framework extracts and compresses faster than plain A2C, as shown in Figure 1(b). Having visualized the MI with MINE, we analyze the relationship between our framework and MINE. According to (Belghazi et al., 2018), the optimal function $T^*$ in (20) satisfies:
$$\exp T^*(X, Z; \eta) = C\, \frac{P_\phi(X, Z)}{P(X) P_\phi(Z)} \quad \text{s.t.} \quad C = \mathbb{E}_{P(X) \otimes P_\phi(Z)}[\exp T^*] \quad (22)$$
Combining this result with Theorem 2, we get:
$$\exp T^*(X, Z; \eta) = C\, \frac{P_\phi(Z|X)}{P_\phi(Z)} \propto \exp\big(\tfrac{1}{\beta} J(Z; \theta)\big) \quad (23)$$
Through this relationship, we theoretically derive an algorithm that directly optimizes our framework without constructing the lower bound; we give this derivation in the appendix (A.5). A minimal sketch of the batch MI estimate in (21) is given below.
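The following is a minimal PyTorch-style sketch of the batch estimate in (21), assuming a small statistics network $T(X, Z; \eta)$; the architecture, dimensions, and optimizer settings are illustrative placeholders rather than the configuration used in the paper.

    import torch
    import torch.nn as nn

    class StatisticsNet(nn.Module):
        # T(X, Z; eta): a small MLP over concatenated (state, representation) pairs
        def __init__(self, x_dim, z_dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))

        def forward(self, x, z):
            return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

    def mine_mi_estimate(T, x, z):
        # Joint term uses the paired batch; the marginal term pairs x with a shuffled z,
        # which is roughly treated as a sample from P(X) x P(Z), as in Eq. (21).
        joint = T(x, z).mean()
        z_shuffled = z[torch.randperm(z.size(0))]
        log_marginal = torch.logsumexp(T(x, z_shuffled), dim=0) - torch.log(torch.tensor(float(z.size(0))))
        return joint - log_marginal   # lower-bound estimate of I(X, Z)

    # Usage sketch: maximize the estimate w.r.t. eta for a few inner steps per visualization point.
    # T = StatisticsNet(x_dim, z_dim); opt = torch.optim.Adam(T.parameters(), lr=1e-4)
    # loss = -mine_mi_estimate(T, x_batch, z_batch); opt.zero_grad(); loss.backward(); opt.step()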
In this paper, the authors propose to utilize a variational lower bound on mutual information for learning representations in reinforcement learning. To optimize the proposed variational lower bound with a more flexible encoding network, the authors propose to utilize Stein variational gradient descent (or amortized SVGD). Instead of learning the representation separately, the authors incorporate the framework into PPO or A2C, which yields a joint training framework for policy optimization.
SP:0b4c5468fbb7f0caaf2645c6d5c0a2159aec311d
Copy That! Editing Sequences by Copying Spans
1 INTRODUCTION. Intelligent systems that assist users in achieving their goals have become a focus of recent research. One class of such systems are intelligent editors that identify and correct errors in documents while they are written. Such systems are usually built on the seq2seq (Sutskever et al., 2014) framework, in which an input sequence (the current state of the document) is first encoded into a vector representation and a decoder then constructs a new sequence from this information. Many applications of the seq2seq framework require the decoder to copy some words of the input. An example is machine translation, in which most words are generated in the target language, but rare elements such as names are copied from the input. This can be implemented in an elegant manner by equipping the decoder with a facility that can "point" to words of the input, which are then copied into the output (Vinyals et al., 2015; Grave et al., 2017; Gulcehre et al., 2016; Merity et al., 2017). Editing sequences poses a different problem from other seq2seq tasks, as in many cases most of the input remains unchanged and needs to be reproduced. When using existing decoders, this requires painstaking word-by-word copying of the input. In this paper, we propose to extend a decoder with a facility to copy entire spans of the input to the output in a single step, thus greatly reducing the number of decoder steps required to generate an output. This is illustrated in Figure 1, where we show how our model inserts two new words into a sentence by copying two spans of (more than) twenty tokens each. However, this decoder extension exacerbates a well-known problem in training decoders with a copying facility: a target sequence can be generated in many different ways when an output token can be produced by different means. In our setting, a sequence of tokens can be copied token-by-token, in pairs of tokens, ..., or in just a single step. In practice, we are interested in encouraging our decoder to use as few steps as possible, both to speed up decoding at inference time and to reduce the potential for making mistakes. To this end, we derive a training objective that marginalises over all different generation sequences yielding the correct output, which implicitly encourages copying longer spans. At inference time, we solve this problem by a variation of beam search that "merges" rays in the beam that generate the same output by different means. In summary, this paper (i) introduces a new sequence decoder able to copy entire spans (Sect. 2); (ii) derives a training objective that encourages our new decoder to copy long spans when possible, as well as an adapted beam search method approximating the exact objective; and (iii) includes extensive experiments showing that the span-copying decoder improves on editing tasks on natural language and program source code (Sect. 4). 2 MODEL. The core of our new decoder is a span-copying mechanism that can be viewed as a generalisation of pointer networks used for copying single tokens (Vinyals et al., 2015; Grave et al., 2017; Gulcehre et al., 2016; Merity et al., 2017). Concretely, modern sequence decoders treat copying from the input sequence as an alternative to generating a token from the decoder vocabulary, i.e., at each step the decoder can either generate a token t from its vocabulary or copy the i-th token of the input.
We view these as potential actions the decoder can perform and denote them by Gen(t) and Copy(i). Formally, given an input sequence $in = in_1 \ldots in_n$, the probability of a target sequence $o_1 \ldots o_m$ is commonly factorised into sequentially generating all tokens of the output:
$$p(o_1 \ldots o_m \mid in) = \prod_{1 \le j \le m} p(o_j \mid in, o_1 \ldots o_{j-1}) \quad (1)$$
Here, $p(o_j \mid in, o_1 \ldots o_{j-1})$ denotes the probability of generating the token $o_j$, which is simply the probability of the Gen(t) action in the absence of a copying mechanism. (Note that all occurrences of $p$ and $q$ below are implicitly (also) conditioned on the input sequence $in$, and so we drop this in the following to improve readability.) When we can additionally copy tokens from the input, this probability is the sum of the probabilities of all correct actions. To formalise this, we denote the evaluation of an action $a$ into a concrete token as $[\![a]\!]$, where $[\![\mathrm{Gen}(t)]\!] = t$ and $[\![\mathrm{Copy}(i)]\!] = in_i$. Using $q(a \mid o)$ to denote the probability of emitting an action $a$ after generating the partial output $o$, we complete Eq. (1) by defining $p(o_j \mid o_1 \ldots o_{j-1}) = \sum_{a,\, [\![a]\!] = o_j} q(a \mid o_1 \ldots o_{j-1})$, i.e., the sum of the probabilities of all correct actions. Modelling Span Copying. In this work, we are interested in copying whole subsequences of the input, introducing a sequence copying action Copy(i:j) with $[\![\mathrm{Copy}(i{:}j)]\!] = in_i \ldots in_{j-1}$ (indexing follows the Python in[i:j] notation here). This creates a problem because the number of actions required to generate an output token sequence is no longer equal to the length of the output sequence; indeed, there may be many action sequences of different lengths that can produce the correct output. As an example, consider Fig. 2, which illustrates all action sequences generating the output a b f d e given the input a b c d e. For example, we can initially generate the token a, or copy it from the input, or copy the first two tokens. If we chose one of the first two actions, we then have the choice of either generating the token b or copying it from the input, and then have to generate the token f. Alternatively, if we initially choose to copy the first two tokens, we have to generate the token f next. We can compute the probability of generating the target sequence by traversing the diagram from right to left. $p(\epsilon \mid a\,b\,f\,d\,e)$, the probability of the empty remaining suffix, is simply the probability of emitting a stop token and requires no recursion. $p(e \mid a\,b\,f\,d)$ is the sum of the probabilities $q(\mathrm{Gen}(e) \mid a\,b\,f\,d) \cdot p(\epsilon \mid a\,b\,f\,d\,e)$ and $q(\mathrm{Copy}(4{:}5) \mid a\,b\,f\,d) \cdot p(\epsilon \mid a\,b\,f\,d\,e)$, which re-uses the term we already computed. Following this strategy, we can compute the probability of generating the output token sequence by computing the probabilities of increasingly longer suffixes of it (essentially traversing the diagram in Fig. 2 from right to left). Formally, we reformulate Eq. (1) into a recursive definition that marginalises over all different sequences of actions generating the correct output sequence, following the strategy illustrated in Fig. 2. For this we define $|a|$, the length of the output of an action, i.e., $|\mathrm{Gen}(t)| = 1$ and $|\mathrm{Copy}(i{:}j)| = j - i$. Note that we simply assume that actions Copy(i:j) with $j \le i$ do not exist.
$$p(o_{k+1} \ldots o_m \mid o_1 \ldots o_k) = \sum_{\substack{a,\, \exists \ell.\, |a| = \ell \\ [\![a]\!] = o_{k+1} \ldots o_{k+\ell}}} q(a \mid o_1 \ldots o_k) \cdot p(o_{k+\ell+1} \ldots o_m \mid o_1 \ldots o_{k+\ell}) \quad (2)$$
Note that here, the probability of generating the correct suffix is conditioned only on the sequence generated so far and not on the concrete actions that yielded it. In practice, we implement this by conditioning our modelling of $q$ at timestep $k$ on a representation $h_k$ computed from the partial output sequence $o_1 \ldots o_k$. In RNNs, this is modelled by feeding the sequence of emitted tokens into the decoder, no matter how the decoder determined to emit them; thus, one Copy(i:j) action may cause the decoder RNN to take multiple timesteps to process the copied token sequence. In causal self-attentional settings, this is simply the default behaviour. We found that using the marginalisation in Eq. (2) during training is crucial for good results. In initial experiments, we tried an ablation in which we generate a per-token loss based only on the correct actions at each output token, without taking the remainder of the sequence into account (i.e., at each point in time, we used a "multi-hot" objective in which the loss encourages picking any one of the correct actions). In this setting, training yielded a decoder which would most often copy only sequences of length one, as the objective was not explicitly penalising the choice of long action sequences. Our marginalised objective in Eq. (2) does exactly that, as it explicitly reflects the cost of having to emit more actions than necessary, pushing the model towards copying longer subsequences. Finally, note that for numerical stability our implementation works in log-probability space, as is common for such methods, implementing the summation of probabilities with the standard log-sum-exp trick. Modelling Action Choices. It remains to explain how we model the per-step action distribution $q(a \mid o)$. We assume that we have per-token encoder representations $r_1 \ldots r_n$ of all input tokens and a decoder state $h_k$ obtained after emitting the prefix $o_1 \ldots o_{k-1}$. This can be the state of an RNN cell after processing the sequence $o_1 \ldots o_k$ (potentially with attention over the input) or the representation of a self-attentional model processing that sequence. As in standard sequence decoders, we use an output embedding projection applied to $h_k$ to obtain scores $s_{k,v}$ for all tokens in the decoder vocabulary. To compute a score for a Copy(i:j) action, we use a linear layer $W$ to project the concatenation $r_i \| r_j$ of the (contextualised) embeddings of the respective input tokens to the same dimension as $h_k$ and then compute their inner product: $s_{k,(i,j)} = (W \cdot (r_i \| r_j)) \cdot h_k^\top$. We then concatenate all $s_{k,v}$ and $s_{k,(i,j)}$ and apply a softmax to obtain our action distribution $q(a \mid o)$. Note that for efficient computation on GPUs, we compute the $s_{k,(i,j)}$ for all $i$ and $j$ and mask all entries where $j \le i$. Algorithm 1: Python-like pseudocode of beam search for span-copying decoders.
    def beam_search(beam_size):
        beam = [{toks: [START_OF_SEQ], prob: 1}]
        out_length = 1
        while unfinished_rays(beam):
            new_rays = []
            for ray in beam:
                # Rays that are already longer than the current output length, or finished, are carried over unchanged
                if len(ray.toks) > out_length or ray.toks[-1] == END_OF_SEQ:
                    new_rays.append(ray)
                else:
                    # Expand the ray with every action and its probability under q
                    for (act, act_prob) in q(· | ray.toks):
                        new_rays.append({toks: ray.toks ‖ ⟦act⟧, prob: ray.prob * act_prob})
            # Merge rays that yield the same token sequence, then keep the best beam_size rays
            beam = top_k(group_by_toks(new_rays), k=beam_size)
            out_length += 1
        return beam

Training Objective. We train in the standard supervised sequence decoding setting, feeding the correct output sequence to the decoder independent of its decisions. We train by maximising the probability $p(o_1 \ldots o_m)$ of the ground-truth output, unrolled according to Eq. (2). One special case to note is that we make a minor but important modification to handle the generation of out-of-vocabulary words: iff the correct token can be copied from the input, Gen(UNK) is considered to be an incorrect action; otherwise only Gen(UNK) is correct. This is necessary to avoid pathological cases in which there is no action sequence that generates the target sequence correctly. Beam Decoding. Our approach to efficiently evaluating Eq. (2) at training time relies on knowledge of the ground-truth sequence, so we need to employ another approach at inference time. We use a variation of standard beam search which handles the fact that action sequences of the same length can lead to output sequences of different lengths. For this, we consider a forward version of Eq. (2) in which we assume to have a set of action sequences $A$ and compute a lower bound on the true probability of a sequence $o_1 \ldots o_k$ by considering all action sequences in $A$ that evaluate to $o_1 \ldots o_k$:
$$p(o_1 \ldots o_k) \ge \sum_{\substack{[a_1 \ldots a_n] \in A \\ [\![a_1]\!] \| \ldots \| [\![a_n]\!] = o_1 \ldots o_k}} \; \prod_{1 \le i \le n} q(a_i \mid [\![a_1]\!] \| \ldots \| [\![a_{i-1}]\!]) \quad (3)$$
If $A$ contains the set of all action sequences generating the output sequence $o_1 \ldots o_k$, Eq. (3) is an equality. At inference time, we under-approximate $A$ by generating likely action sequences using beam search. However, we have to explicitly implement the summation of the probabilities of action sequences yielding the same output sequence. This could be achieved by a final post-processing step (as in Eq. (3)), but we found that it is more effective to "merge" rays generating the same sequence during the search. In the example shown in Fig. 2, this means summing up the probabilities of (for example) the action sequences Gen(a) Gen(b) and Copy(0:2), as they generate the same output. To achieve this grouping of action sequences of different lengths, our search procedure explicitly tracks the length of the generated token sequence and "pauses" the expansion of action sequences that have generated longer outputs. We show the pseudocode for this procedure in Alg. 1 (above), where the merging of different rays generating the same output is done using group_by_toks.
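To make the marginalisation in Eq. (2) concrete, here is a minimal Python sketch that computes the log-probability of one output sequence by dynamic programming over suffixes. The helper log_q (per-step action log-probabilities from a trained decoder) is a placeholder introduced only for illustration; the real implementation batches this computation on GPU in log-space, as described above.

    import math

    def log_sum_exp(xs):
        m = max(xs)
        return m + math.log(sum(math.exp(x - m) for x in xs))

    def sequence_log_prob(inp, out, log_q):
        # Suffix DP for Eq. (2): dp[k] = log p(o_{k+1} ... o_m | o_1 ... o_k).
        # inp, out are token lists; log_q(action, prefix) returns log q(a | o_1 ... o_k).
        m = len(out)
        dp = [0.0] * (m + 1)     # dp[m] = 0 for the empty suffix (stop-token handling omitted here)
        for k in range(m - 1, -1, -1):
            terms = [log_q(("gen", out[k]), out[:k]) + dp[k + 1]]    # Gen(o_{k+1})
            for i in range(len(inp)):                                 # Copy(i:j) actions whose span
                for j in range(i + 1, len(inp) + 1):                  # matches the output at position k
                    span = inp[i:j]
                    if out[k:k + len(span)] == span:
                        terms.append(log_q(("copy", i, j), out[:k]) + dp[k + len(span)])
            dp[k] = log_sum_exp(terms)
        return dp[0]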
In this work, the authors tackle the problem of span-based copying in sequence-based neural models. In particular, they extend the standard copying techniques of (Vinyals et al., Gulcehre et al., etc.), which only allow for single-token copy actions. Their span-based copy mechanism allows multiple tokens to be copied at a time during decoding via a recursive formulation that defines the output sequence distribution as a marginal over the complete set of action combinations that result in the sequence being produced. The authors also propose a span-based beam decoding algorithm that scores output sequences via a sum over the probabilities of action sequences that produce the same output.
SP:781c51554cfd04222ef6c6c92648d1824e054ae1
This paper studies the problem of editing sequences, such as natural language or source code, by copying large spans of the original sequence. A simple baseline solution to this problem is to learn a sequence-to-sequence neural network which generates the edited sequence conditioned on the original one. This method can be improved by adding a copying mechanism, based on pointer networks, to copy tokens from the input. However, most such existing approaches can only copy one input token at a time, which is a limitation when most of the input should be copied, as is the case for most editing tasks. In this paper, the authors propose a mechanism that can copy entire spans of the input instead of just individual tokens. In that case, a particular sequence can often be generated by many different actions (e.g., copying individual tokens, pairs of tokens, or the whole span). It is thus important to marginalize over all the actions that generate a particular sequence. This can be done efficiently, using dynamic programming, if the probability of an action depends only on the generated tokens and not on the sequence of actions used to generate them. In the case of neural networks, this means that the decoder of the model takes the tokens as input, instead of the spans. To represent spans, the authors propose to use the concatenation of the hidden states corresponding to the beginning and end of the span. The probability of copying a span is then obtained by taking the dot product between this representation and the current hidden state of the decoder, and applying the softmax. The authors evaluate the proposed approach on the following tasks: code repair, grammatical error correction, and edit representations (on Wikipedia and C# code).
SP:781c51554cfd04222ef6c6c92648d1824e054ae1
PDP: A General Neural Framework for Learning SAT Solvers
1 INTRODUCTION. Constraint satisfaction problems (CSPs) (Kumar, 1992), and Boolean Satisfiability (SAT) in particular, are among the most fundamental NP-complete problems in Computer Science, with a wide range of applications from verification to planning and scheduling. There have been huge efforts in Computer Science (Biere et al., 2009a; Knuth, 2015; Nudelman et al., 2004; Ansótegui et al., 2008; 2012; Newsham et al., 2014) as well as in Physics and Information Theory (Mezard & Montanari, 2009; Krzakała et al., 2007) to both understand the theoretical aspects of SAT and develop efficient search algorithms to solve it. Furthermore, since in many real applications the problem instances are often drawn from a narrow distribution, using Machine Learning to build data-driven solvers which can learn domain-specific search strategies is a natural choice. In that vein, Machine Learning has been used for different aspects of CSP and SAT solving, from branch prediction (Liang et al., 2016) to algorithm and hyper-parameter selection (Xu et al., 2008; Hutter et al., 2011). While most of these models rely on carefully-crafted features, more recent methods have incorporated techniques from Representation Learning and particularly Geometric Deep Learning (Bronstein et al., 2017; Wu et al., 2019) to capture the underlying discrete structure of CSP (and SAT) problems. Along the latter direction, Graph Neural Networks (Li et al., 2015; Defferrard et al., 2016) have been the cornerstone of many recent deep learning approaches to CSP, e.g., the NeuroSAT framework (Selsam et al., 2019), the Circuit-SAT framework (Amizadeh et al., 2019), and Recurrent Relational Networks for Sudoku (Palm et al., 2018). These frameworks have been quite successful in capturing the inherent structure of the problem instances and embedding it into traditional vector spaces that are suitable for Machine Learning models. Nevertheless, a major issue with these pure embedding frameworks is that it is not clear how the learned model works, which in turn raises the question of whether the model actually learns to search for a solution or simply adapts to statistical biases in the training data. As an alternative, researchers have used deep neural networks within classical search frameworks for tackling combinatorial optimization problems, e.g., Khalil et al. (2017). In these hybrid, neuro-symbolic frameworks, deep learning is typically used to learn optimal search heuristics for a generic search algorithm, e.g., greedy search. Despite the clear search strategy, the performance of the resulting models is bounded by the effectiveness of the imposed strategy, which is not itself learned. In this paper, we take a middle way and propose a neural framework for learning SAT solvers which enjoys the benefits of both of the methodologies above. In particular, we take the formulation of solving CSPs as probabilistic inference (Montanari et al., 2007; Mezard & Montanari, 2009; Braunstein et al., 2005; Grover et al., 2018; Gableske et al., 2013) and propose a neural version of it that is capable of learning an efficient inference strategy (i.e., the search strategy in this context) for specific problem domains. Our general framework is a design pattern which consists of three main operations: Propagation, Decimation and Prediction, and is hence referred to as the PDP framework.
In general, these operations can be implemented either as fixed algorithms or as trainable neural networks. In this light, PDP can be seen as probabilistic inference in the latent space, and as a result it is fairly straightforward to establish how the search strategy in neural PDP works, unlike pure embedding methods. On the other hand, due to the distributed nature of its decimation component, PDP is not restricted by the greedy strategy of the classical decimation process, meaning that it can potentially learn search strategies which are not greedy. This distinguishes it from neuro-symbolic methods that are defined within a greedy strategy. Furthermore, we propose an unsupervised, fully differentiable training mechanism based on energy minimization which trains PDP directly toward solving SAT via end-to-end backpropagation. The unsupervised nature of our proposed training mechanism enables PDP to train on an (infinite) stream of unlabeled data. Our experimental results show the superiority of the PDP framework compared to both state-of-the-art neural and inference-based solvers. We further demonstrate our model's capability to adapt to problem distributions with distinct structure, which is common in industrial settings. Lastly, we take an ambitious step further and compare our neural framework against the well-known class of Conflict-Driven Clause Learning (CDCL) industrial solvers. Despite the fact that neural solvers (including ours) are still in their infancy and not really comparable to industrial SAT solvers, we were pleasantly surprised to see that our trained models could come close to the performance of the CDCL solver. This is significant in the sense that it shows that even though our neural model at this point lacks some powerful classical search techniques such as backtracking and clause learning, its adaptive neural constraint propagation strategy is nevertheless very effective. 2 RELATED WORK. Classical Machine Learning has been incorporated in solving combinatorial optimization problems (Bengio et al., 2018) and SAT in particular: from SAT classification (Xu et al., 2012) and solver selection (Xu et al., 2008) to configuration tuning (Haim & Walsh, 2009; Hutter et al., 2011; Singh et al., 2009) and branching prediction (Liang et al., 2016; Grozea & Popescu, 2014; Flint & Blaschko, 2012). More recently, however, researchers have used Deep Learning to train full-stack solvers; two main categories of Deep Learning methodologies have been proposed. In the first category, neural networks are used to embed the input CSP instances into a latent vector space where a predictive model can be trained. Palm et al. (2018) used Recurrent Relational Networks to train Sudoku solvers. Their framework relies on solutions being provided at training time, as opposed to our framework, which is completely unsupervised. Selsam et al. (2019) proposed to use Graph Neural Networks (Li et al., 2015) to embed CNF instances for the SAT classification problem. They also proposed to use a post-processing clustering approach to decode SAT solutions. In contrast, our method is fully unsupervised and is directly trained toward solving SAT. Amizadeh et al. (2019) proposed a DAG Neural Network to embed logical circuits for solving the Circuit-SAT problem. Their framework is also unsupervised, but it is mainly for circuit inputs with DAG structure.
Prates et al. (2018) proposed a convolutional embedding-based method for solving TSP. In the second category, neural networks are used to learn useful search heuristics within an algorithmic search framework, typically greedy search (Vinyals et al., 2015; Bello et al., 2016; Khalil et al., 2017), branch-and-bound search (He et al., 2014), or tree search (Li et al., 2018). While these methods enjoy a strong inductive bias in learning the optimization algorithm as well as some proof of correctness, their effectiveness is essentially bounded by the sub-optimality of the imposed search strategy. Our proposed framework effectively belongs to this category, but at the same time its performance is not bounded by any search strategy. Our framework can also be seen as learning an optimal message passing strategy on probabilistic graphical models. There have been some efforts in this direction (Ross et al., 2011; Lin et al., 2015; Heess et al., 2013; Johnson et al., 2016; Yoon et al., 2018), but most are focused merely on message passing, whereas our framework learns both optimal message passing and decimation strategies concurrently. 3 BACKGROUND. A Constraint Satisfaction Problem, denoted by $\mathrm{CSP}\langle X, C \rangle$, aims at finding an assignment to a set of $N$ discrete variables $X = \{x_i : i \in 1..N\}$, each defined on a set of discrete values $\mathcal{X}$, such that it satisfies all $M$ constraints $C = \{c_a(x_{\partial a}) : a \in 1..M\}$. Here, $\partial a$ is the subset of variable indices that constraint $c_a$ depends on; similarly, by $\partial i$ we denote the subset of constraint indices that variable $i$ participates in. Each constraint $c_a : \mathcal{X}^{|\partial a|} \to \{0, 1\}$ is a Boolean function that takes value 1 iff $x_{\partial a}$ satisfies the constraint $c_a$. In this paper, we focus on the Boolean Satisfiability problem (SAT), where the variables take values from $\mathcal{X} = \{0, 1\}$ and each constraint (or clause) is a disjunction of a subset of variables or their negations. Furthermore, any CSP instance $\mathrm{CSP}\langle X, C \rangle$ can be represented as a factor graph probabilistic graphical model $\mathrm{FG}\langle X, C \rangle$ (Koller et al., 2009). A factor graph $\mathrm{FG}\langle X, C \rangle$ is a bipartite graph where each variable $x_i$ corresponds to a variable node in $\mathrm{FG}$ and each constraint $c_a$ corresponds to a factor node in $\mathrm{FG}$. There is an edge between the $i$-th variable node and the $a$-th factor node if $i \in \partial a$. Then, one may define a measure on $\mathrm{FG}$ as:
$$P(X) = \frac{1}{Z} \prod_{a=1}^{M} \varphi_a(x_{\partial a}) \quad (1)$$
where the $\varphi_a$ are the factor functions with $\varphi_a(x_{\partial a}) := \max(c_a(x_{\partial a}), \varepsilon)$ for some very small, positive $\varepsilon$, and $Z$ is the normalization constant. In the special case of SAT, we extend the FG representation by assigning a binary attribute $e_{ia} \in \{-1, 1\}$ to each edge such that $e_{ia} = -1$ if variable $x_i$ appears negated in the clause $c_a$, and $e_{ia} = 1$ otherwise. This way the factor functions take the same functional form (that of a disjunction of literals) independent of the factor index $a$; that is, $\varphi_a(x_{\partial a}) = \varphi(x_{\partial a}, e_{\partial a})$, where $e_{\partial a}$ are all the edges connected to the $a$-th factor. Using this formalism, the solutions of the original $\mathrm{CSP}\langle X, C \rangle$ correspond to the modes of $P(X)$. Given $\mathrm{FG}\langle X, C \rangle$, one can compute the marginal distribution of each variable node by probabilistic inference on the factor graph using the Belief Propagation (BP) algorithm (aka the Sum-Product algorithm) (Koller et al., 2009). The actual optimization problem, however, can be solved by computing the max-marginals of $P(X)$ via algorithms such as Max-Product, Min-Sum (Koller et al., 2009) and Warning-Propagation (Braunstein et al., 2005).
All of the above algorithms, including BP, can be seen as special cases of the General Message Passing (GMP) algorithm on factor graphs (Mezard & Montanari, 2009), where the outgoing messages from the graph nodes are computed as a deterministic function of the incoming messages in an iterative fashion. If GMP converges, then at the fixed point the messages often contain valuable information regarding variable assignments that maximize the marginal distributions and eventually solve the CSP. In particular, the basic procedure to solve CSPs via probabilistic inference is: (1) run a specific GMP algorithm on the factor graph until convergence; (2) based on the incoming fixed-point messages to each variable node, pick the variable with the highest certainty regarding a satisfying assignment; (3) set that variable to the corresponding value, simplify the factor graph if possible, and repeat the entire process until all variables are set. We refer to this process as GMP-guided sequential decimation, or decimation for short; a minimal sketch of this loop is given below. The most famous algorithms in this class are BP-guided decimation (Montanari et al., 2007) and SP-guided decimation, based on Survey Propagation (SP) (Aurell et al., 2005; Chavas et al., 2005).
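As a concrete (non-neural) reference point, the following Python sketch shows the generic GMP-guided decimation loop for SAT; the message-passing routine itself (e.g., BP or SP updates) is left as a placeholder run_message_passing, and the signed-literal clause encoding is an assumption made only for illustration.

    def decimation_solve(n_vars, clauses, run_message_passing):
        # clauses: list of clauses, each a list of signed literals (+3 means x3, -3 means NOT x3).
        # run_message_passing: a GMP algorithm (BP, SP, ...) that, at its fixed point, returns an
        #   estimated marginal probability P(x_i = 1) for every unassigned variable index.
        assignment = {}
        active = [list(c) for c in clauses]
        while len(assignment) < n_vars:
            marginals = run_message_passing(n_vars, active, assignment)          # step (1)
            # Step (2): pick the unassigned variable whose marginal is most polarized.
            var, p = max(((v, m) for v, m in marginals.items() if v not in assignment),
                         key=lambda vm: abs(vm[1] - 0.5))
            assignment[var] = 1 if p >= 0.5 else 0                               # step (3)
            # Simplify the factor graph: drop satisfied clauses, shrink the rest.
            simplified = []
            for clause in active:
                if any(abs(l) == var and (l > 0) == bool(assignment[var]) for l in clause):
                    continue                      # clause satisfied by the new assignment
                reduced = [l for l in clause if abs(l) != var]
                if not reduced:
                    return None                   # empty clause: contradiction, decimation failed
                simplified.append(reduced)
            active = simplified
        return assignment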
This paper investigates the well-studied problem of solving satisfiability problems using deep learning approaches. In this setting, the authors propose a neural architecture inspired by message passing operations in probabilistic graphical models. Namely, the architecture takes as input a CNF formula represented as a factor graph, and returns as output a set of soft assignments for the variable nodes in the graph. The internal layers of the architecture consist of propagation, decimation and prediction steps. Notably, decimation operations play an important role in learning non-greedy search strategies. Besides the PDP operations, the architecture incorporates parallelization and batch replication techniques. The learning model is trained in an unsupervised way, using a cumulative (discounted) log-likelihood loss that penalizes the non-satisfying assignments returned by the algorithm.
SP:d58f24e421f2022d85f95291fcf910262a76f590
PDP: A General Neural Framework for Learning SAT Solvers
1 INTRODUCTION . Constraint satisfaction problems ( CSP ) ( Kumar , 1992 ) and Boolean Satisfiability ( SAT ) , in particular , are among the most fundamental NP-complete problems in Computer Science , with a wide range of applications from verification to planning and scheduling . There have been huge efforts in Computer Science ( Biere et al. , 2009a ; Knuth , 2015 ; Nudelman et al. , 2004 ; Ansótegui et al. , 2008 ; 2012 ; Newsham et al. , 2014 ) as well as Physics and Information Theory ( Mezard & Montanari , 2009 ; Krzakała et al. , 2007 ) to both understand the theoretical aspects of SAT and develop efficient search algorithms to solve it . Furthermore , since in many real applications the problem instances are often drawn from a narrow distribution , using Machine Learning to build data-driven solvers which can learn domain-specific search strategies is a natural choice . In that vein , Machine Learning has been used for different aspects of CSP and SAT solving , from branch prediction ( Liang et al. , 2016 ) to algorithm and hyper-parameter selection ( Xu et al. , 2008 ; Hutter et al. , 2011 ) . While most of these models rely on carefully-crafted features , more recent methods have incorporated techniques from Representation Learning and particularly Geometric Deep Learning ( Bronstein et al. , 2017 ; Wu et al. , 2019 ) to capture the underlying discrete structure of the CSP ( and SAT ) problems . Along the latter direction , Graph Neural Networks ( Li et al. , 2015 ; Defferrard et al. , 2016 ) have been the cornerstone of many recent deep learning approaches to CSP – e.g. , the NeuroSAT framework ( Selsam et al. , 2019 ) , the Circuit-SAT framework ( Amizadeh et al. , 2019 ) , and Recurrent Relational Networks for Sudoku ( Palm et al. , 2018 ) . These frameworks have been quite successful in capturing the inherent structure of the problem instances and embedding it into traditional vector spaces that are suitable for Machine Learning models . Nevertheless , a major issue with these pure embedding frameworks is that it is not clear how the learned model works , which in turn raises the question of whether the model actually learns to search for a solution or simply adapts to statistical biases in the training data . As an alternative , researchers have used deep neural networks within classical search frameworks for tackling combinatorial optimization problems – e.g . Khalil et al . ( 2017 ) . In these hybrid , neuro-symbolic frameworks , deep learning is typically used to learn optimal search heuristics for a generic search algorithm – e.g . greedy search . Despite the clear search strategy , the performance of the resulting models is bounded by the effectiveness of the imposed strategy , which is not learned . In this paper , we take a middle way and propose a neural framework for learning SAT solvers which enjoys the benefits of both of the methodologies above . In particular , we take the formulation of solving CSPs as probabilistic inference ( Montanari et al. , 2007 ; Mezard & Montanari , 2009 ; Braunstein et al. , 2005 ; Grover et al. , 2018 ; Gableske et al. , 2013 ) and propose a neural version of it that is capable of learning an efficient inference strategy ( i.e . the search strategy in this context ) for specific problem domains . Our general framework is a design pattern which consists of three main operations : Propagation , Decimation and Prediction , and is hence referred to as the PDP framework .
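As a deliberately simplified illustration of this design pattern ( our own sketch ; the operation implementations , names , and the exact loss are assumptions , not taken from the paper ) , the snippet below wires the three operations together and shows one plausible fully differentiable , unsupervised clause-satisfaction loss on soft assignments :

```python
import numpy as np

def soft_unsat_loss(p, clauses, eps=1e-9):
    """p[i]: predicted probability that variable i is 1; a clause is a list of
    (variable index, sign) pairs with sign in {-1, +1}.  The loss penalizes the
    probability that each clause is violated (all of its literals false)."""
    loss = 0.0
    for clause in clauses:
        p_violated = 1.0
        for i, e in clause:
            p_lit_true = p[i] if e > 0 else 1.0 - p[i]
            p_violated *= 1.0 - p_lit_true
        loss += -np.log(1.0 - p_violated + eps)
    return loss

def pdp_iteration(state, clauses, propagate, decimate, predict, T=20):
    """Outer loop of the design pattern: the three operations may be fixed
    algorithms or trainable networks (here they are abstract callables)."""
    for _ in range(T):
        state = propagate(state, clauses)   # exchange messages on the factor graph
        state = decimate(state, clauses)    # softly commit to confident variables
    return predict(state)                   # map the latent state to soft assignments

if __name__ == "__main__":
    cnf = [[(0, +1), (1, +1)], [(0, -1), (2, +1)]]
    print(soft_unsat_loss(np.array([0.9, 0.2, 0.8]), cnf))  # low loss: both clauses likely satisfied
    print(soft_unsat_loss(np.array([0.9, 0.1, 0.1]), cnf))  # higher loss: second clause likely violated
```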
In general , these operations can be implemented either as fixed algorithms or as trainable neural networks . In this light , PDP can be seen as probabilistic inference in the latent space , and as a result , it is somewhat straightforward to establish how the search strategy in the neural PDP works , unlike pure embedding methods . On the other hand , due to the distributed nature of its decimation component , PDP is not restricted by the greedy strategy of the classical decimation process , meaning that it can potentially learn search strategies which are not greedy . This distinguishes it from the neuro-symbolic methods that are defined within the greedy strategy . Furthermore , we propose an unsupervised , fully differentiable training mechanism based on energy minimization which trains PDP directly toward solving SAT via end-to-end backpropagation . The unsupervised nature of our proposed training mechanism enables PDP to train on an ( infinite ) stream of unlabeled data . Our experimental results show the superiority of the PDP framework compared to both state-of-the-art neural and inference-based solvers . We further demonstrate our model ’ s capability of adapting to problem distributions with distinct structure , which is common in industrial settings . Lastly , we take an ambitious step further and compare our neural framework against the well-known class of Conflict-Driven Clause Learning ( CDCL ) industrial solvers . Despite the fact that neural solvers ( including ours ) are still in their infancy and not really comparable to industrial SAT solvers , we were pleasantly surprised to see that our trained models could come close to the performance of the CDCL solver . This is significant in the sense that it shows that even though our neural model at this point lacks some powerful classical search techniques such as backtracking and clause learning , its adaptive neural constraint propagation strategy is nevertheless very effective . 2 RELATED WORK . Classical Machine Learning has been incorporated into solving combinatorial optimization problems ( Bengio et al. , 2018 ) and SAT in particular : from SAT classification ( Xu et al. , 2012 ) and solver selection ( Xu et al. , 2008 ) to configuration tuning ( Haim & Walsh , 2009 ; Hutter et al. , 2011 ; Singh et al. , 2009 ) and branching prediction ( Liang et al. , 2016 ; Grozea & Popescu , 2014 ; Flint & Blaschko , 2012 ) . However , more recently , researchers have used Deep Learning to train full-stack solvers . There are two main categories of Deep Learning methodologies proposed recently . In the first category , neural networks are used to embed the input CSP instances into a latent vector space where a predictive model can be trained . Palm et al . ( 2018 ) used Recurrent Relational Networks to train Sudoku solvers . Their framework relies on solutions being provided at training time , as opposed to our framework , which is completely unsupervised . Selsam et al . ( 2019 ) proposed to use Graph Neural Networks ( Li et al. , 2015 ) to embed CNF instances for the SAT classification problem . They also proposed to use a post-processing clustering approach to decode SAT solutions . In contrast , our method is fully unsupervised and is directly trained toward solving SAT . Amizadeh et al . ( 2019 ) proposed a DAG Neural Network to embed logical circuits for solving the Circuit-SAT problem . Their framework is also unsupervised , but it is mainly for circuit inputs with DAG structure . Prates et al .
( 2018 ) proposed a convolutional embedding-based method for solving TSP .
The authors develop an unsupervised method for solving SAT problems. The method consists of an energy-based loss function which is optimized by a three-stage architecture that performs propagation, decimation, and prediction (PDP). The authors show that on uniform random 4-SAT problems, their PDP system outperforms two classical methods and a prior neural method, and performs favorably in comparison to a heavily developed industrial solver. Further, they show that a PDP system trained on modular 4-SAT problems performs better on modular 4-SAT problems than one trained on uniform random 4-SAT problems.
SP:d58f24e421f2022d85f95291fcf910262a76f590
Online and stochastic optimization beyond Lipschitz continuity: A Riemannian approach
Motivated by applications to machine learning and imaging science , we study a class of online and stochastic optimization problems with loss functions that are not Lipschitz continuous ; in particular , the loss functions encountered by the optimizer could exhibit gradient singularities or be singular themselves . Drawing on tools and techniques from Riemannian geometry , we examine a Riemann–Lipschitz ( RL ) continuity condition which is tailored to the singularity landscape of the problem ’ s loss functions . In this way , we are able to tackle cases beyond the Lipschitz framework provided by a global norm , and we derive optimal regret bounds and last iterate convergence results through the use of regularized learning methods ( such as online mirror descent ) . These results are subsequently validated in a class of stochastic Poisson inverse problems that arise in imaging science . 1 Introduction The surge of recent breakthroughs in machine learning and artificial intelligence has reaffirmed the prominence of first-order methods in solving large-scale optimization problems . One of the main reasons for this is that the computation of higher-order derivatives of functions with thousands – if not millions – of variables quickly becomes prohibitive ; another is that gradient calculations are typically easier to distribute and parallelize , especially in large-scale problems . In view of this , first-order methods have met with prolific success in many diverse fields , from machine learning and signal processing to wireless communications , nuclear medicine , and many others [ 10 , 34 , 37 ] . This success is especially pronounced in the field of online optimization , i.e. , when the optimizer faces a sequence of time-varying loss functions ft , t = 1 , 2 , . . . , one at a time – for instance , when drawing different sample points from a large training set [ 11 , 35 ] . In this general framework , first-order methods have proven extremely flexible and robust , and the attained performance guarantees are well known to be optimal [ 1 , 11 , 35 ] . Specifically , if the optimizer faces a sequence of G-Lipschitz convex losses , the incurred min-max regret after T rounds is Ω ( GT 1/2 ) , and this bound can be achieved by inexpensive first-order methods – such as online mirror descent and its variants [ 11 , 35 , 36 , 41 ] . Nevertheless , in many machine learning problems ( support vector machines , Poisson inverse problems , quantum tomography , etc . ) , the loss landscape is not Lipschitz continuous , so the results mentioned above do not apply . Thus , a natural question that emerges is the following : Is it possible to apply online optimization tools and techniques beyond the standard Lipschitz framework ? And , if so , how ? Our approach and contributions . Our point of departure is the observation that Lipschitz continuity is a property of metric spaces – not normed spaces . Indeed , in convex optimization , Lipschitz continuity is typically stated in terms of a global norm ( e.g. , the Euclidean norm ) , but such a norm is de facto independent of the point in space at which it is calculated . Because of this , the standard Lipschitz framework is oblivious to the finer aspects of the problem ’ s loss landscape – and , in particular , any singularities that may arise at the boundary of the problem ’ s feasible region . 
On the other hand , in general metric spaces , this is no longer the case : the distance between two points is no longer given by a global norm , so it is much more sensitive to the geometry of the feasible region . For this reason , if the ( Riemannian ) distance dist ( x , x′ ) between two points x and x′ becomes larger and larger as the points approach the boundary of the feasible region , a condition of the form | f ( x ) − f ( x′ ) | = O ( dist ( x′ , x ) ) may still hold even if f becomes singular at the boundary . We leverage this observation by introducing the notion of Riemann–Lipschitz ( RL ) continuity , an extension of “ vanilla ” Lipschitz continuity to general spaces endowed with a Riemannian metric . We show that this metric can be chosen in a principled manner based solely on the singularity landscape of the problem ’ s loss functions – i.e. , their growth rate at infinity and/or the boundary of the feasible region . Subsequently , using a similar mechanism to choose a Riemannian regularizer , we provide an optimal O ( T^{1/2} ) regret guarantee through the use of regularized learning methods – namely , “ follow the regularized leader ” ( FTRL ) and online mirror descent ( OMD ) . Our second contribution concerns an extension of this framework to stochastic programming . First , in the context of stochastic convex optimization , we show that an online-to-batch conversion yields an O ( T^{−1/2} ) value convergence rate . Second , motivated by applications to nonconvex stochastic programming ( where averaging is not a priori beneficial ) , we also establish the convergence of the method ’ s last iterate in a class of nonconvex problems satisfying a weak secant inequality . Finally , we supplement our theoretical analysis with numerical experiments in Poisson inverse problems . Related work . To the best of our knowledge , the first treatment of a similar question was undertaken by Bauschke et al . [ 3 ] , who focused on deterministic , offline convex programs ( ft = f for all t ) without a Lipschitz smoothness assumption ( i.e. , Lipschitz continuity of the gradient , as opposed to Lipschitz continuity of the objective ) . To tackle this issue , Bauschke et al . [ 3 ] introduced a second-order “ Lipschitz-like ” condition of the form ∇2 f ⪯ β ∇2 h for some suitable Bregman function h , and they showed that Bregman proximal methods achieve an O ( 1/T ) value convergence rate in offline convex problems with perfect gradient feedback . Still in the context of deterministic optimization , Bolte et al . [ 8 ] extended the results of Bauschke et al . [ 3 ] to unconstrained non-convex problems and established trajectory convergence to critical points for functions satisfying the Kurdyka–Łojasiewicz ( KL ) inequality . In a slightly different vein , Lu et al . [ 25 ] considered functions that are also strongly convex relative to the Bregman function defining the Lipschitz-like condition for the gradients , and they showed that mirror descent achieves a geometric convergence rate in this context . Finally , in a very recent preprint , Hanzely et al . [ 17 ] examined the rate of convergence of an accelerated variant of mirror descent under the same Lipschitz-like smoothness assumption . Importantly , all these works concern offline , deterministic optimization problems with perfect gradient feedback and regularity assumptions that cannot be exploited in an online optimization setting ( such as the KL inequality ) .
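For intuition , here is a small self-contained numeric check of this mechanism ( our own example , not one from the paper ) : on X = ( 0 , 1 ] , the loss f ( x ) = −log x is not Lipschitz in the Euclidean sense , but under the Riemannian metric g ( x ) = 1/x² the distance dist ( x , x′ ) = | log x − log x′ | blows up at the boundary , and | f ( x ) − f ( x′ ) | equals that distance exactly , i.e . f is RL-continuous with constant 1 .

```python
import numpy as np

def riemann_dist(x, xp, n=20001):
    """Length of the segment [x, xp] under the metric g(t) = 1/t**2 (manual trapezoid rule)."""
    ts = np.geomspace(x, xp, n)          # geometric spacing handles the 1/t singularity well
    y = 1.0 / ts                         # sqrt(g(t))
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(ts)))

f = lambda x: -np.log(x)
for x, xp in [(1e-3, 0.5), (1e-2, 1.0), (0.2, 0.9)]:
    lhs, rhs, euc = abs(f(x) - f(xp)), riemann_dist(x, xp), abs(x - xp)
    print(f"|f(x)-f(x')| = {lhs:8.4f}   RL dist = {rhs:8.4f}   |x-x'| = {euc:6.4f}")
```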
Beyond offline , deterministic optimization problems , Lu [ 24 ] established the ergodic convergence of mirror descent in stochastic non-adversarial convex problems under a “ relative continuity ” condition of the form ‖∇f ( x ) ‖ ≤ G inf_{x′} √( 2D ( x′ , x ) ) / ‖x′ − x‖ ( with D denoting the divergence of an underlying “ reference ” Bregman function h ) . More recently , Hanzely and Richtárik [ 16 ] examined the performance of stochastic mirror descent under a combination of relative strong convexity and relative smoothness / Lipschitz-like conditions , and established a series of convergence rate guarantees that mirror the corresponding rates for ordinary ( Euclidean ) stochastic gradient descent . Except for trivial cases , these conditions are not related to Riemann–Lipschitz continuity , so there is no overlap in our results or our methodology . Finally , in a very recent paper , Bécigneul and Ganea [ 5 ] established the convergence of a class of adaptive Riemannian methods in geodesically convex problems ( extending in this way classical results for AdaGrad to a manifold setting ) . Importantly , the Riemannian methodology of [ 5 ] involves the exponential mapping of the underlying metric and focuses on geodesic convexity , so it concerns an orthogonal class of problems . The only overlap would be in the case of flat Riemannian manifolds : however , even though the manifolds we consider here are topologically simple , they are not flat . In view of this , there is no overlap with the analysis and results of [ 5 ] . 2 Problem setup We begin by presenting the core online optimization framework that we will consider throughout the rest of our paper . This can be described by the following sequence of events : 1 . At each round t = 1 , 2 , . . . , the optimizer chooses an action Xt from a convex – but not necessarily closed or compact – subset X of an ambient d-dimensional normed space V . 2 . The optimizer incurs a loss ft ( Xt ) based on some ( unknown ) convex loss function ft : X → ℝ . 3 . The optimizer updates their action and the process repeats . Remark 1 . For posterity , we note that if X is not closed , ft ( or its derivatives ) could become singular at a residual point x ∈ bd ( X ) \ X ; in particular , we do not assume here that ft admits a smooth extension to the closure cl ( X ) of X ( or even that it is bounded over bounded subsets of X ) . In this broad framework , the most widely used figure of merit is the minimization of the agent ’ s regret . Formally , the regret of a policy Xt ∈ X , t = 1 , 2 , . . . , is defined as Reg_x ( T ) = ∑_{t=1}^{T} [ ft ( Xt ) − ft ( x ) ] , ( 1 ) for all x ∈ X . We then say that the policy Xt leads to no regret if Reg_x ( T ) = o ( T ) for all x ∈ X . In addition to convexity , the standard assumption in the literature for the problem ’ s loss functions is Lipschitz continuity , i.e. , | ft ( x′ ) − ft ( x ) | ≤ Gt ‖x′ − x‖ ( LC ) for some Gt ≥ 0 , t = 1 , 2 , . . . , and for all x , x′ ∈ X . Under ( LC ) , if the agent observes at each stage t an element vt of ∂ft ( Xt ) , straightforward online policies based on gradient descent enjoy a bound of the form Reg_x ( T ) = O ( Ḡ_T T^{1/2} ) , with Ḡ_T^2 = T^{−1} ∑_{t=1}^{T} Gt^2 [ 11 , 35 , 41 ] . In particular , if G ≡ lim sup_{T→∞} Ḡ_T < ∞ ( e.g. , if each ft is G-Lipschitz continuous over X ) , we have the bound Reg_x ( T ) = O ( G T^{1/2} ) ( 2 ) which is well known to be min-max optimal in this setting [ 1 ] . A note on notation .
Throughout our paper , we make a clear distinction between V and its dual , and we use Dirac ’ s notation 〈v|x〉 for the duality pairing between v ∈ V∗ and x ∈ V ( not to be confused with the notation 〈· , ·〉 for a scalar product on V ) . Also , unless mentioned otherwise , all notions of boundary and interior should be interpreted in the relative ( as opposed to topological ) sense . We also make the blanket assumption that the subdifferential ∂ft of ft admits a continuous selection ∇ft ( x ) ∈ ∂ft ( x ) for all x ∈ dom ∂ft ≡ { x ∈ X : ∂ft ( x ) ≠ ∅ } .
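To ground the regret bookkeeping above , the following minimal sketch runs textbook entropic online mirror descent ( exponentiated gradient ) on the simplex with bounded linear losses — i.e . the standard ( LC ) setting with a global norm , not the Riemannian variant developed in this paper — and reports the regret against the best fixed action in hindsight ; all names and the step-size choice are our own :

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 10, 5000
V = rng.uniform(-1.0, 1.0, size=(T, d))   # linear loss vectors f_t(x) = <V[t], x>, so G = O(1)
eta = np.sqrt(np.log(d) / T)              # the usual ~ 1/sqrt(T) step size

x = np.full(d, 1.0 / d)                   # start at the barycenter of the simplex
losses = []
for t in range(T):
    losses.append(V[t] @ x)
    x = x * np.exp(-eta * V[t])           # entropic mirror (multiplicative-weights) step
    x /= x.sum()                          # normalize back onto the simplex

best_fixed = V.sum(axis=0).min()          # best single action in hindsight (a simplex vertex)
regret = sum(losses) - best_fixed
print(f"T = {T}, regret = {regret:.1f}, regret / sqrt(T) = {regret / np.sqrt(T):.2f}")
```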
The paper establishes optimal regret bounds of the order O(\sqrt{T}) for Follow The Regularised Leader (FTRL) and Online Mirror Descent (OMD) for convex loss functions and potentials (a.k.a. Riemannian regularizers) that are, respectively, Lipschitz continuous and strongly convex with respect to a given Riemannian metric. These conditions naturally generalize the classical conditions typically considered in the literature, which are defined with respect to a global norm and, as such, are not well-suited to problems where the loss functions and its gradient present singularities at the boundary of the feasibility region. The authors suggest a principled way to choose both the Riemannian metric and the potential function based on the singularity landscape of the gradient of the loss function. Via standard online-to-batch conversion, the authors also address the offline setting and give O(1/\sqrt{T}) error bounds for ergodic averages in convex problems and for last iterates in non-convex problems satisfying a weak secant inequality. The authors include numerical experiments involving a Poisson inverse problem.
SP:893cf4309c06b75e6891831e684f59e4806d35b3
Online and stochastic optimization beyond Lipschitz continuity: A Riemannian approach
This paper investigates online and stochastic convex optimization problems in which the objective function is not Lipschitz continuous. The originality of this study lies in the use of Riemannian geometry. Specifically, the standard condition of Lipschitz continuity is replaced with a more general condition involving Riemannian distances and called Riemann–Lipschitz Continuity (RLC). Based on an appropriate definition of a Riemannian regularizer and a generalization of Fenchel coupling to Riemannian geometry, the authors provide $O(\sqrt T)$ regret (resp. risk) bounds for the online (resp. stochastic) mirror descent algorithm, under the Riemann–Lipschitz condition. The performance of the algorithm is validated on Poisson inverse problems.
SP:893cf4309c06b75e6891831e684f59e4806d35b3
Economy Statistical Recurrent Units For Inferring Nonlinear Granger Causality
Granger causality is a widely-used criterion for analyzing interactions in large-scale networks . As most physical interactions are inherently nonlinear , we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series measurements . Our proposed approach relies on modeling the embedded nonlinearities in the measurements using a component-wise time series prediction model based on Statistical Recurrent Units ( SRUs ) . We make a case that the network topology of Granger causal relations is directly inferrable from a structured sparse estimate of the internal parameters of the SRU networks trained to predict the processes ’ time series measurements . We propose a variant of SRU , called economy-SRU , which , by design , has considerably fewer trainable parameters and is therefore less prone to overfitting . The economy-SRU computes a low-dimensional sketch of its high-dimensional hidden state in the form of random projections to generate the feedback for its recurrent processing . Additionally , the internal weight parameters of the economy-SRU are strategically regularized in a group-wise manner to facilitate the proposed network in extracting meaningful predictive features that are highly time-localized to mimic real-world causal events . Extensive experiments are carried out to demonstrate that the proposed economy-SRU based time series prediction model outperforms the MLP , LSTM and attention-gated CNN-based time series models considered previously for inferring Granger causality . 1 INTRODUCTION . The physical mechanisms behind the functioning of any large-scale system can be understood in terms of the networked interactions between the underlying system processes . Granger causality is one widely-accepted criterion used in building network models of interactions between large ensembles of stochastic processes . While Granger causality may not necessarily imply true causality , it has proven effective in qualifying pairwise interactions between stochastic processes in a variety of system identification problems , e.g. , gene regulatory network mapping ( Fujita et al . ( 2007 ) ) , and the mapping of the human brain connectome ( Seth et al . ( 2015 ) ) . This perspective has given rise to the canonical problem of inferring pairwise Granger causal relationships between a set of stochastic processes from their time series measurements . At present , the vast majority of Granger causal inference methods adopt a model-based inference approach whereby the measured time series data is modeled using a suitable parameterized data generative model whose inferred parameters ultimately reveal the true topology of pairwise Granger causal relationships . Such methods typically rely on using linear regression models for inference . However , as illustrated in the classical bivariate example by Baek & Brock ( 1992 ) , linear model-based Granger causality tests can fail catastrophically in the presence of even mild nonlinearities in the measurements , thus making a strong case for our work , which tackles the nonlinearities in the measurements by exploring new generative models of the time series measurements based on recurrent neural networks . 2 PROBLEM FORMULATION . Consider a multivariate dynamical system whose evolution from an initial state is fully characterized by n distinct stochastic processes which can potentially interact nonlinearly among themselves .
Our goal here is to unravel the unknown nonlinear system dynamics by mapping the entire network of pairwise interactions between the system-defining stochastic processes , using Granger causality as the qualifier of the individual pairwise interactions . In order to detect the pairwise Granger causal relations between the stochastic processes , we assume access to their concurrent , uniformly-sampled measurements presented as an n-variate time series x = { xt : t ∈ N } ⊂ Rn . Let xt , i denote the ith component of the n-dimensional vector measurement xt , representing the measured value of process i at time t. Motivated by the framework proposed in Tank et al . ( 2017 ) , we assume that the measurement samples xt , t ∈ N are generated sequentially according to the following nonlinear , component-wise autoregressive model : xt , i = fi ( xt−p : t−1 , 1 , xt−p : t−1 , 2 , . . . , xt−p : t−1 , n ) + et , i , i = 1 , 2 , . . . , n , ( 1 ) where xt−p : t−1 , j : = { xt−1 , j , xt−2 , j , . . . , xt−p , j } represents the most recent p measurements of the jth component of x in the immediate past relative to the current time t. The scalar-valued component generative function fi captures all of the linear and nonlinear interactions between the n stochastic processes up to time t − 1 that decide the measured value of the ith stochastic process at time t. The residual et , i encapsulates the combined effect of all instantaneous and exogenous factors influencing the measurement of process i at time t , as well as any imperfections in the presumed model . Equation 1 may be viewed as a generalization of the linear vector autoregressive ( VAR ) model in the sense that the components of x can be nonlinearly dependent on one another across time . The value p is loosely interpreted to be the order of the above nonlinear autoregressive model . 2.1 GRANGER CAUSALITY IN NONLINEAR DYNAMICAL SYSTEMS . We now proceed to interpret Granger causality in the context of the above component-wise time series model . Recalling the standard definition by Granger ( 1969 ) , a time series v is said to Granger cause another time series u if the past of v contains new information above and beyond the past of u that can improve the predictions of current or future values of u . For x with its n components generated according to equation 1 , the concept of Granger causality can be extended as suggested by Tank et al . ( 2018 ) as follows . We say that series j does not Granger cause series i if the component-wise generative function fi does not depend on the past measurements in series j , i.e. , for all t ≥ 1 and all distinct pairs xt−p : t−1 , j and x′t−p : t−1 , j , fi ( xt−p : t−1 , 1 , . . . , xt−p : t−1 , j , . . . , xt−p : t−1 , n ) = fi ( xt−p : t−1 , 1 , . . . , x′t−p : t−1 , j , . . . , xt−p : t−1 , n ) . ( 2 ) From equation 1 , it is immediately evident that under the constraint in equation 2 , the past of series j does not assert any causal influence on series i , in alignment with the core principle behind Granger causality . Based on the above implication of equation 2 , the detection of Granger noncausality between the components of x translates to identifying those components of x whose past is irrelevant to the functional description of each individual fi featured in equation 1 . Note that any reliable inference of pairwise Granger causality between the components of x is feasible only if there are no unobserved confounding factors in the system which could potentially influence x .
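As a quick , hypothetical illustration of the generative model ( 1 ) ( our own toy example , not one of the paper ’ s benchmarks ) , the snippet below simulates three series with a known causal structure — series 0 is autonomous , series 0 drives series 1 , and series 1 drives series 2 , with every series also depending on its own past — which is exactly the kind of ground-truth adjacency a Granger inference method should recover :

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, T = 3, 2, 1000
x = np.zeros((T, n))
x[:p] = rng.normal(size=(p, n))

for t in range(p, T):
    past = x[t - p:t]                          # the p most recent samples x_{t-p:t-1}
    x[t, 0] = 0.6 * np.tanh(past[-1, 0]) + 0.1 * rng.normal()                      # own past only
    x[t, 1] = 0.5 * np.sin(past[-1, 0]) - 0.4 * past[-2, 1] + 0.1 * rng.normal()   # driven by series 0
    x[t, 2] = 0.7 * np.tanh(past[-1, 1] * past[-2, 2]) + 0.1 * rng.normal()        # driven by series 1

# Ground-truth adjacency: entry [i, j] = 1 iff series j Granger causes series i
A_true = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [0, 1, 1]])
print(x.shape)
print(A_true)
```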
In this work , we assume that the system of interest is causally sufficient ( Spirtes & Zhang ( 2016 ) ) , i.e. , none of the n stochastic processes ( whose measurements are available ) have a common Granger-causing-ancestor that is unobserved . 2.2 INFERRING GRANGER CAUSALITY USING COMPONENT-WISE RECURRENT MODELS . We undertake a model-based inference approach wherein the time series measurements are used as observations to learn an autoregressive model which is anatomically similar to the componentwise generative model described in equation 1 except for the unknown functions fi replaced with their respective parameterized approximations denoted by gi . Let Θi , 1 ≤ i ≤ n denote the complete set of parameters encoding the functional description of the approximating functions { gi } ni=1 . Then , the pairwise Granger causality between series i and the components of x is deduced from Θi which is estimated by fitting gi ’ s output to the ordered measurements in series i . Specifically , if the estimated Θi suggests that gi ’ s output is independent of the past measurements in series j , then we declare that series j is Granger noncausal for series i . We aim to design the approximation function gi to be highly expressive and capable of well-approximating any intricate causal coupling between the components of x induced by the component-wise function fi , while simultaneously being easily identifiable from underdetermined measurements . By virtue of their universal approximation property ( Schäfer & Zimmermann ( 2006 ) ) , recurrent neural networks or RNNs are a particularly ideal choice for gi towards inferring the pairwise Granger causal relationships in x . In this work , we investigate the use of a special type of RNN called the statistical recurrent unit ( SRU ) for inferring pairwise Granger causality between multiple nonlinearly interacting stochastic processes . Introduced by Oliva et al . ( 2017 ) , an SRU is a highly expressive recurrent neural network designed specifically for modeling multivariate time series data with complex-nonlinear dependencies spanning multiple time lags . Unlike the popular gated RNNs ( e.g. , long short-term memory ( LSTM ) ( Hochreiter & Schmidhuber ( 1997 ) ) and gated recurrent unit ( GRU ) ) ( Chung et al . ( 2014 ) ) , the SRU ’ s design is completely devoid of the highly nonlinear sigmoid gating functions and thus less affected by the vanishing/exploding gradient issue during training . Despite its simpler ungated architecture , an SRU can model both short and long-term temporal dependencies in a multivariate time series . It does so by maintaining multi-time scale summary statistics of the time series data in the past , which are preferentially sensitive to different older portions of the time series x . By taking appropriate linear combinations of the summary statistics at different time scales , an SRU is able to construct predictive causal features which can be both highly component-specific and lag-specific at the same time . From the causal inference perspective , this dual-specificity of the SRU ’ s predictive features is its most desirable feature , as one would argue that causal effects in reality also tend to be highly localized in both space and time . The main contributions of this paper can be summarized as follows : 1 . We propose the use of statistical recurrent units ( SRUs ) for detecting pairwise Granger causality between the nonlinearly interacting stochastic processes . 
We show that the entire network of pairwise Granger causal relationships can be inferred directly from the regularized block-sparse estimate of the input-layer weight parameters of the SRUs trained to predict the time series measurements of the individual processes . 2 . We propose a modified SRU architecture called economy SRU , or eSRU in short . The first of the two proposed modifications is aimed at substantially reducing the number of trainable parameters in the standard SRU model without sacrificing its expressiveness . The second modification entails regularizing the SRU ’ s internal weight parameters to enhance the interpretability of its learned predictive features . Compared to the standard SRU , the proposed eSRU model is considerably less likely to overfit the time series measurements . 3 . We conduct extensive numerical experiments to demonstrate that eSRU is a compelling model for inferring pairwise Granger causality . The proposed model is found to outperform the multi-layer perceptron ( MLP ) , LSTM and attention-gated convolutional neural network ( AG-CNN ) based models considered in the earlier works . 3 PROPOSED GRANGER CAUSAL INFERENCE FRAMEWORK . In the proposed scheme , each of the unknown generative functions fi , 1 ≤ i ≤ n , in the presumed component-wise model of x in ( 1 ) is individually approximated by a distinct SRU network . The ith SRU network sequentially processes the time series measurements x and outputs a next-step prediction sequence x̂_i^+ = { x̂_{i,2} , x̂_{i,3} , . . . , x̂_{i,t+1} , . . . } ⊂ R , where x̂_{i,t+1} denotes the predicted value of component series i at time t + 1 . The prediction x̂_{i,t+1} is computed in a recurrent fashion by combining the current input sample xt at time t with the summary statistics of past samples of x up to and including time t − 1 , as illustrated in Figure 1 . The following update equations describe the sequential processing of the input time series x within the ith SRU network in order to generate a prediction of x_{i,t+1} : Feedback : r_{i,t} = h ( W_r^{(i)} u_{i,t−1} + b_r^{(i)} ) ∈ R^{d_r} . ( 3a ) Recurrent statistics : φ_{i,t} = h ( W_in^{(i)} xt + W_f^{(i)} r_{i,t−1} + b_in^{(i)} ) ∈ R^{d_φ} . ( 3b ) Multi-scale summary statistics : u_{i,t} = [ ( u_{i,t}^{α1} )^T ( u_{i,t}^{α2} )^T . . . ( u_{i,t}^{αm} )^T ]^T ∈ R^{m d_φ} , αj ∈ A , ∀j . ( 3c ) Single-scale summary statistics : u_{i,t}^{αj} = ( 1 − αj ) u_{i,t−1}^{αj} + αj φ_{i,t} ∈ R^{d_φ} , αj ∈ [ 0 , 1 ] . ( 3d ) Output features : o_{i,t} = h ( W_o^{(i)} u_{i,t} + b_o^{(i)} ) ∈ R^{d_o} . ( 3e ) Output prediction : x̂_{i,t+1} = ( w_y^{(i)} )^T o_{i,t} + b_y^{(i)} ∈ R . ( 3f ) The function h in the above updates is the elementwise Rectified Linear Unit ( ReLU ) operator , h ( · ) : = max ( · , 0 ) , which serves as the nonlinear activation in the three dedicated single layer neural networks that generate the recurrent statistics φ_{i,t} , the feedback r_{i,t} and the output features o_{i,t} in the ith SRU network . In order to generate the next-step prediction of series i at time t , the ith SRU network first prepares the feedback r_{i,t} by nonlinearly transforming its last hidden state u_{i,t−1} . As stated in equation 3a , a single layer ReLU network parameterized by weight matrix W_r^{(i)} and bias b_r^{(i)} maps the hidden state u_{i,t−1} to the feedback r_{i,t} . Another single layer ReLU network parameterized by weight matrices W_in^{(i)} , W_f^{(i)} and bias b_in^{(i)} takes the input xt and the feedback r_{i,t} and transforms them into the recurrent statistics φ_{i,t} as described in equation 3b .
Equation 3d describes how the network ’ s multi-timescale hidden states u_{i,t}^α for α ∈ A = { α1 , α2 , . . . , αm } ⊂ [ 0 , 1 ] are updated in parallel by taking exponentially weighted moving averages of the recurrent statistics φ_{i,t} corresponding to m different scales in A . A third single layer ReLU network parameterized by W_o^{(i)} and b_o^{(i)} transforms the concatenated multi-timescale summary statistics u_{i,t} = [ ( u_{i,t}^{α1} )^T ( u_{i,t}^{α2} )^T . . . ( u_{i,t}^{αm} )^T ]^T to generate the nonlinear causal features o_{i,t} which , according to Oliva et al . ( 2017 ) , are arguably highly sensitive to the input time series measurements at specific lags . Finally , the network generates the next-step prediction of series i as x̂_{i,t+1} by linearly combining the nonlinear output features in o_{i,t} , as depicted in equation 3f . For values of scale α ≈ 1 , the single-scale summary statistic u_{i,t}^α in equation 3d is more sensitive to the recent past measurements in x . On the other hand , α ≈ 0 yields a summary statistic that is more representative of the older portions of the input time series . Oliva et al . ( 2017 ) elaborates on how the SRU is able to generate output features ( o_{i,t} , 1 ≤ i ≤ n ) that are preferentially sensitive to the measurements from specific past segments of x by taking appropriate linear combinations of the summary statistics corresponding to different values of α in A .
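The following NumPy sketch ( ours , with arbitrary random weights and dimensions ) runs a few forward steps of one such component network along equations ( 3a ) – ( 3f ) as written above ; in the paper the weights are of course trained , and the input-layer weights are what later get group-regularized to read off the Granger-causal structure :

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_r, d_phi, d_o = 3, 4, 8, 6                    # input and layer sizes (arbitrary here)
alphas = np.array([0.0, 0.5, 0.9, 0.99])           # the scale set A
m = len(alphas)

relu = lambda z: np.maximum(z, 0.0)                # the operator h(.)
W_r  = rng.normal(size=(d_r, m * d_phi)); b_r  = np.zeros(d_r)
W_in = rng.normal(size=(d_phi, n));       b_in = np.zeros(d_phi)
W_f  = rng.normal(size=(d_phi, d_r))
W_o  = rng.normal(size=(d_o, m * d_phi)); b_o  = np.zeros(d_o)
w_y  = rng.normal(size=d_o);              b_y  = 0.0

def sru_step(x_t, u_prev, r_prev):
    r_t = relu(W_r @ u_prev + b_r)                                       # (3a) feedback
    phi_t = relu(W_in @ x_t + W_f @ r_prev + b_in)                       # (3b) recurrent statistics
    u_scales = [(1 - a) * u_prev[k * d_phi:(k + 1) * d_phi] + a * phi_t
                for k, a in enumerate(alphas)]                           # (3d) per-scale moving averages
    u_t = np.concatenate(u_scales)                                       # (3c) multi-scale summary state
    o_t = relu(W_o @ u_t + b_o)                                          # (3e) output features
    x_hat = w_y @ o_t + b_y                                              # (3f) next-step prediction
    return x_hat, u_t, r_t

u, r = np.zeros(m * d_phi), np.zeros(d_r)
for t in range(5):
    x_hat, u, r = sru_step(rng.normal(size=n), u, r)
print("predicted x_{i,t+1}:", x_hat)
```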
In this paper the authors propose using Statistical Recurrent Units to predict the time series and recover the network of Granger causality. They motivate this choice by the high representation power of SRUs for multivariate time series, the good performance they usually enjoy, and as a way to alleviate the vanishing gradient problem. More importantly, the particular form of the SRUs gives a very simple predictor and therefore explainability for Granger causality: the authors propose to simply mark series $i$ as Granger caused by $j$ if the $j$th column of the input mixing matrix of $g_i$ is non-zero.
SP:467735fd49561cd06342bb38a921e541553c6633
Economy Statistical Recurrent Units For Inferring Nonlinear Granger Causality
Granger causality is a widely-used criterion for analyzing interactions in large-scale networks . As most physical interactions are inherently nonlinear , we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series measurements . Our proposed approach relies on modeling the embedded nonlinearities in the measurements using a component-wise time series prediction model based on Statistical Recurrent Units ( SRUs ) . We make a case that the network topology of Granger causal relations is directly inferrable from a structured sparse estimate of the internal parameters of the SRU networks trained to predict the processes ’ time series measurements . We propose a variant of SRU , called economy-SRU , which , by design , has considerably fewer trainable parameters and is therefore less prone to overfitting . The economy-SRU computes a low-dimensional sketch of its high-dimensional hidden state in the form of random projections to generate the feedback for its recurrent processing . Additionally , the internal weight parameters of the economy-SRU are strategically regularized in a group-wise manner to facilitate the proposed network in extracting meaningful predictive features that are highly time-localized to mimic real-world causal events . Extensive experiments are carried out to demonstrate that the proposed economy-SRU based time series prediction model outperforms the MLP , LSTM and attention-gated CNN-based time series models considered previously for inferring Granger causality . 1 INTRODUCTION . The physical mechanisms behind the functioning of any large-scale system can be understood in terms of the networked interactions between the underlying system processes . Granger causality is one widely-accepted criterion used in building network models of interactions between large ensembles of stochastic processes . While Granger causality may not necessarily imply true causality , it has proven effective in qualifying pairwise interactions between stochastic processes in a variety of system identification problems , e.g. , gene regulatory network mapping ( Fujita et al . ( 2007 ) ) , and the mapping of the human brain connectome ( Seth et al . ( 2015 ) ) . This perspective has given rise to the canonical problem of inferring pairwise Granger causal relationships between a set of stochastic processes from their time series measurements . At present , the vast majority of Granger causal inference methods adopt a model-based inference approach whereby the measured time series data is modeled using a suitable parameterized data generative model whose inferred parameters ultimately reveal the true topology of pairwise Granger causal relationships . Such methods typically rely on using linear regression models for inference . However , as illustrated in the classical bivariate example by Baek & Brock ( 1992 ) , linear model-based Granger causality tests can fail catastrophically in the presence of even mild nonlinearities in the measurements , thus making a strong case for our work , which tackles the nonlinearities in the measurements by exploring new generative models of the time series measurements based on recurrent neural networks . 2 PROBLEM FORMULATION . Consider a multivariate dynamical system whose evolution from an initial state is fully characterized by n distinct stochastic processes which can potentially interact nonlinearly among themselves .
Our goal here is to unravel the unknown nonlinear system dynamics by mapping the entire network of pairwise interactions between the system-defining stochastic processes , using Granger causality as the qualifier of the individual pairwise interactions . In order to detect the pairwise Granger causal relations between the stochastic processes , we assume access to their concurrent , uniformly-sampled measurements presented as an n-variate time series $\mathbf{x} = \{\mathbf{x}_t : t \in \mathbb{N}\} \subset \mathbb{R}^n$ . Let $x_{t,i}$ denote the $i$th component of the $n$-dimensional vector measurement $\mathbf{x}_t$ , representing the measured value of process i at time t. Motivated by the framework proposed in Tank et al . ( 2017 ) , we assume that the measurement samples $\mathbf{x}_t, t \in \mathbb{N}$ are generated sequentially according to the following nonlinear , component-wise autoregressive model : $$x_{t,i} = f_i\left(\mathbf{x}_{t-p:t-1,1}, \mathbf{x}_{t-p:t-1,2}, \ldots, \mathbf{x}_{t-p:t-1,n}\right) + e_{t,i}, \quad i = 1, 2, \ldots, n, \qquad (1)$$ where $\mathbf{x}_{t-p:t-1,j} \triangleq \{x_{t-1,j}, x_{t-2,j}, \ldots, x_{t-p,j}\}$ represents the most recent p measurements of the jth component of x in the immediate past relative to current time t. The scalar-valued component generative function $f_i$ captures all of the linear and nonlinear interactions between the n stochastic processes up to time $t-1$ that decide the measured value of the ith stochastic process at time t. The residual $e_{t,i}$ encapsulates the combined effect of all instantaneous and exogenous factors influencing the measurement of process i at time t , as well as any imperfections in the presumed model . Equation 1 may be viewed as a generalization of the linear vector autoregressive ( VAR ) model in the sense that the components of x can be nonlinearly dependent on one another across time . The value p is loosely interpreted to be the order of the above nonlinear autoregressive model . 2.1 GRANGER CAUSALITY IN NONLINEAR DYNAMICAL SYSTEMS . We now proceed to interpret Granger causality in the context of the above component-wise time series model . Recalling the standard definition by Granger ( 1969 ) , a time series v is said to Granger cause another time series u if the past of v contains new information above and beyond the past of u that can improve the predictions of current or future values of u . For x with its n components generated according to equation 1 , the concept of Granger causality can be extended as suggested by Tank et al . ( 2018 ) as follows . We say that series j does not Granger cause series i if the component-wise generative function $f_i$ does not depend on the past measurements in series j , i.e. , for all $t \ge 1$ and all distinct pairs $\mathbf{x}_{t-p:t-1,j}$ and $\mathbf{x}'_{t-p:t-1,j}$ , $$f_i\left(\mathbf{x}_{t-p:t-1,1}, \ldots, \mathbf{x}_{t-p:t-1,j}, \ldots, \mathbf{x}_{t-p:t-1,n}\right) = f_i\left(\mathbf{x}_{t-p:t-1,1}, \ldots, \mathbf{x}'_{t-p:t-1,j}, \ldots, \mathbf{x}_{t-p:t-1,n}\right). \qquad (2)$$ From equation 1 , it is immediately evident that under the constraint in equation 2 , the past of series j does not assert any causal influence on series i , in alignment with the core principle behind Granger causality . Based on the above implication of equation 2 , the detection of Granger noncausality between the components of x translates to identifying those components of x whose past is irrelevant to the functional description of each individual $f_i$ featured in equation 1 . Note that any reliable inference of pairwise Granger causality between the components of x is feasible only if there are no unobserved confounding factors in the system which could potentially influence x .
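To make the component-wise model in equation 1 concrete, the sketch below simulates a small three-process system of this form; the particular nonlinearities, coefficients and noise scale are our own illustrative choices and are not taken from the paper. In this toy system, process 1 depends on the past of process 0 (so 0 Granger-causes 1), while process 2 depends only on its own past, so the invariance condition of equation 2 holds for the remaining pairs.

```python
import numpy as np

def simulate_nonlinear_var(T=500, p=2, seed=0):
    """Toy instance of the component-wise model in equation 1 with n = 3 processes.

    Process 0 Granger-causes process 1; process 2 depends only on its own past,
    so neither 0 nor 1 Granger-causes 2 (equation 2 holds for those pairs).
    All functional forms here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros((T, 3))
    x[:p] = rng.normal(size=(p, 3))
    for t in range(p, T):
        past = x[t - p:t]                    # x_{t-p:t-1, :}, shape (p, n)
        e = 0.1 * rng.normal(size=3)         # residuals e_{t,i}
        x[t, 0] = 0.7 * np.tanh(past[-1, 0]) + e[0]
        x[t, 1] = 0.5 * np.sin(past[-1, 0]) - 0.3 * past[-2, 1] ** 2 + e[1]
        x[t, 2] = 0.8 * np.tanh(past[-1, 2]) + e[2]
    return x
```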
In this work , we assume that the system of interest is causally sufficient ( Spirtes & Zhang ( 2016 ) ) , i.e. , none of the n stochastic processes ( whose measurements are available ) have a common Granger-causing-ancestor that is unobserved . 2.2 INFERRING GRANGER CAUSALITY USING COMPONENT-WISE RECURRENT MODELS . We undertake a model-based inference approach wherein the time series measurements are used as observations to learn an autoregressive model which is anatomically similar to the componentwise generative model described in equation 1 except for the unknown functions fi replaced with their respective parameterized approximations denoted by gi . Let Θi , 1 ≤ i ≤ n denote the complete set of parameters encoding the functional description of the approximating functions { gi } ni=1 . Then , the pairwise Granger causality between series i and the components of x is deduced from Θi which is estimated by fitting gi ’ s output to the ordered measurements in series i . Specifically , if the estimated Θi suggests that gi ’ s output is independent of the past measurements in series j , then we declare that series j is Granger noncausal for series i . We aim to design the approximation function gi to be highly expressive and capable of well-approximating any intricate causal coupling between the components of x induced by the component-wise function fi , while simultaneously being easily identifiable from underdetermined measurements . By virtue of their universal approximation property ( Schäfer & Zimmermann ( 2006 ) ) , recurrent neural networks or RNNs are a particularly ideal choice for gi towards inferring the pairwise Granger causal relationships in x . In this work , we investigate the use of a special type of RNN called the statistical recurrent unit ( SRU ) for inferring pairwise Granger causality between multiple nonlinearly interacting stochastic processes . Introduced by Oliva et al . ( 2017 ) , an SRU is a highly expressive recurrent neural network designed specifically for modeling multivariate time series data with complex-nonlinear dependencies spanning multiple time lags . Unlike the popular gated RNNs ( e.g. , long short-term memory ( LSTM ) ( Hochreiter & Schmidhuber ( 1997 ) ) and gated recurrent unit ( GRU ) ) ( Chung et al . ( 2014 ) ) , the SRU ’ s design is completely devoid of the highly nonlinear sigmoid gating functions and thus less affected by the vanishing/exploding gradient issue during training . Despite its simpler ungated architecture , an SRU can model both short and long-term temporal dependencies in a multivariate time series . It does so by maintaining multi-time scale summary statistics of the time series data in the past , which are preferentially sensitive to different older portions of the time series x . By taking appropriate linear combinations of the summary statistics at different time scales , an SRU is able to construct predictive causal features which can be both highly component-specific and lag-specific at the same time . From the causal inference perspective , this dual-specificity of the SRU ’ s predictive features is its most desirable feature , as one would argue that causal effects in reality also tend to be highly localized in both space and time . The main contributions of this paper can be summarized as follows : 1 . We propose the use of statistical recurrent units ( SRUs ) for detecting pairwise Granger causality between the nonlinearly interacting stochastic processes . 
We show that the entire network of pairwise Granger causal relationships can be inferred directly from the regularized block-sparse estimate of the input-layer weight parameters of the SRUs trained to predict the time series measurements of the individual processes . 2 . We propose a modified SRU architecture called economy SRU , or eSRU for short . The first of the two proposed modifications is aimed at substantially reducing the number of trainable parameters in the standard SRU model without sacrificing its expressiveness . The second modification entails regularizing the SRU ’ s internal weight parameters to enhance the interpretability of its learned predictive features . Compared to the standard SRU , the proposed eSRU model is considerably less likely to overfit the time series measurements . 3 . We conduct extensive numerical experiments to demonstrate that eSRU is a compelling model for inferring pairwise Granger causality . The proposed model is found to outperform the multi-layer perceptron ( MLP ) , LSTM and attention-gated convolutional neural network ( AG-CNN ) based models considered in the earlier works . 3 PROPOSED GRANGER CAUSAL INFERENCE FRAMEWORK . In the proposed scheme , each of the unknown generative functions $f_i, 1 \le i \le n$ in the presumed component-wise model of x in ( 1 ) is individually approximated by a distinct SRU network . The ith SRU network sequentially processes the time series measurements x and outputs a next-step prediction sequence $\hat{\mathbf{x}}_i^{+} = \{\hat{x}_{i,2}, \hat{x}_{i,3}, \ldots, \hat{x}_{i,t+1}, \ldots\} \subset \mathbb{R}$ , where $\hat{x}_{i,t+1}$ denotes the predicted value of component series i at time t + 1 . The prediction $\hat{x}_{i,t+1}$ is computed in a recurrent fashion by combining the current input sample $\mathbf{x}_t$ at time t with the summary statistics of past samples of x up to and including time $t-1$ as illustrated in Figure 1 . The following update equations describe the sequential processing of the input time series x within the ith SRU network in order to generate a prediction of $x_{i,t+1}$ .

Feedback : $\mathbf{r}_{i,t} = h\big(\mathbf{W}_r^{(i)} \mathbf{u}_{i,t-1} + \mathbf{b}_r^{(i)}\big) \in \mathbb{R}^{d_r}$ . ( 3a )
Recurrent statistics : $\boldsymbol{\phi}_{i,t} = h\big(\mathbf{W}_{\mathrm{in}}^{(i)} \mathbf{x}_t + \mathbf{W}_f^{(i)} \mathbf{r}_{i,t-1} + \mathbf{b}_{\mathrm{in}}^{(i)}\big) \in \mathbb{R}^{d_\phi}$ . ( 3b )
Multi-scale summary statistics : $\mathbf{u}_{i,t} = \big[ (\mathbf{u}_{i,t}^{\alpha_1})^T \; (\mathbf{u}_{i,t}^{\alpha_2})^T \; \ldots \; (\mathbf{u}_{i,t}^{\alpha_m})^T \big]^T \in \mathbb{R}^{m d_\phi} , \; \alpha_j \in A , \; \forall j$ . ( 3c )
Single-scale summary statistics : $\mathbf{u}_{i,t}^{\alpha_j} = (1-\alpha_j)\,\mathbf{u}_{i,t-1}^{\alpha_j} + \alpha_j \boldsymbol{\phi}_{i,t} \in \mathbb{R}^{d_\phi} , \; \alpha_j \in [0, 1]$ . ( 3d )
Output features : $\mathbf{o}_{i,t} = h\big(\mathbf{W}_o^{(i)} \mathbf{u}_{i,t} + \mathbf{b}_o^{(i)}\big) \in \mathbb{R}^{d_o}$ . ( 3e )
Output prediction : $\hat{x}_{i,t+1} = (\mathbf{w}_y^{(i)})^T \mathbf{o}_{i,t} + b_y^{(i)} \in \mathbb{R}$ . ( 3f )

The function h in the above updates is the elementwise Rectified Linear Unit ( ReLU ) operator , $h(\cdot) := \max(\cdot, 0)$ , which serves as the nonlinear activation in the three dedicated single layer neural networks that generate the recurrent statistics $\boldsymbol{\phi}_{i,t}$ , the feedback $\mathbf{r}_{i,t}$ and the output features $\mathbf{o}_{i,t}$ in the ith SRU network . In order to generate the next-step prediction of series i at time t , the ith SRU network first prepares the feedback $\mathbf{r}_{i,t}$ by nonlinearly transforming its last hidden state $\mathbf{u}_{i,t-1}$ . As stated in equation 3a , a single layer ReLU network parameterized by weight matrix $\mathbf{W}_r^{(i)}$ and bias $\mathbf{b}_r^{(i)}$ maps the hidden state $\mathbf{u}_{i,t-1}$ to the feedback $\mathbf{r}_{i,t}$ . Another single layer ReLU network parameterized by weight matrices $\mathbf{W}_{\mathrm{in}}^{(i)}$ , $\mathbf{W}_f^{(i)}$ and bias $\mathbf{b}_{\mathrm{in}}^{(i)}$ takes the input $\mathbf{x}_t$ and the feedback $\mathbf{r}_{i,t}$ and transforms them into the recurrent statistics $\boldsymbol{\phi}_{i,t}$ as described in equation 3b .
Equation 3d describes how the network ’ s multi-timescale hidden states $\mathbf{u}_{i,t}^{\alpha}$ for $\alpha \in A = \{\alpha_1, \alpha_2, \ldots, \alpha_m\} \subset [0, 1]$ are updated in parallel by taking exponentially weighted moving averages of the recurrent statistics $\boldsymbol{\phi}_{i,t}$ corresponding to m different scales in A . A third single layer ReLU network parameterized by $\mathbf{W}_o^{(i)}$ and $\mathbf{b}_o^{(i)}$ transforms the concatenated multi-timescale summary statistics $\mathbf{u}_{i,t} = \big[ (\mathbf{u}_{i,t}^{\alpha_1})^T \; (\mathbf{u}_{i,t}^{\alpha_2})^T \; \ldots \; (\mathbf{u}_{i,t}^{\alpha_m})^T \big]^T$ to generate the nonlinear causal features $\mathbf{o}_{i,t}$ which , according to Oliva et al . ( 2017 ) , are arguably highly sensitive to the input time series measurements at specific lags . Finally , the network generates the next-step prediction of series i as $\hat{x}_{i,t+1}$ by linearly combining the nonlinear output features in $\mathbf{o}_{i,t}$ , as depicted in equation 3f . For values of scale $\alpha \approx 1$ , the single-scale summary statistic $\mathbf{u}_{i,t}^{\alpha}$ in equation 3d is more sensitive to the recent past measurements in x . On the other hand , $\alpha \approx 0$ yields a summary statistic that is more representative of the older portions of the input time series . Oliva et al . ( 2017 ) elaborates on how the SRU is able to generate output features ( $\mathbf{o}_{i,t} , 1 \le i \le n$ ) that are preferentially sensitive to the measurements from specific past segments of x by taking appropriate linear combinations of the summary statistics corresponding to different values of $\alpha$ in A .
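The following is a minimal PyTorch sketch of the per-process SRU updates in equations 3a-3f, together with a group-norm readout of the input-layer weights that, per the first contribution above, would be thresholded to decide Granger causality. It is not the authors' implementation: the dimensions, the scale set A, and the helper names (SRUCell, granger_scores) are our own illustrative assumptions, and equation 3b is coded exactly as written above (using the feedback from the previous step).

```python
import torch
import torch.nn as nn

class SRUCell(nn.Module):
    """Minimal sketch of the per-process SRU updates (equations 3a-3f); sizes are illustrative."""
    def __init__(self, n, d_r=16, d_phi=16, d_o=16, alphas=(0.0, 0.5, 0.9, 0.99)):
        super().__init__()
        self.alphas = torch.tensor(alphas)
        m = len(alphas)
        self.W_r = nn.Linear(m * d_phi, d_r)           # eq. 3a
        self.W_in = nn.Linear(n, d_phi, bias=True)     # eq. 3b (input branch)
        self.W_f = nn.Linear(d_r, d_phi, bias=False)   # eq. 3b (feedback branch)
        self.W_o = nn.Linear(m * d_phi, d_o)           # eq. 3e
        self.w_y = nn.Linear(d_o, 1)                   # eq. 3f
        self.d_phi, self.m = d_phi, m

    def forward(self, x):
        # x: (T, n) measurements of all n processes; returns next-step predictions, shape (T,)
        T, _ = x.shape
        u = torch.zeros(self.m, self.d_phi)            # single-scale statistics u^{alpha_j}
        r_prev = torch.zeros(self.W_f.in_features)
        preds = []
        for t in range(T):
            r = torch.relu(self.W_r(u.reshape(-1)))                           # eq. 3a
            phi = torch.relu(self.W_in(x[t]) + self.W_f(r_prev))              # eq. 3b (as written, r_{t-1})
            u = (1 - self.alphas)[:, None] * u + self.alphas[:, None] * phi   # eq. 3d
            o = torch.relu(self.W_o(u.reshape(-1)))                           # eq. 3e (u_t is the concat of 3c)
            preds.append(self.w_y(o))                                         # eq. 3f
            r_prev = r
        return torch.cat(preds)

def granger_scores(cell):
    """Column-wise group norms of W_in; a (near-)zero column group for series j would be
    read as 'series j does not Granger-cause this series' after group-sparse training."""
    return cell.W_in.weight.norm(dim=0)
```

In the proposed framework, one such cell would be trained per process i to predict that process, with a group-sparse (block-sparse) penalty on the columns of W_in; thresholding granger_scores would then give row i of the estimated Granger-causality adjacency matrix.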
The paper attempts to infer Granger causality between nonlinearly interacting stochastic processes from their time series measurements. Instead of using MLP/LSTM etc. to model the time series measurements, the paper proposes to use a component-wise time series prediction model with Statistical Recurrent Units to model the measurements. They consider a low-dimensional version of SRU, which they call economy-SRU. In particular, they use group-wise regularization that matches the particular structure of the model to aid interpretability. They compare the performance with existing MLP/LSTM-based models and show some gains in a few examples (but not all). The proposal is interesting, but the experiment section might need further strengthening. Currently, the experimental results do not immediately pop out as showing eSRU particularly useful.
SP:467735fd49561cd06342bb38a921e541553c6633
Situating Sentence Embedders with Nearest Neighbor Overlap
1 INTRODUCTION . Continuous embeddings—of words and of larger linguistic units—are now ubiquitous in NLP . The success of self-supervised pretraining methods that deliver embeddings from raw corpora has led to a proliferation of embedding methods , with an eye toward “ universality ” across NLP tasks . Our focus here is on sentence embedders , and specifically their evaluation . As with most NLP components , intrinsic ( e.g. , Conneau et al. , 2018 ) and extrinsic ( e.g. , GLUE ; Wang et al. , 2019 ) evaluations have emerged for sentence embedders . Our approach , nearest neighbor overlap ( N2O ) , is different : it compares a pair of embedders in a linguistics- and task-agnostic manner , using only a large unannotated corpus . The central idea is that two embedders are more similar if , for a fixed query sentence , they tend to find nearest neighbor sets that overlap to a large degree . By drawing a random sample of queries from the corpus itself , N2O can be computed on in-domain data without additional annotation , and therefore can help inform embedder choices in applications such as text clustering ( Cutting et al. , 1992 ) , information retrieval ( Salton & Buckley , 1988 ) , and open-domain question answering ( Seo et al. , 2018 ) , among others . After motivating and explaining the N2O method ( §2 ) , we apply it to 21 sentence embedders ( §3-4 ) . Our findings ( §5 ) reveal relatively high functional similarity among averaged static ( noncontextual ) word type embeddings , a strong effect of the use of subword information , and that BERT and GPT are distant outliers . In §6 , we demonstrate the robustness of N2O across different query samples and probe sizes . We also illustrate additional analyses made possible by N2O : identifying embedding-space neighbors of a query sentence that are stable across embedders , and those that are not ( §7 ) ; and probing the abilities of embedders to find a known paraphrase ( §8 ) . The latter reveals considerable variance across embedders ’ ability to identify semantically similar sentences from a broader corpus . 2 CORPUS-BASED EMBEDDING COMPARISON . We first motivate and introduce our nearest neighbor overlap ( N2O ) procedure for comparing embedders ( maps from objects to vectors ) . Although we experiment with sentence embedders in this paper , we note that this comparison procedure can be applied to other types of embedders ( e.g. , phrase-level or document-level ) .1 1We also note that nearest neighbor search has been frequently used on word embeddings ( e.g. , word analogy tasks ) .

function N2O ( eA , eB , C , k )
    for each query qj ∈ { q1 , ... , qn } do
        neighborsA ← nearest ( eA , qj , C , k )
        neighborsB ← nearest ( eB , qj , C , k )
        o [ j ] ← | neighborsA ∩ neighborsB |
    end for
    return ( Σj o [ j ] ) / ( k × n )
end function

Figure 2 : Computation of N2O for two embedders , eA and eB , using a corpus C ; the number of nearest neighbors is given by k. n is the number of queries ( q1 . . . qn ) , which are sampled uniformly from the corpus without replacement . The output is in [ 0 , 1 ] , where 0 indicates no overlap between nearest neighbors for all queries , and 1 indicates perfect overlap . 2.1 DESIDERATA . We would like to quantify the extent to which sentence embedders vary in their treatment of “ similarity.
” For example , given the sentence Mary gave the book to John , embedders based on bag-of-words will treat John gave the book to Mary as being maximally similar to the first sentence , whereas other embedders may treat Mary gave the dictionary to John as more similar ; our comparison should reflect this intuition . We would also like to focus on using naturally-occurring text for our comparison . Although there is merit in expert-constructed examples ( see linguistic probing tasks referenced in §9 ) , we have little understanding of how these models will generalize to text from real documents ; many application settings involve computing similarity across texts in a corpus . Finally , we would like our evaluation to be task-agnostic , since we expect embeddings learned from large unannotated corpora in a self-supervised ( and task-agnostic ) manner to continue to play an important role in NLP . As a result , we base our comparison on nearest neighbors : first , because similarity is often assumed to correspond to nearness in embedding space ( e.g. , Figure 1 ) ; second , because nearest neighbor methods are used directly for retrieval and other applications ; and finally , because the nearest neighbors of a sentence can be computed for any embedder on any corpus without additional annotation . 2.2 ALGORITHM . Suppose we want to compare two sentence embedders , eA ( · ) and eB ( · ) , where each embedding method takes as input a natural language sentence s and outputs a d-dimensional vector . For our purposes , we consider variants trained on different data or using different hyperparameters , even with the same parameter estimation procedure , to be different sentence embedders . Take a corpus C , which is likely to have some semantic overlap in its sentences , and segment it into sentences $s_1, \ldots, s_{|C|}$ . Randomly select a small subset of the sentences in C as “ queries ” ( q1 , . . . , qn ) . To see how similar eA and eB are , we compute the overlap in nearest neighbor sentences , averaged across multiple queries ; the algorithm is in Figure 2. nearest ( ei , qj , C , k ) returns the k nearest neighbor sentences in corpus C to the query sentence qj , where all sentences are embedded with ei.2 There are different ways to define nearness and distance in embedding spaces ( e.g. , using cosine similarity or Euclidean distance ) ; in this paper we use cosine similarity . We can think about this procedure as randomly probing the sentence vector space ( through the n query sentences ) from the larger space of the embedded corpus , under a sentence embedder ei ; in some sense , k controls the depth of the probe . The N2O procedure then compares the sets of sentences recovered by the probes . 2One of these will be the query sentence itself , since we sampled it from the corpus ; we assume nearest ignores it when computing the k-nearest-neighbor lists . 3 SENTENCE EMBEDDING METHODS . In the previous section , we noted that we consider a “ sentence embedder ” to encompass how it was trained , which data it was trained on , and any other hyperparameters involved in its creation . In this section , we first review the broader methods behind these embedders , turning to implementation decisions in §4 . 3.1 TF-IDF . We consider tf-idf , which has been classically used in information retrieval settings . The tf-idf of a word token is based on two statistics : term frequency ( how often a term appears in a document ) and inverse document frequency ( how rare the term is across all documents ) .
The vector representation of the document is the idf-scaled term frequencies of its words ; in this work we treat each sentence as a “ document ” and the vocabulary-length tf-idf vector as its embedding . 3.2 WORD EMBEDDINGS . Because sentence embeddings are often built from word embeddings ( through initialization when training or other composition functions ) , we briefly review notable word embedding methods . Static embeddings . We define “ static embeddings ” to be fixed representations of every word type in the vocabulary , regardless of its context . We consider three popular methods : word2vec ( Mikolov et al. , 2013 ) embeddings optimized to be predictive of a word given its context ( continuous bag of words ) or vice versa ( skipgram ) ; GloVe ( Pennington et al. , 2014 ) embeddings learned based on global cooccurrence counts ; and FastText ( Conneau et al. , 2017 ) , an extension of word2vec which includes character n-grams ( for computing representations of out-of-vocabulary words ) . Contextual embeddings . Contextual word embeddings , where a word token ’ s representation is dependent on its context , have become popular due to improvements over state-of-the-art on a wide variety of tasks . We consider : • ELMo ( Peters et al. , 2018 ) embeddings are generated from a multi-layer , bidirectional recurrent language model that incorporates character-level information . • GPT ( Radford et al. , 2018 ) embeddings are generated from a unidirectional language model with multi-layer transformer decoder ; subword information is included via byte-pair encoding ( BPE ; Sennrich et al. , 2016 ) . • BERT ( Devlin et al. , 2019 ) embeddings are generated from a transformer-based model trained to predict ( a ) a word given both left and right context , and ( b ) whether a sentence is the “ next sentence ” given a previous sentence . Subword information is incorporated using the WordPiece model ( Schuster & Nakajima , 2012 ) . Composition of word embeddings . The simplest way to obtain a sentence ’ s embedding from its sequence of words is to average the word embeddings.3 Despite the fact that averaging discards word order , it performs surprisingly well on sentence similarity , NLI , and other downstream tasks ( Wieting et al. , 2016 ; Arora et al. , 2017 ) .4 In the case of contextual embeddings , there may be other conventions for obtaining the sentence embedding , such as using the embedding for a special token or position in the sequence . With BERT , the [ CLS ] token representation ( normally used as input for classification ) is also sometimes used as a sentence representation ; similarly , the last token ’ s representation may be used for GPT . 3In the case of GPT and BERT , which yield subword embeddings , we treat those as we would standard word embeddings . 4Arora et al . ( 2017 ) also suggest including a PCA-based projection with word embedding averaging to further improve downstream performance . However , because our focus is on behavior of the embeddings themselves , we do not apply this projection here . 3.3 ENCODERS . A more direct way to obtain sentence embeddings is to learn an encoding function that takes in a sequence of tokens and outputs a single embedding ; often this is trained using a relevant supervised task . We consider two encoder-based methods : • InferSent ( Conneau et al. , 2017 ) : supervised training on the Stanford Natural Language Inference ( SNLI ; Bowman et al. 
, 2015 ) dataset ; the sentence encoder provides representations for the premise and hypothesis sentences , which are then fed into a classifier . • Universal Sentence Encoder ( USE ; Cer et al. , 2018 ) : supervised , multi-task training on several semantic tasks ( including semantic textual similarity ) ; sentences are encoded either with a deep averaging network or a transformer . 4 EXPERIMENTAL DETAILS . Our main experiment is a broad comparison , using N2O , of the embedders discussed above and listed in Table 1 . Despite the vast differences in methods , N2O allows us to situate each in terms of its functional similarity to the others . N2O computation . We define an N2O sample as , for a given random sample of n queries , the computation of N2O ( eA , eB , C , k ) for every pair of sentence embedders as described in §2 , using cosine similarity to determine nearest neighbors . The results in §5 are with k ( the number of sentences retrieved ) set to 50 , averaged across five samples of n = 100 queries . We illustrate the effects of different k and N2O samples in §6 . Corpus . For our corpus , we draw from the English Gigaword ( Parker et al. , 2011 ) , which contains newswire text from seven news sources . For computational feasibility , we use the articles from 2010 , for a total of approximately 8 million unique sentences.5 We note preprocessing details ( segmentation , tokenization ) in Appendix A. Queries . For each N2O sample , we randomly select 100 ledes ( opening sentences ) from the news articles of our corpus , and use the same ones across all embedders . Because the Gigaword corpus contains text from multiple news sources covering events over the same time period , it is likely that the corpus will contain semantically similar sentences for a given lede . The average query length is 30.7 tokens ( s.d . 10.2 ) ; an example query is : “ Sandra Kiriasis and brakewoman Stephanie Schneider of Germany have won the World Cup bobsled race at Lake Placid. ” Sentence embedders . Table 1 details the sentence embedders we use in our experiments . In general , we use popular pretrained versions of the methods described in §3 . We also select pretrained variations of the same method ( e.g. , FastText embeddings trained from different corpora ) to permit more controlled comparisons . In a couple of cases , we train/finetune models of our own : for tf-idf , we compute frequency statistics using our corpus , with each sentence as its own “ document ” ; for BERT , we use the Hugging Face implementation with default hyperparameters,6 and finetune using the matched subset of MultiNLI ( Williams et al. , 2018 ) for three epochs ( dev . accuracy 84.1 % ) . We note that additional embedders are easily situated among the ones tested in this paper by first computing nearest neighbors of the same query sentences , and then computing overlap with the nearest neighbors obtained in this paper . To enable this , the code , query sentences , and nearest neighbors per embedder and query will be publicly available .
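As a concrete reference point, the following is a small sketch of the N2O computation of Figure 2, with a toy bag-of-words random-projection embedder standing in for the pretrained embedders of Table 1. The function names, the brute-force cosine search, and the toy embedder are our own illustrative assumptions, not the authors' released code.

```python
import numpy as np

def n2o(embed_a, embed_b, corpus, query_idx, k=50):
    """Nearest neighbor overlap (Figure 2). embed_* map a list of sentences to an
    array of shape (num_sentences, dim); query_idx are indices of query sentences
    sampled from the corpus; the query itself is excluded from its neighbor list."""
    def knn_sets(embed):
        E = embed(corpus)
        E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize -> cosine similarity
        sims = E[query_idx] @ E.T                          # (n_queries, |C|)
        sets = []
        for row, q in zip(sims, query_idx):
            order = np.argsort(-row)
            order = order[order != q][:k]                  # drop the query sentence itself
            sets.append(set(order.tolist()))
        return sets
    neigh_a, neigh_b = knn_sets(embed_a), knn_sets(embed_b)
    overlap = sum(len(a & b) for a, b in zip(neigh_a, neigh_b))
    return overlap / (k * len(query_idx))

def make_embedder(seed, dim=64):
    """Hypothetical toy embedder: random projection of bag-of-words counts."""
    rng = np.random.default_rng(seed)
    def embed(sentences):
        vocab = sorted({w for s in sentences for w in s.lower().split()})
        idx = {w: j for j, w in enumerate(vocab)}
        counts = np.zeros((len(sentences), len(vocab)))
        for i, s in enumerate(sentences):
            for w in s.lower().split():
                counts[i, idx[w]] += 1.0
        return counts @ rng.normal(size=(len(vocab), dim))
    return embed
```

Calling, say, n2o(make_embedder(0), make_embedder(1), corpus_sentences, query_idx, k=50) on a list of corpus sentences and a list of query indices would return a value in [0, 1], matching the output range described in Figure 2.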
The paper proposes N2O, a tool for probing the similarity among sentence embedders. Given two sentence embedders, N2O measures the amount of overlap of the k-nearest neighbor sets reported by the two embedders, averaged over a sample of probing queries. Cosine similarity is used as the similarity metric. The paper computes all-pair N2O scores for common sentence embedders and analyzes the results.
SP:8f949e0793cdee7336cf2c40803cb47202fef232
The paper proposes a method to estimate the similarity of sentence embedders called N2O with the goal to better inform embedder choice in downstream applications. For two embedders A and B, N2O samples sentences called queries from a corpus, uses A and B to compute embeddings for each sentence, determines the k nearest neighbors (= other sentences from the corpus) for each sentence, and computes the overlap of the resulting sets of neighbors. Nearest neighbors are computed with Cosine similarity.
SP:8f949e0793cdee7336cf2c40803cb47202fef232
Learning the Arrow of Time for Problems in Reinforcement Learning
1 INTRODUCTION . The asymmetric progression of time has a profound effect on how we , as agents , perceive , process and manipulate our environment . Given a sequence of observations of our familiar surroundings ( e.g . as video frames ) , we possess the innate ability to predict whether the said observations are ordered correctly . We use this ability not just to perceive , but also to act : for instance , we know to be cautious about dropping a vase , guided by the intuition that the act of breaking a vase cannot be undone . This profound intuition reflects some fundamental properties of the world in which we dwell , and in this work we ask whether and how these properties can be exploited to learn a representation that functionally mimics our understanding of the asymmetric nature of time . The term Arrow of Time was coined by the British astronomer Eddington ( 1929 ) to denote this inherent asymmetry , which he attributed to the non-decreasing nature of the total thermodynamic entropy of an isolated system , as required by the second law of thermodynamics . Since then , the notion of an arrow of time has been formalized and explored in various contexts , spanning not only physics , but also algorithmic information theory ( Zurek , 1989 ) , causal inference ( Janzing et al. , 2016 ) and time-series analysis ( Janzing , 2010 ; Bauer et al. , 2016 ) . Broadly , an arrow of time can be thought of as a function that monotonically increases as a system evolves in time . Expectedly , the notion of irreversibility plays a central role in the discourse . In statistical physics , it is posited that the arrow of time ( i.e . entropy production ) is driven by irreversible processes ( Prigogine , 1978 ; Seifert , 2012 ) . To understand how a notion of an arrow of time can be useful in the reinforcement learning context , consider the example of a cleaning robot tasked with moving a box across a room ( Amodei et al. , 2016 ) . The optimal way of successfully completing the task might involve the robot doing something disruptive , like knocking a vase over ( Fig 1 ) . Now on the one hand , such disruptions – or side-effects – might be difficult to recover from . In the extreme case , they might be virtually irreversible – say when the vase is broken . On the other hand , irreversibility implies that states with a larger number of broken vases tend to occur in the future , and one should therefore expect an arrow of time ( as a scalar function of the state ) to assign larger values to states with a larger number of broken vases . An arrow of time should therefore quantify the amount of disorder in the environment , analogous to the entropy for isolated thermodynamical systems . Now , one possible application could be to detect and preempt such side-effects , for instance by penalizing policies that significantly increment the arrow of time by executing difficult-to-reverse transitions . But the utility of an arrow of time is more general : it serves as a directed measure of reachability . This can be seen by observing that it is more difficult to obtain order from disorder : it is , after all , difficult to reach a state with a vase intact from one with it broken , rather than vice versa . In this sense , we may say that a state is relatively unreachable from another state if an arrow of time assigns a lower value to the former .
Further , a directed measure of reachability afforded by an arrow of time can be utilized for deriving an intrinsic reward signal to enable agents to learn complex skills in the absence of external rewards . To see how , consider that an agent tasked with reversing the arrow of time ( by creating order from disorder ) must in general learn complex skills to achieve its goal . Indeed , gluing together a broken vase will require the agent to learn an array of complex planning and motor skills , which is the ultimate goal of such intrinsic rewards . In summary , our contributions are the following . ( a ) We propose a simple objective to learn an arrow of time for a Markov ( Decision ) Process in a self-supervised manner , i.e . entirely from sampled environment trajectories and without external rewards . We call the resulting function ( acting on the state ) the h-potential , and demonstrate its utility and caveats for a selection of discrete and continuous environments . Moreover , we compare the learned h-potential to the free-energy functional of stochastic processes – the latter being a well-known notion of an arrow of time ( Jordan et al. , 1998 ) . While there exists prior work on detecting the arrow of time in videos ( Pickup et al. , 2014 ; Wei et al. , 2018 ) and time-series data ( Peters et al. , 2009 ; Bauer et al. , 2016 ) , we believe our work to be the first towards measuring it in the context of reinforcement learning . ( b ) We critically and transparently discuss the conceptually rich subtleties that arise before an arrow of time can be practically useful in the RL context . ( c ) We expose how the notions of reachability , safety and curiosity can be unified under the common framework afforded by a learned arrow of time . 2 THE h-POTENTIAL . Motivated by the preceding discussion , our goal is to learn a function that quantifies the amount of disorder in a given environment state , where we say that irreversible state transitions increase disorder . In this sense , we seek a function ( of the state ) that is constant in expectation along fully reversible state transitions , but increases in expectation along state transitions that are less reversible . To that end , we begin by formally introducing this function , which we call the h-potential , as the solution to a functional optimization problem . Subsequently , we critically discuss a few conceptual roadblocks that must be cleared before such a function can be useful in the RL setting . 2.1 FORMALISM . Consider a Markov Decision Process ( an MDP , i.e . environment ) , and let S and A be its state and action spaces respectively . A policy π is a mapping from the state space to the space of distributions over actions . Given a state $s \in S$ sampled from some initial state distribution $p_0$ , we may sample an action $a \in A$ from the policy $\pi(a|s)$ , which in turn can be used to sample another state $s' \in S$ from the environment dynamics $p(s'|a, s)$ . Iterating N more times for a fixed π , one obtains a sequence of states $(s_0, \ldots, s_t, \ldots, s_N)$ , which is a realization of the Markov chain ( a trajectory ) with transition probabilities $p_\pi(s_{t+1}|s_t) = \sum_{a \in A} p(s_{t+1}|s_t, a)\, \pi(a|s_t)$ . We may now define a function $h_\pi : S \to \mathbb{R}$ as the solution to the following functional objective :
$$J_\pi[\hat{h}] = \mathbb{E}_{t \sim U(\{0, \ldots, N-1\})}\, \mathbb{E}_{s_t}\, \mathbb{E}_{s_{t+1}|s_t}\big[\hat{h}(s_{t+1}) - \hat{h}(s_t) \,\big|\, s_t\big] + \lambda T[\hat{h}]\,; \qquad h_\pi = \arg\max_{\hat{h}} J_\pi[\hat{h}] \qquad (1)$$ where $U(A)$ is the uniform distribution over any set A , $\mathbb{E}_t \mathbb{E}_{s_t} \mathbb{E}_{s_{t+1}|s_t}$ is the expectation over all state transitions , λ is a scalar coefficient and $T[\hat{h}]$ is a regularizing term that prevents $\hat{h}$ from diverging within a finite domain . In words : the first term on the right hand side of the first equation above encourages $h_\pi$ to increase in expectation along the sampled trajectories , whereas the second term controls this increase ; the two terms are balanced with a coefficient λ . Informally : if a state transition $s \to s'$ is fully reversible , the probability of sampling it equals that of sampling the corresponding reverse transition , $s' \to s$ . For such transitions , the pressure on $h_\pi$ to increase along the forward transition ( $s \to s'$ ) is compensated by the counter-pressure for it to increase along the reverse transition ( $s' \to s$ ) , or equivalently , decrease along the forward transition . Along such transitions , we should therefore expect $h_\pi$ to remain constant ( in expectation ) . Accordingly , if the forward transition were to be more likely ( i.e . if the transition is not fully reversible ) , we should expect $h_\pi$ to increase ( in expectation ) in order to satisfy its objective . The regularizer T must be chosen to suit the problem at hand , and different choices result in solutions that have different characteristics1 . Possible choices for T include ( any combination of ) the negative of the $L_2$ norm $-\|\hat{h}\|_2$ , and/or the following trajectory regularizer : $$T[\hat{h}] = -\mathbb{E}_{t \sim U(\{0, \ldots, N-1\})}\, \mathbb{E}_{s_t}\, \mathbb{E}_{s_{t+1}|s_t}\big[\, |\hat{h}(s_{t+1}) - \hat{h}(s_t)|^2 \,\big|\, s_t\big] \qquad (2)$$ Intuitively : while the solution $h_\pi$ is required to increase in expectation along trajectories , the trajectory regularizer acts as a contrastive term by penalizing $h_\pi$ for changing at all . With some effort , the problem defined in Eqn 1 can be approached analytically for toy Markov chains ( interested readers may refer to App A for a technical discussion ) . However , such analytical treatment becomes infeasible for more complex and larger-scale environments with unknown transition probabilities . To tackle such environments , we will cast the functional optimization problem in Eqn 1 to an optimization problem over the parameters of a deep neural network and solve it for a variety of discrete and continuous environments . 2.2 SUBTLETIES . In this section , we discuss two conceptually rich subtleties that determine the conditions under which the learned arrow of time ( h-potential ) can be useful in practice . The Role of a Policy . The first subtlety is rooted in the observation that the trajectories $(s_0, \ldots, s_N)$ are collected by a given but arbitrary policy . However , there may exist policies for which the resulting arrow of time is unnatural , perhaps even misleading . Consider for instance the actions of a practitioner of Kintsugi , the ancient Japanese art of repairing broken pottery . The corresponding policy2 might cause the environment to transition from a state where the vase is broken to one where it is not . If we learn the h-potential on such trajectories , it might be the case that counter to our intuition , states with a larger number of broken vases are assigned smaller values ( and vice versa ) . Now , one may choose to resolve this conundrum by defining : $$J[h] = \mathbb{E}_{\pi \sim U(\Pi)}\, J_\pi[h] \qquad (3)$$ where Π is the set of all policies defined on S , and $U(\Pi)$ denotes a uniform distribution over Π .
The resulting function $h^* = \arg\max_h \{\, J[h] + \lambda T[h] \,\}$ would characterize the arrow of time with respect to all possible policies , and one would expect that for a vast majority of such policies , the transition from a broken vase to an intact vase is rather unlikely and/or requires highly specialized policies . 1This is not unlike the case for linear regression : for instance , using Lasso instead of ridge-regression will generally yield solutions that have different properties . 2This is analogous to Maxwell ’ s demon in classical thermodynamics . Unfortunately , determining $h^*$ is not feasible for most interesting applications , given the outer expectation over all possible policies . As a compromise , we use ( uniformly ) random actions to gather trajectories . The simplicity of the corresponding random policy justifies its adoption , since one would expect a policy resembling ( say ) a Kintsugi artist to be rather complex and not implementable with random actions . In this sense , we ensure that the learned arrow of time characterizes the underlying dynamics of the environment , and not the peculiarities of a particular agent3 . The price we pay is the lack of adequate exploration in complex enough environments , although this problem plagues most model-based reinforcement learning approaches4 ( cf . Ha & Schmidhuber ( 2018 ) ) . In the following , we assume π to be uniformly random and use $h_\pi$ interchangeably with h. Dissipative Environments . The second subtlety concerns what we require of environments in which the arrow of time is informative . To illustrate the matter , consider the class of systems5 , a typical instance of which could be a billiard ball moving on a frictionless arena and bouncing ( elastically ) off the edges ( Bunimovich , 2007 ) . The state space comprises the ball ’ s velocity and its position constrained to a billiard table ( without holes ! ) , where the ball is initialized at a random position on the table . For such a system , it can be seen by time-reversal symmetry that when averaged over a large number of trajectories , the state transition $s \to s'$ is just as likely as the reverse transition $s' \to s$ . In this case , recall that the arrow of time is expected to remain constant . A similar argument can be made for systems that identically follow closed trajectories in their respective state space ( e.g . a frictionless and undriven pendulum ) . It follows that the h-potential must remain constant along the trajectory and that the arrow of time is uninformative . However , for so-called dissipative systems , the notion of an arrow of time is pronounced and well studied ( Willems , 1972 ; Prigogine , 1978 ) . In MDPs , dissipative behaviour may arise in situations where certain transitions are irreversible by design ( e.g . bricks disappearing in Atari Breakout ) , or due to partial observability , e.g . for a damped pendulum , the state space does not track the microscopic processes that give rise to friction6 . Therefore , a central premise underlying the practical utility of learning the arrow of time is that the considered MDP is indeed dissipative , which we shall assume in the following ; in Sec 5 ( Fig 5b ) , we will empirically investigate the case where this assumption is violated .
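To ground equations 1 and 2, the sketch below trains a small neural h-potential on a batch of transitions collected with a uniformly random policy, in the spirit of Section 2.1. It is a minimal illustration under our own assumptions: the MLP architecture, batch size, optimizer, and the use of the trajectory regularizer alone (without the L2 term) are illustrative choices, not prescriptions from the excerpt above.

```python
import torch
import torch.nn as nn

class HPotential(nn.Module):
    """Small MLP h(s); an illustrative architecture, not the authors' network."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, s):
        return self.net(s).squeeze(-1)

def h_potential_loss(h, s_t, s_next, lam=1.0):
    """Negative of the objective in equation 1 with the trajectory regularizer of equation 2,
    estimated on a minibatch of transitions (s_t, s_next) from random-policy rollouts."""
    dh = h(s_next) - h(s_t)
    gain = dh.mean()              # E[ h(s_{t+1}) - h(s_t) ], to be maximized
    reg = -(dh ** 2).mean()       # T[h], equation 2
    return -(gain + lam * reg)    # minimize the negative of J_pi[h]

def train_h(states, next_states, state_dim, steps=1000, lam=1.0):
    # states, next_states: tensors of shape (num_transitions, state_dim), pre-collected
    # with uniformly random actions as discussed under "The Role of a Policy".
    h = HPotential(state_dim)
    opt = torch.optim.Adam(h.parameters(), lr=1e-3)
    for _ in range(steps):
        idx = torch.randint(0, states.shape[0], (256,))
        loss = h_potential_loss(h, states[idx], next_states[idx], lam)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return h
```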
This work proposes the h-potential, which is a solution to an objective that measures state-transition asymmetry in an MDP. Roughly speaking, in many situations some state transitions (s-->s’) are more probable than their converse (s’-->s), and if we have a function that assigns a higher value to a more probable transition (compared to its converse), then we can use it as a measure of the “reversibility” of that transition. This function can then be used, for example, as an intrinsic reward signal; indeed, there may be cases where state transitions should be avoided if they are not reversible.
SP:4e110cb77b848272f468030bfe05014d08d7b838
Learning the Arrow of Time for Problems in Reinforcement Learning
1 INTRODUCTION . The asymmetric progression of time has a profound effect on how we , as agents , perceive , process and manipulate our environment . Given a sequence of observations of our familiar surroundings ( e.g . as video frames ) , we possess the innate ability to predict whether the said observations are ordered correctly . We use this ability not just to perceive , but also to act : for instance , we know to be cautious about dropping a vase , guided by the intuition that the act of breaking a vase can not be undone . This profound intuition reflects some fundamental properties of the world in which we dwell , and in this work we ask whether and how these properties can be exploited to learn a representation that functionally mimics our understanding of the asymmetric nature of time . The term Arrow of Time was coined by the British astronomer Eddington ( 1929 ) to denote this inherent asymmetry , which he attributed to the non-decreasing nature of the total thermodynamic entropy of an isolated system , as required by the second law of thermodynamics . Since then , the notion of an arrow of time has been formalized and explored in various contexts , spanning not only physics , but also algorithmic information theory ( Zurek , 1989 ) , causal inference ( Janzing et al. , 2016 ) and time-series analysis ( Janzing , 2010 ; Bauer et al. , 2016 ) . Broadly , an arrow of time can be thought of as a function that monotonously increases as a system evolves in time . Expectedly , the notion of irreversibility plays a central role in the discourse . In statistical physics , it is posited that the arrow of time ( i.e . entropy production ) is driven by irreversible processes ( Prigogine , 1978 ; Seifert , 2012 ) . To understand how a notion of an arrow of time can be useful in the reinforcement learning context , consider the example of a cleaning robot tasked with moving a box across a room ( Amodei et al. , 2016 ) . The optimal way of successfully completing the task might involve the robot doing something disruptive , like knocking a vase over ( Fig 1 ) . Now on the one hand , such disruptions – or side-effects – might be difficult to recover from . In the extreme case , they might be virtually irreversible – say when the vase is broken . On the other hand , irreversibility implies that states with a larger number of broken vases tend to occur in the future , and one should therefore expect an arrow of time ( as a scalar function of the state ) to assign larger values to states with larger number of broken vases . An arrow of time should therefore quantify the amount of disorder in the environment , analogous to the entropy for isolated thermodynamical systems . Now , one possible application could be to detect and preempt such side-effects , for instance by penalizing policies that significantly increment the arrow of time by executing difficult-to-reverse transitions . But the utility of an arrow of time is more general : it serves as a directed measure of reachability . This can be seen by observing that it is more difficult to obtain order from disorder : it is , after all , difficult to reach a state with a vase intact from one with it broken , rather than vice versa . In this sense , we may say that a state is relatively unreachable from another state if an arrow of time assigns a lower value to the former . 
Further, a directed measure of reachability afforded by an arrow of time can be utilized to derive an intrinsic reward signal that enables agents to learn complex skills in the absence of external rewards. To see how, consider that an agent tasked with reversing the arrow of time (by creating order from disorder) must in general learn complex skills to achieve its goal. Indeed, gluing together a broken vase will require the agent to learn an array of complex planning and motor skills, which is the ultimate goal of such intrinsic rewards. In summary, our contributions are the following. (a) We propose a simple objective to learn an arrow of time for a Markov (Decision) Process in a self-supervised manner, i.e. entirely from sampled environment trajectories and without external rewards. We call the resulting function (acting on the state) the h-potential, and demonstrate its utility and caveats for a selection of discrete and continuous environments. Moreover, we compare the learned h-potential to the free-energy functional of stochastic processes – the latter being a well-known notion of an arrow of time (Jordan et al., 1998). While there exists prior work on detecting the arrow of time in videos (Pickup et al., 2014; Wei et al., 2018) and time-series data (Peters et al., 2009; Bauer et al., 2016), we believe our work to be the first towards measuring it in the context of reinforcement learning. (b) We critically and transparently discuss the conceptually rich subtleties that arise before an arrow of time can be practically useful in the RL context. (c) We expose how the notions of reachability, safety and curiosity can be unified under the common framework afforded by a learned arrow of time. 2 THE h-POTENTIAL. Motivated by the preceding discussion, our goal is to learn a function that quantifies the amount of disorder in a given environment state, where we say that irreversible state transitions increase disorder. In this sense, we seek a function (of the state) that is constant in expectation along fully reversible state transitions, but increases in expectation along state transitions that are less reversible. To that end, we begin by formally introducing this function, which we call the h-potential, as the solution to a functional optimization problem. Subsequently, we critically discuss a few conceptual roadblocks that must be cleared before such a function can be useful in the RL setting. 2.1 FORMALISM. Consider a Markov Decision Process (an MDP, i.e. an environment), and let S and A be its state and action spaces respectively. A policy π is a mapping from the state space to the space of distributions over actions. Given a state s ∈ S sampled from some initial state distribution p_0, we may sample an action a ∈ A from the policy π(a|s), which in turn can be used to sample another state s′ ∈ S from the environment dynamics p(s′|a, s). Iterating N more times for a fixed π, one obtains a sequence of states (s_0, ..., s_t, ..., s_N), which is a realization of the Markov chain (a trajectory) with transition probabilities p_π(s_{t+1}|s_t) = Σ_{a∈A} p(s_{t+1}|s_t, a) π(a|s_t). We may now define a function h_π : S → R as the solution to the following functional objective: J_π[ĥ] = E_{t∼U({0,...,N−1})} E_{s_t} E_{s_{t+1}|s_t}[ĥ(s_{t+1}) − ĥ(s_t) | s_t] + λ T[ĥ]; h_π = arg max_ĥ J_π[ĥ] (1), where U(A) is the uniform distribution over any set A, E_t E_{s_t} E_{s_{t+1}|s_t} is the expectation over all state transitions, λ is a scalar coefficient and T[ĥ] is a regularizing term that prevents ĥ from diverging within a finite domain. In words: the first term on the right-hand side of the first equation above encourages h_π to increase in expectation along the sampled trajectories, whereas the second term controls this increase; the two terms are balanced with a coefficient λ. Informally: if a state transition s → s′ is fully reversible, the probability of sampling it equals that of sampling the corresponding reverse transition, s′ → s. For such transitions, the pressure on h_π to increase along the forward transition (s → s′) is compensated by the counter-pressure for it to increase along the reverse transition (s′ → s), or equivalently, to decrease along the forward transition. Along such transitions, we should therefore expect h_π to remain constant (in expectation). Accordingly, if the forward transition were more likely (i.e. if the transition is not fully reversible), we should expect h_π to increase (in expectation) in order to satisfy its objective. The regularizer T must be chosen to suit the problem at hand, and different choices result in solutions that have different characteristics¹. Possible choices for T include (any combination of) the negative of the L2 norm, −‖ĥ‖², and/or the following trajectory regularizer: T[ĥ] = −E_{t∼U({0,...,N−1})} E_{s_t} E_{s_{t+1}|s_t}[|ĥ(s_{t+1}) − ĥ(s_t)|² | s_t] (2). Intuitively: while the solution h_π is required to increase in expectation along trajectories, the trajectory regularizer acts as a contrastive term by penalizing h_π for changing at all. With some effort, the problem defined in Eqn 1 can be approached analytically for toy Markov chains (interested readers may refer to App A for a technical discussion). However, such analytical treatment becomes infeasible for more complex and larger-scale environments with unknown transition probabilities. To tackle such environments, we cast the functional optimization problem in Eqn 1 as an optimization problem over the parameters of a deep neural network and solve it for a variety of discrete and continuous environments. 2.2 SUBTLETIES. In this section, we discuss two conceptually rich subtleties that determine the conditions under which the learned arrow of time (h-potential) can be useful in practice. The Role of a Policy. The first subtlety is rooted in the observation that the trajectories (s_0, ..., s_N) are collected by a given but arbitrary policy. However, there may exist policies for which the resulting arrow of time is unnatural, perhaps even misleading. Consider for instance the actions of a practitioner of Kintsugi, the ancient Japanese art of repairing broken pottery. The corresponding policy² might cause the environment to transition from a state where the vase is broken to one where it is not. If we learn the h-potential on such trajectories, it might be the case that, counter to our intuition, states with a larger number of broken vases are assigned smaller values (and vice versa). Now, one may choose to resolve this conundrum by defining: J[h] = E_{π∼U(Π)} J_π[h] (3), where Π is the set of all policies defined on S, and U(Π) denotes a uniform distribution over Π.
The resulting function h∗ = arg max_h {J[h] + λT[h]} would characterize the arrow of time with respect to all possible policies, and one would expect that for the vast majority of such policies the transition from a broken vase to an intact vase is rather unlikely and/or requires highly specialized policies. Unfortunately, determining h∗ is not feasible for most interesting applications, given the outer expectation over all possible policies. As a compromise, we use (uniformly) random actions to gather trajectories. The simplicity of the corresponding random policy justifies its adoption, since one would expect a policy resembling (say) a Kintsugi artist to be rather complex and not implementable with random actions. In this sense, we ensure that the learned arrow of time characterizes the underlying dynamics of the environment, and not the peculiarities of a particular agent. The price we pay is the lack of adequate exploration in complex enough environments, although this problem plagues most model-based reinforcement learning approaches (cf. Ha & Schmidhuber (2018)). In the following, we assume π to be uniformly random and use h_π interchangeably with h. Dissipative Environments. The second subtlety concerns what we require of environments in which the arrow of time is informative. To illustrate the matter, consider the class of systems typified by a billiard ball moving on a frictionless arena and bouncing (elastically) off the edges (Bunimovich, 2007). The state space comprises the ball's velocity and its position constrained to a billiard table (without holes!), where the ball is initialized at a random position on the table. For such a system, it can be seen by time-reversal symmetry that, when averaged over a large number of trajectories, the state transition s → s′ is just as likely as the reverse transition s′ → s. In this case, recall that the arrow of time is expected to remain constant. A similar argument can be made for systems that identically follow closed trajectories in their respective state space (e.g. a frictionless and undriven pendulum). It follows that the h-potential must remain constant along the trajectory and that the arrow of time is uninformative. However, for so-called dissipative systems, the notion of an arrow of time is pronounced and well studied (Willems, 1972; Prigogine, 1978). In MDPs, dissipative behaviour may arise in situations where certain transitions are irreversible by design (e.g. bricks disappearing in Atari Breakout), or due to partial observability, e.g. for a damped pendulum, the state space does not track the microscopic processes that give rise to friction. Therefore, a central premise underlying the practical utility of learning the arrow of time is that the considered MDP is indeed dissipative, which we shall assume in the following; in Sec 5 (Fig 5b), we empirically investigate the case where this assumption is violated. ¹ This is not unlike the case for linear regression: for instance, using Lasso instead of ridge regression will generally yield solutions that have different properties. ² This is analogous to Maxwell's demon in classical thermodynamics.
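To make the training procedure concrete, the following is a minimal sketch (ours, not the authors' code) of how the h-potential of Eqn 1 could be fit with the trajectory regularizer of Eqn 2, assuming PyTorch, states represented as flat vectors, and transitions (s, s′) gathered with a uniformly random policy; the names HPotential, h_potential_loss and train_h, and all hyperparameter values, are our own placeholders rather than anything specified in the paper.

import torch
import torch.nn as nn

class HPotential(nn.Module):
    # Small MLP mapping a state vector to a scalar h-value.
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s):
        return self.net(s).squeeze(-1)

def h_potential_loss(h, states, next_states, lam=1.0):
    # Negative of J_pi[h]: maximize E[h(s') - h(s)] - lam * E[(h(s') - h(s))^2].
    delta = h(next_states) - h(states)
    return -(delta.mean() - lam * (delta ** 2).mean())

def train_h(h, states, next_states, epochs=100, lr=1e-3, lam=1.0):
    # `states` and `next_states` are tensors of transitions collected with random actions.
    opt = torch.optim.Adam(h.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = h_potential_loss(h, states, next_states, lam)
        loss.backward()
        opt.step()
    return h

Under this sketch, irreversible transitions push h upward while reversible ones contribute canceling forward and backward pressure, which is the informal argument given below Eqn 1.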
This paper proposes that we learn the “arrow of time” for an MDP: that is, a function (called the h-potential) that tends to increase as the MDP steps forward. Such an arrow should automatically capture notions such as irreversibility, and so can be used to define a measure of reachability, which previous work has shown can be used to penalize the agent for causing negative side effects. In addition, it can be used as intrinsic motivation for the agent: in particular, the agent can be rewarded for trajectories that decrease the h-potential (i.e. are “like” going backwards in time, or reducing entropy), which is hard to do and should lead to interesting skills. The authors propose to learn the arrow of time by optimizing a function to grow over time along trajectories taken from a random policy. Experiments demonstrate that in simple environments the learned function has the properties we would expect it to have, given known results from physics.
SP:4e110cb77b848272f468030bfe05014d08d7b838
Improving Gradient Estimation in Evolutionary Strategies With Past Descent Directions
1 INTRODUCTION. Evolutionary Strategies (ES) (1; 2; 3) are black-box optimization techniques that estimate the gradient of some objective function with respect to the parameters by evaluating parameter perturbations in random directions. The benefits of using ES in Reinforcement Learning (RL) were exhibited in (4). ES approaches are highly parallelizable and account for robust learning, while having decent data-efficiency. Moreover, black-box optimization techniques like ES do not require propagation of gradients, are tolerant to long time horizons, and do not suffer from sparse reward distributions (4). This led to successful applications of ES in a variety of RL settings (5; 6; 7; 8). Applications of ES outside RL include, for example, meta-learning (9). In many scenarios the true gradient is impossible to compute, but surrogate gradients are available. Here, we use the term surrogate gradients for directions that are correlated with but usually not equal to the true gradient, e.g. they might be biased or unbiased approximations of the gradient. Such scenarios include models with discrete stochastic variables (10), learned models in RL like Q-learning (11), truncated backpropagation through time (12) and feedback alignment (13); see (14) for a detailed exposition. If surrogate gradients are available, it is beneficial to preferentially sample parameter perturbations from the subspace defined by these directions (14). The algorithm proposed in (14) requires knowing in advance the quality of the surrogate gradient, does not always provide a descent direction that is better than the surrogate gradient, and it remains open how to obtain such surrogate gradients in general settings. In deep learning in general, experimental evidence has established that higher-order derivatives are usually "well behaved", in which case gradients of consecutive parameter updates correlate and applying momentum speeds up convergence (15; 16; 17). These observations suggest that past update directions are promising candidates for surrogate gradients. In this work, we extend the line of research of (14). Our contribution is threefold: • First, we show theoretically how to optimally combine the surrogate gradient directions with random search directions. More precisely, our approach computes the direction of the subspace spanned by the evaluated search directions that is most aligned with the true gradient. Our gradient estimator does not need to know the quality of the surrogate gradients and always provides a descent direction that is more aligned with the true gradient than the surrogate gradient. • Second, the above properties of our gradient estimator allow us to iteratively use the last update direction as a surrogate gradient for our gradient estimator. Repeatedly using the last update direction as a surrogate gradient aggregates information about the gradient over time and results in improved gradient estimates. In order to demonstrate how the gradient estimate improves over time, we prove fast convergence to the true gradient for linear functions and show that, under simplifying assumptions, it offers an improvement over ES that depends on the Hessian for general functions. • Third, we validate experimentally that these results transfer to practice, that is, the proposed approach computes more accurate gradients than standard ES.
We observe that our algorithm considerably improves gradient estimation on the MNIST task compared to standard ES, and that it improves convergence speed and performance on the tested Roboschool reinforcement learning environments. 2 RELATED WORK. Evolutionary strategies (1; 2; 3) are black-box optimization techniques that approximate the gradient by sampling finite differences in random directions in parameter space. The promising potential of ES for the optimization of neural networks used for RL was demonstrated in (4). They showed that ES gives rise to efficient training despite the noisy gradient estimates that are generated from a much smaller number of samples than the dimensionality of parameter space. This placed ES on a prominent spot in the RL toolkit (5; 6; 7; 8). The history of descent directions was previously used to adapt the search distribution in covariance matrix adaptation ES (CMA-ES) (18). CMA-ES constructs a second-order model of the underlying objective function, samples search directions and adapts the step size according to it. However, maintaining the full covariance matrix makes the algorithm quadratic in the number of parameters, and thus impractical for high-dimensional spaces. Linear-time approximations of CMA-ES, such as diagonal approximations of the covariance matrix (19), often do not work well, in the sense that their gradient estimates do not converge to the true gradient even if the true gradient does not change over time. Our approach differs in that we simply improve the gradient estimation and then feed the gradient estimate to a first-order optimization algorithm. Our work is inspired by the line of research of (14), where surrogate gradient directions are used to improve gradient estimation by 'elongating' the search space along these directions. That approach has two shortcomings. First, the bias of the surrogate gradients needs to be known to adapt the covariance matrix. Second, once the bias of the surrogate gradient is too small, the algorithm will not find a better descent direction than the surrogate gradient. Another related area of research investigates how to use momentum for the optimization of deep neural networks. Applying different kinds of momentum has become one of the standard tools in current deep learning and has been shown to speed up learning in a very wide range of tasks (20; 16; 17). This hints that, for many problems, the higher-order terms in deep learning models are "well-behaved" and thus the gradients do not change too drastically after parameter updates. While these approaches use momentum for parameter updates, our approach can be seen as a form of momentum when sampling directions from the search space of ES. 3 GRADIENT ESTIMATION. We aim at minimizing a function f : R^n → R by steepest descent. In scenarios where the gradient ∇f does not exist or is inefficient to compute, we are interested in obtaining some estimate of the (smoothed) gradient of f that provides a good parameter update direction. 3.1 THE ES GRADIENT ESTIMATOR. ES considers the function f_σ that is obtained by Gaussian smoothing, f_σ(θ) = E_{ε∼N(0,I)}[f(θ + σε)], where σ is a parameter modulating the size of the smoothing area and N(0, I) is the n-dimensional Gaussian distribution, with 0 being the all-zero vector and I the n-dimensional identity matrix.
The gradient of f_σ with respect to the parameters θ is given by ∇f_σ = (1/σ) E_{ε∼N(0,I)}[f(θ + σε) ε], which can be sampled by a Monte Carlo estimator, see (5). Often antithetic sampling is used, as it reduces variance (5). The antithetic ES gradient estimator using P samples is given by g_ES = (1/(2σP)) Σ_{i=1}^{P} (f(θ + σε_i) − f(θ − σε_i)) ε_i (1), where the ε_i are independently sampled from N(0, I) for i ∈ {1, ..., P}. This gradient estimator has been shown to be effective in RL settings (4). 3.2 OUR ONE STEP GRADIENT ESTIMATOR. We first give some intuition before presenting our gradient estimator formally. Given one surrogate gradient direction ζ, our one-step gradient estimator applies the following sampling strategy. First, it estimates how much the gradient points into the direction of ζ by antithetically evaluating f in the direction of ζ. Second, it estimates the part of ∇f that is orthogonal to ζ by evaluating random, pairwise orthogonal search directions that are orthogonal to ζ. In this way, our estimator detects the optimal lengths of the parameter update step into both the surrogate direction and the evaluated orthogonal directions (e.g. if ζ and ∇f are parallel, the update step is parallel to ζ, and if they are orthogonal the step into direction ζ has length 0). Additionally, if the surrogate direction and the gradient are not perfectly aligned, then the gradient estimate almost surely improves over the surrogate direction due to the contribution from the evaluated directions orthogonal to ζ. In the following we define our estimator formally and prove that the estimated direction possesses the best possible alignment with the gradient that can be achieved with our sampling scheme. We assume that k pairwise orthogonal surrogate gradient directions ζ_1, ..., ζ_k are given to our estimator. Denote by R_ζ the subspace of R^n that is spanned by the ζ_i, and by R_ζ^⊥ the subspace that is orthogonal to R_ζ. Further, for vectors v and ∇f, we denote by v̂ and ∇̂f the normalized vectors v/‖v‖ and ∇f/‖∇f‖, respectively. Let ε̂_1, ..., ε̂_P be random orthogonal unit vectors from R_ζ^⊥. Then, our estimator is defined as g_⊥ = Σ_{i=1}^{k} ((f(θ + σζ̂_i) − f(θ − σζ̂_i))/(2σ)) ζ̂_i + Σ_{i=1}^{P} ((f(θ + σε̂_i) − f(θ − σε̂_i))/(2σ)) ε̂_i (2). We write ∇f = ∇f_{‖ζ} + ∇f_{⊥ζ}, where ∇f_{‖ζ} and ∇f_{⊥ζ} are the projections of ∇f on R_ζ and R_ζ^⊥, respectively. In essence, the first sum in (2) computes ∇f_{‖ζ} by assessing the quality of each surrogate gradient direction, and the second sum estimates ∇f_{⊥ζ}, similarly to an orthogonalized antithetic ES gradient estimator that samples directions from R_ζ^⊥, see (5). We remark that we require pairwise orthogonal unit directions ε̂_i for the optimality proof. Due to the orthogonality of the directions, no normalization factor like the 1/P factor in (1) is required in (2). In practice, the dimensionality n is often much larger than P. Then, sampling pairwise orthogonal unit vectors ε_i is nearly identical to sampling the ε_i from a N(0, I) distribution, because in high-dimensional space the norm of ε_i ∼ N(0, I) is highly concentrated around 1 and the cosine of two such random vectors is highly concentrated around 0. For the sake of analysis, we assume that f is differentiable and that the second-order approximation f(x + ε) ≈ f(x) + ⟨ε, ∇f(x)⟩ + ½ εᵀH(x)ε, where H(x) denotes the Hessian matrix of f at x, is exact.
This assumption implies that (f(θ + σε̂) − f(θ − σε̂))/(2σ) = ⟨∇f(θ), ε̂⟩ (3), because the even terms cancel under antithetic sampling. The following proposition and theorems provide theoretical understanding of how our scheme improves gradient estimation in this smooth, noise-free setting. In the following, we will omit the θ in ∇f(θ). Our first proposition states that g_⊥ computes the direction in the subspace spanned by ζ_1, ..., ζ_k, ε_1, ..., ε_P that is most aligned with ∇f. Proposition 1 (Optimality of g_⊥). Let ζ_1, ..., ζ_k, ε_1, ..., ε_P be pairwise orthogonal vectors in R^n. Then, g_⊥ = Σ_{i=1}^{k} ⟨∇f, ζ̂_i⟩ ζ̂_i + Σ_{i=1}^{P} ⟨∇f, ε̂_i⟩ ε̂_i computes the projection of ∇f on the subspace spanned by ζ_1, ..., ζ_k, ε_1, ..., ε_P. In particular, ε = g_⊥ is the vector ε of that subspace that maximizes the cosine ⟨∇̂f, ε̂⟩ between ∇f and ε. Moreover, the squared cosine between g_⊥ and ∇f is given by ⟨∇̂f, ĝ_⊥⟩² = Σ_{i=1}^{k} ⟨∇̂f, ζ̂_i⟩² + Σ_{i=1}^{P} ⟨∇̂f, ε̂_i⟩² (4). We remark that when evaluating ⟨∇f, v_i⟩ for arbitrary directions v_i, no information about search directions orthogonal to the subspace spanned by the v_i is obtained. Therefore, one can only hope to find the best approximation of ∇f lying within the subspace spanned by the v_i, which is accomplished by g_⊥. The proof of Proposition 1 follows easily from the Cauchy-Schwarz inequality and is given in the appendix.
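As a concrete illustration of Eq. (1) and Eq. (2), the following is a small NumPy sketch (ours, not the authors' code); the function names es_gradient and g_perp, the toy objective, and all hyperparameter values are our own, and we assume f is a cheap-to-evaluate scalar function of a parameter vector.

import numpy as np

def es_gradient(f, theta, sigma=0.1, P=10, rng=None):
    # Antithetic ES gradient estimate, Eq. (1).
    rng = rng or np.random.default_rng(0)
    n = theta.shape[0]
    g = np.zeros(n)
    for _ in range(P):
        eps = rng.standard_normal(n)
        g += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    return g / (2 * sigma * P)

def g_perp(f, theta, zetas, sigma=0.1, P=10, rng=None):
    # Estimator of Eq. (2): antithetic evaluations along the orthonormalized surrogate
    # directions `zetas`, plus P random unit directions drawn from their orthogonal
    # complement and orthogonalized against each other.
    rng = rng or np.random.default_rng(0)
    n = theta.shape[0]
    B = np.linalg.qr(np.stack(zetas, axis=1))[0]   # n x k orthonormal basis of R_zeta
    directions = [B[:, i] for i in range(B.shape[1])]
    for _ in range(P):
        eps = rng.standard_normal(n)
        eps -= B @ (B.T @ eps)                     # project out all directions used so far
        eps /= np.linalg.norm(eps)
        directions.append(eps)
        B = np.column_stack([B, eps])
    g = np.zeros(n)
    for d in directions:
        g += (f(theta + sigma * d) - f(theta - sigma * d)) / (2 * sigma) * d
    return g

# Toy usage: quadratic objective, reusing the previous ES estimate as the surrogate.
f = lambda w: 0.5 * float(np.sum(w ** 2))
theta = np.ones(50)
surrogate = es_gradient(f, theta)
estimate = g_perp(f, theta, [surrogate])

Reusing the previous update direction as the surrogate, as in this toy usage, mimics the iterative scheme described in the contributions above; on this quadratic the estimate aligns with the true gradient θ up to sampling noise.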
This paper addresses the issue of noisy gradient estimation in the type of evolution strategies popularized by OpenAI's reinforcement learning paper. It is a follow-up to reference [14] and tries to analyze the optimality of the gradient estimation. The goal of the paper is well stated and well motivated, and the paper itself is well organized. However, the novelty of this work is not sufficiently high and its usefulness is questionable.
SP:ce211e46a1eac8bd3e35ccc30621bfdd53ba9a82
This paper provides a new type of gradient estimator that combines an Evolutionary Strategies (ES) style estimate (using function evaluations at perturbed parameters) along with surrogate gradient estimates (gradient estimates that may be biased and/or high variance). The estimator involves computing antithetic ES estimates in two subspaces: along the set of (normalized) surrogate gradients, and along a set of randomly chosen vectors in the orthogonal complement of the span of the surrogate gradients. The paper provides a proof of the optimality of the estimate, that is, the proposed gradient estimate maximizes the cosine of the angle with the true gradient over the vectors in the subspace defined by the set of surrogate gradients and sampled directions. The paper proposes an additional mechanism for generating surrogate gradients by simply using previous gradient estimates as surrogate gradients, and derives a convergence rate for when this iterative estimator will approximate a fixed, true gradient (e.g. for linear functions). Finally, the paper applies the estimate to two tasks: MNIST classification and robotic control via reinforcement learning, demonstrating improvements on both compared to standard ES.
SP:ce211e46a1eac8bd3e35ccc30621bfdd53ba9a82
Where is the Information in a Deep Network?
1 INTRODUCTION. At the end of training a deep neural network, all that is left of past experience is a set of values stored in its weights. So, studying what “information” they contain seems like a natural starting point to understand how deep networks learn. But how is the information in a deep neural network even defined? The weights are not a random variable, and the network outputs a deterministic function of its input, with degenerate (infinite) Shannon Mutual Information between the two. This presents a challenge for theories of Deep Learning based on Shannon Information (Saxe et al., 2018). Several frameworks have been developed to reason about information in fixed sets of values, for instance by Fisher and Kolmogorov, but they either do not relate directly to relevant concepts in Deep Learning, such as generalization and invariance, or cannot be estimated in practice for modern deep neural networks (DNNs). Beyond how they define information, existing theories of Deep Learning are limited by whose information they address: most approaches focus on the information of the activations of the network – the output of its layers – rather than their parameters, or weights, although recent information-theoretic approaches to study the weights are discussed in the next section. The weights are a representation of past data (the training set of inputs and outputs), trained for predicting statistics of the training set itself (e.g., the output), relative to prior knowledge. The activations are a representation of (possibly unseen) future inputs (test set), ideally sufficient to predict future outputs, and invariant to nuisance variability in the data that should not affect the output. We have no access to future data, and the Shannon Information their representation contains does not account for the finite training set, hence missing a link to generalization. But how are these properties of sufficiency and invariance achieved through the training process? Sufficiency alone is trivial — any invertible function of the data is, in theory, sufficient — but it comes at the expense of complexity¹ (or minimality) and invariance of the representation. Invariance alone is similarly trivial – any constant function is invariant. A learning criterion therefore must trade off accuracy, complexity and invariance. The best achievable complexity trade-off is what we define as Information for the task. The challenge is that we wish to characterize sufficiency and invariance of representations of the test data, while we only have access to the training set. So, throughout this paper, we discuss four distinct concepts: (1) sufficiency of the weights, captured by a training loss (e.g., empirical cross-entropy); (2) complexity and minimality of the weights, captured by the information they contain; (3) sufficiency of the activations, captured by the test loss, which we cannot compute, but can bound using the Information in the Weights; (4) invariance of the activations, a property of the test data, which is not explicitly present in the formulation of the learning process when training a deep neural network. To do all that, we first need to formally define both the information of the weights and that of the activations. ¹ In this paper, we refer to complexity as information complexity, to be distinguished from the complexity of the hypothesis space, for instance as measured by the VC Dimension. 1.1 SUMMARY OF CONTRIBUTIONS AND RELATED WORK.
Our first contribution is to measure the Information in the Weights of a deep neural network as the trade-off between the amount of noise we could add to the weights (measured by its entropy relative to a prior) and the performance the network would achieve in the task at hand. Informally, given an encoding algorithm, this is the number of bits needed to encode the weights in order to solve the task at some level of precision, as is customary in Rate-Distortion Theory. The optimal trade-off traces a curve that depends on the task and the architecture, and solutions along the curve can be found by optimizing an Information Lagrangian. The Information Lagrangian is in the general form of an Information Bottleneck (IB) (Tishby et al., 1999), but is fundamentally different from the IB used in most prior work in deep learning (Tishby & Zaslavsky, 2015), which refers to the activations, rather than the weights. Our measure of information is practical even in large-scale networks with millions of parameters, and retains the dependency on the number of samples in the training set. Our second contribution is to derive a relation between the two informations (Section 4), where we show that the Information Lagrangian of the weights of deep networks bounds the Information Bottleneck of the activations, but not vice-versa. This is important, as the IB of the activations is degenerate when computed on the training set, hence cannot be used at training time to enforce properties. On the other hand, the Information Lagrangian of the weights remains well defined, and through our bound it controls invariance at test time. Our method requires specifying a parametrized noise distribution, as well as a prior, to measure information. While this may seem undesirable, we believe it is essential and key to the flexibility of the method, as it allows us to compute concrete quantities, tailored to DNNs, that relate generalization and invariance in novel ways. Of all possible choices of noise and prior to compute the Information in the Weights, there are a few standard ones: an uninformative prior yields the Fisher Information of the weights; a prior obtained by averaging training over all relevant datasets yields the Shannon mutual information between the dataset (now a random variable) and the weights; a third important choice is the noise distribution induced by stochastic gradient descent (SGD) during the training process, which we discuss in the paper. As it turns out, all three resulting notions of information are important to understand learning in deep networks: Shannon's relates closely to generalization, via the PAC-Bayes bound (Section 3.1); Fisher's relates closely to invariance in the representation of test data (activations), as we show in Section 4. The noise distribution of SGD is what connects the two, and establishes the link between invariance and generalization. Although it is possible to minimize Fisher or Shannon Information independently, we show that when the weights are learned using SGD, the two are related. This is our third contribution, which is made possible by the flexibility of our framework (Section 3.3). Finally, in Section 5 we discuss open problems and further relations with prior work. There is a growing literature on information and generalization bounds for the weights of deep networks (Xu & Raginsky, 2017; Pensia et al., 2018).
Given a data generating distribution D ∼ µ(x, y), a training algorithm w = A(D) is said to be (ε, µ)-information stable if I(w; D) < ε. The generalization gap of a training algorithm can then be bounded in terms of its information stability. Indeed, the quantity I(w; D) is related to our definition of information in the weights (Section 3.2), but we emphasize that our general definition of the amount of information in the weights extends to the case where both the dataset D and w are assumed to be given and fixed (as is often the case in Deep Learning), and not resampled every time. First, some of our main results are to prove that convergence to flat minima (low Fisher Information) and “path” stability of SGD (Hardt et al., 2015) imply “information” stability in the sense of Xu & Raginsky (2017) (Proposition 3.7). Unlike those works, our bound does not depend solely on the noise induced by the steps of SGD, but also on the geometry of the loss landscape, which allows us to better capture some fundamental properties of Deep Learning. Second, while those works only bound the generalization performance on the training task, we connect information stability with the amount of information in the activations of a DNN and invariance to nuisances (Section 4). This can be used to guarantee the quality of the learned representation in a transfer learning setting. Note that the fact that some weights can be perturbed at little loss has been known empirically for a while (Hinton & Van Camp, 1993). We exploit this property to define information for a particular set of weights, in a manner that is quite distinct from standard PAC-Bayes, using Fisher's information instead. This paper takes Achille et al. (2019) as the starting point of investigation, attempting to measure the quantities at play more accurately, which leads us beyond Shannon's formalism to a more general setting that also includes Fisher's formalism and the relation between the two, mediated by the properties of deep neural networks and SGD. All the (information) quantities we measure are specific to a particular weight vector, not its distribution. 2 PRELIMINARIES AND NOTATION. We denote with x ∈ X an input (e.g., an image), and with y ∈ Y a “task variable,” a random variable which we are trying to infer, e.g., a label Y = {1, ..., C}. A dataset is a finite collection of samples D = {(x_i, y_i)}_{i=1}^{N} that specify the task. A DNN model trained with the cross-entropy loss encodes a conditional distribution p_w(y|x), parametrized by the weights w, meant to approximate the posterior of the task variable y given the input x. The Kullback-Leibler, or KL-divergence, is the relative entropy between p(x) and q(x): KL(p(x) ‖ q(x)) = E_{x∼p(x)}[log(p(x)/q(x))]. It is always non-negative, and zero if and only if p(x) = q(x). It measures the (asymmetric) similarity between two distributions. Given a family of conditional distributions p_w(y|x) parametrized by a vector w, we can ask how much perturbing the parameter w by a small amount δw will change the distribution, as measured by the KL-divergence. To second order, this is given by E_x KL(p_w(y|x) ‖ p_{w+δw}(y|x)) = δwᵀ F δw + o(‖δw‖²), where F is the Fisher Information Matrix (or simply “Fisher”), defined by F = E_{x∼p(x), y∼p_w(y|x)}[∇ log p_w(y|x) ∇ log p_w(y|x)ᵀ] = E_{x∼p(x), y∼p_w(y|x)}[−∇²_w log p_w(y|x)].
For its relevant properties see Martens (2014). It is important to notice that the Fisher depends on the ground-truth data distribution p(x, y) only through the domain variable x, not the task variable y, since y ∼ p_w(y|x) is sampled from the model distribution when computing the Fisher. This property will be used later. Given two random variables x and z, their Shannon mutual information is defined as I(x; z) = E_{x∼p(x)}[KL(p(z|x) ‖ p(z))], that is, the expected divergence between the distribution of z after an observation of x and the prior distribution of z. It is non-negative, symmetric, zero if and only if the variables are independent (Cover & Thomas, 2012), and measured in Nats when using the natural logarithm. In supervised classification one is usually interested in finding weights w that minimize the cross-entropy loss L_D(w) = E_{(x,y)∼D}[−log p_w(y|x)] on the training set D. The loss L_D(w) is usually minimized using stochastic gradient descent (SGD), which updates the weights w with an estimate of the gradient computed from a small number of samples (mini-batch). That is, w_{k+1} = w_k − η ∇L̂_{ξ_k}(w_k), where ξ_k are the indices of a randomly sampled mini-batch and L̂_{ξ_k}(w) = (1/|ξ_k|) Σ_{i∈ξ_k} [−log p_w(y_i|x_i)]. Notice that E_{ξ_k}[∇L̂_{ξ_k}(w)] = ∇L_D(w), so we can think of the mini-batch gradient ∇L̂_{ξ_k}(w) as a noisy version of the real gradient. Using this intuition we can write: w_{k+1} = w_k − η ∇L_D(w_k) + √η T_{ξ_k}(w_k) (1), with the induced “noise” term T_{ξ_k}(w) = √η (∇L̂_{ξ_k}(w) − ∇L_D(w)). Written in this form, eq. (1) is a Langevin diffusion process with (non-isotropic) noise T_{ξ_k} (Li et al., 2017; Chaudhari & Soatto, 2018).
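To ground the Fisher definition above, here is a small NumPy sketch (our own illustration, not code from the paper) that estimates F for a linear softmax classifier; the key detail it encodes is that the label y is sampled from the model's own predictive distribution p_w(y|x) rather than taken from the data. The function names and toy dimensions are ours.

import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def grad_log_p(W, x, y):
    # Gradient of log p_W(y|x) w.r.t. the flattened weights of a C x d linear model.
    p = np.exp(log_softmax(W @ x))     # predictive distribution over the C classes
    G = -np.outer(p, x)                # d log p_y / d W, all rows
    G[y] += x                          # extra term for the sampled class y
    return G.ravel()

def empirical_fisher(W, X, n_y_samples=10, rng=None):
    rng = rng or np.random.default_rng(0)
    d = W.size
    F = np.zeros((d, d))
    for x in X:                        # expectation over inputs x ~ p(x)
        p = np.exp(log_softmax(W @ x))
        for _ in range(n_y_samples):   # expectation over y ~ p_W(y|x), not the data labels
            y = rng.choice(len(p), p=p)
            g = grad_log_p(W, x, y)
            F += np.outer(g, g)
    return F / (len(X) * n_y_samples)

# Toy usage: 3-class classifier on 5-dimensional inputs.
rng = np.random.default_rng(1)
W = rng.standard_normal((3, 5)) * 0.1
X = rng.standard_normal((20, 5))
F = empirical_fisher(W, X)
print(F.shape)   # (15, 15)

For a large network one would typically keep only the diagonal (or another structured approximation) of such an estimate, since the full Fisher is quadratic in the number of weights.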
This paper presents a theoretical account of information encoded within deep neural networks subject to information theoretic measures. In contrast to other efforts that examine information encoded in weights, this work emphasizes the effective information in the activations. This characterization is further related to information in the weights, and a theoretical justification is made for what this means with respect to properties of generalization and invariance in the network.
SP:9da6cd132a934387f69fe759dbe5b1d2853242c5
Where is the Information in a Deep Network?
1 INTRODUCTION . At the end of training a deep neural network , all that is left of past experience is a set of values stored in its weights . So , studying what “ information ” they contain seems like a natural starting point to understand how deep networks learn . But how is the information in a deep neural network even defined ? The weights are not a random variable , and the network outputs a deterministic function of its input , with degenerate ( infinite ) Shannon Mutual Information between the two . This presents a challenge for theories of Deep Learning based on Shannon Information ( Saxe et al. , 2018 ) . Several frameworks have been developed to reason about information in fixed sets of values , for instance by Fisher and Kolmogorov , but they either do not relate directly to relevant concepts in Deep Learning , such as generalization and invariance , or can not be estimated in practice for modern deep neural networks ( DNNs ) . Beyond how they define information , existing theories of Deep Learning are limited by whose information they address : Most approaches focus on information of the activations of the network – the output of its layers – rather than their parameters , or weights , although recent information-theoretic approaches to study the weights are discussed in the next section . The weights are a representation of past data ( the training set of inputs and outputs ) , trained for predicting statistics of the training set itself ( e.g. , the output ) , relative to prior knowledge . The activations are a representation of ( possibly unseen ) future inputs ( test set ) , ideally sufficient to predict future outputs , and invariant to nuisance variability in the data that should not affect the output . We have no access to future data , and the Shannon Information their representation contains does not account for the finite training set , hence missing a link to generalization . But how are these properties of sufficiency and invariance achieved through the training process ? Sufficiency alone is trivial — any invertible function of the data is , in theory , sufficient — but it comes at the expense of complexity1 ( or minimality ) and invariance of the representation . Invariance alone is similarly trivial – any constant function is invariant . A learning criterion therefore must trade off accuracy , complexity and invariance . The best achievable complexity trade-off is what we define 1In this paper , we refer to complexity as information complexity , to be distinguished from complexity of the hypothesis space , for instance measured by the VC Dimension . as Information for the task . The challenge is that we wish to characterize sufficiency and invariance of representations of the test data , while we only have access to the training set . So , throughout this paper , we discuss four distinct concepts : ( 1 ) Sufficiency of the weights , captured by a training loss ( e.g. , empirical cross-entropy ) ; ( 2 ) complexity and minimality of the weights , captured by the information they contain ; ( 3 ) sufficiency of the activations , captured by the test loss which we can not compute , but can bound using the Information in the Weights ; ( 4 ) invariance of the activations , a property of the test data , which is not explicitly present in the formulation of the learning process when training a deep neural network . To do all that , we first need to formally define both information of the weights and of the activations . 1.1 SUMMARY OF CONTRIBUTIONS AND RELATED WORK . 
Our first contribution is to measure the Information in the Weights of a deep neural network as the trade-off between the amount of noise we could add to the weights ( measured by its entropy relative to a prior ) , and the performance the network would achieve in the task at hand . Informally , given an encoding algorithm , this is the number of bits needed to encode the weights in order to solve the task at some level of precision , as customary in Rate-Distortion Theory . The optimal trade-off traces a curve that depends on the task and the architecture , and solutions along the curve can be found by optimizing an Information Lagrangian . The Information Lagrangian is in the general form of an Information Bottleneck ( IB ) ( Tishby et al. , 1999 ) , but is fundamentally different from the IB used in most prior work in deep learning ( Tishby & Zaslavsky , 2015 ) , which refers to the activations , rather than the weights . Our measure of information is practical even in large-scale networks , with millions of parameters , and retains the dependency on the number of samples in the training set . Our second contribution is to derive a relation between the two informations ( Section 4 ) , where we show that the Information Lagrangian of the weights of deep networks bounds the Information Bottleneck of the activations , but not vice-versa . This is important , as the IB of the activations is degenerate when computed on the training set , hence can not be used at training time to enforce properties . On the other hand , the Information Lagrangian of the weights remains well defined , and through our bound it controls invariance at test time . Our method requires specifying a parametrized noise distribution , as well as a prior , to measure information . While this may seem undesirable , we believe it is essential and key to the flexibility of the method , as it allows us to compute concrete quantities , tailored to DNNs , that relate generalization and invariance in novel ways . Of all possible choices of noise and prior to compute the Information in the Weights , there are a few standard ones : An uninformative prior yields the Fisher Information of the weights . A prior obtained by averaging training over all relevant datasets yields the Shannon mutual information between the dataset ( now a random variable ) and the weights . A third important choice is the noise distribution induced by stochastic gradient descent ( SGD ) during the training process , which we discuss in the paper . As it turns out , all three resulting notions of information are important to understand learning in deep networks : Shannon ’ s relates closely to generalization , via the PAC-Bayes Bound ( Section 3.1 ) . Fisher ’ s relates closely to invariance in the representation of test data ( activations ) as we show in Section 4 . The noise distribution of SGD is what connects the two , and establishes the link between invariance and generalization . Although it is possible to minimize Fisher or Shannon Information independently , we show that when the weights are learned using SGD , the two are related . This is our third contribution , which is made possible by the flexibility of our framework ( Section 3.3 ) . Finally , in Section 5 we discuss open problems and further relations with prior work . There is a growing literature on information and generalization bounds for the weights of deep networks ( Xu & Raginsky , 2017 ; Pensia et al. , 2018 ) . 
Given a data generating distribution D ∼ µ ( x , y ) , a training algorithm w = A ( D ) is said to be ( , µ ) -information stable if I ( w ; D ) < . The generalization gap of a training algorithm can then be bounded in terms of its information stability . Indeed , the quantity I ( w ; D ) is related to our definition of information in the weights ( Section 3.2 ) , but we emphasize that our general definition of amount of information in the weights extends to the case where both the dataset D and w are assumed to be given and fixed ( as it is often common in Deep Learning ) , and not resampled every time . First , some of our main results are to prove that convergence to flat minima ( low Fisher Information ) and “ path ” stability of SGD ( Hardt et al. , 2015 ) imply “ information ” stability in the sense of Xu & Raginsky ( 2017 ) ( Proposition 3.7 ) . Unlike those works , our bound does not depend solely on the noise induced by the steps of SGD , but also the geometry of the loss lanscape , which allows to better capture some fundamental properties of Deep Learning . Second , while those work only bound the generalization performance on the training task , we connect the information stability with amount of information in the activations of a DNN and invariance to nuisances ( Section 4 ) . This this can be used to guarantee the quality of the learned representation in a transfer learning setting . Note that the fact that some weights can be perturbed at little loss has been known empirically for a while ( Hinton & Van Camp , 1993 ) . We exploit this property to define information for a particular set of weights , in a manner that is quite distinct from standard PAC-Bayes , using Fisher ’ s information instead . This paper takes Achille et al . ( 2019 ) as the starting point of investigation , attempting to measure the quantities at play more accurately , which leads us beyond Shannon ’ s formalism , to a more general setting that also includes Fisher ’ s formalism and the relation between the two , mediated by the properties of deep neural networks and SGD . All the ( information ) quantities we measure are specific to a particular weight vector , not its distribution . 2 PRELIMINARIES AND NOTATION . We denote with x ∈ X an input ( e.g. , an image ) , and with y ∈ Y a “ task variable , ” a random variable which we are trying to infer , e.g. , a label Y = { 1 , . . . , C } . A dataset is a finite collection of samples D = { ( xi , yi ) } Ni=1 that specify the task . A DNN model trained with the cross-entropy loss encodes a conditional distribution pw ( y|x ) , parametrized by the weights w , meant to approximate the posterior of the task variable y given the input x . The Kullbach-Liebler , or KL-divergence , is the relative entropy between p ( x ) and q ( x ) : KL ( p ( x ) ‖ q ( x ) ) = Ex∼p ( x ) [ log ( p ( x ) /q ( x ) ) ] . It is always non-negative , and zero if and only if p ( x ) = q ( x ) . It measures the ( asymmetric ) similarity between two distributions . Given a family of conditional distributions pw ( y|x ) parametrized by a vector w , we can ask how much perturbing the parameter w by a small amount δw will change the distribution , as measured by the KL-divergence . To second-order , this is given by Ex KL ( pw ( y|x ) ‖ pw+δw ( y|x ) ) = δwtFδw + o ( ‖δw‖2 ) where F is the Fisher Information Matrix ( or simply “ Fisher ” ) , defined by F = Ex , y∼p ( x ) pw ( y|x ) [ ∇ log pw ( y|x ) t∇ log pw ( y|x ) ] = Ex∼p ( x ) pw ( y|x ) [ −∇2w log pw ( y|x ) ] . 
For its relevant properties see Martens ( 2014 ) . It is important to notice that the Fisher depends on the ground-truth data distribution p ( x , y ) only through the domain variable x , not the task variable y , since y ∼ p_w ( y|x ) is sampled from the model distribution when computing the Fisher . This property will be used later . Given two random variables x and z , their Shannon mutual information is defined as I ( x ; z ) = E_{x∼p ( x ) } [ KL ( p ( z|x ) ‖ p ( z ) ) ] , that is , the expected divergence between the distribution of z after an observation of x and the prior distribution of z . It is non-negative , symmetric , zero if and only if the variables are independent ( Cover & Thomas , 2012 ) , and measured in nats when using the natural logarithm . In supervised classification one is usually interested in finding weights w that minimize the cross-entropy loss L_D ( w ) = E_{ ( x , y ) ∼D} [ − log p_w ( y|x ) ] on the training set D. The loss L_D ( w ) is usually minimized using stochastic gradient descent ( SGD ) , which updates the weights w with an estimate of the gradient computed from a small number of samples ( a mini-batch ) . That is , w_{k+1} = w_k − η ∇L̂_{ξ_k} ( w_k ) , where ξ_k are the indices of a randomly sampled mini-batch and L̂_{ξ_k} ( w ) = ( 1/|ξ_k| ) ∑_{i∈ξ_k} [ − log p_w ( y_i|x_i ) ] . Notice that E_{ξ_k} [ ∇L̂_{ξ_k} ( w ) ] = ∇L_D ( w ) , so we can think of the mini-batch gradient ∇L̂_{ξ_k} ( w ) as a noisy version of the real gradient . Using this intuition we can write : w_{k+1} = w_k − η ∇L_D ( w_k ) + √η T_{ξ_k} ( w_k ) ( 1 ) with the induced “ noise ” term T_{ξ_k} ( w ) = √η ( ∇L̂_{ξ_k} ( w ) − ∇L_D ( w ) ) . Written in this form , eq . ( 1 ) is a Langevin diffusion process with ( non-isotropic ) noise T_{ξ_k} ( Li et al. , 2017 ; Chaudhari & Soatto , 2018 ) .
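The decomposition in eq. (1) can be checked numerically. Below is a small self-contained sketch (ours) on a toy least-squares problem: the mini-batch SGD step is rewritten as a full-gradient step plus the induced noise term T_{ξ_k}(w), and the two updates coincide.

import torch

# Toy least-squares model: L_D(w) = (1/N) Σ_i (x_i·w − y_i)^2, so gradients are cheap to form.
torch.manual_seed(0)
N, d, eta, batch = 512, 10, 0.1, 32
X, w_true = torch.randn(N, d), torch.randn(d)
y = X @ w_true + 0.1 * torch.randn(N)
w = torch.zeros(d)

def grad(idx):
    # Gradient of the mean squared error restricted to the samples in idx.
    e = X[idx] @ w - y[idx]
    return 2 * X[idx].t() @ e / len(idx)

xi = torch.randperm(N)[:batch]                        # mini-batch indices ξ_k
g_full, g_batch = grad(torch.arange(N)), grad(xi)
T = eta ** 0.5 * (g_batch - g_full)                   # induced noise term T_{ξ_k}(w), zero-mean over ξ_k

step_sgd = w - eta * g_batch                          # ordinary mini-batch SGD step
step_decomposed = w - eta * g_full + eta ** 0.5 * T   # eq. (1): full-gradient step plus noise
print(torch.allclose(step_sgd, step_decomposed, atol=1e-6))  # True: the two updates are identical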
The paper deals with where the information is in a deep network and how information is propagated when new data points are observed. The authors measure information in the weights of a DNN as the trade-off between network accuracy and weight complexity. They bring out the relationships between Shannon MI and Fisher Information and the connections to the PAC-Bayes bound and invariance. The main result is that models with low information in the weights generalize better and yield representations that are invariant to nuisances.
SP:9da6cd132a934387f69fe759dbe5b1d2853242c5
A Simple Dynamic Learning Rate Tuning Algorithm For Automated Training of DNNs
1 INTRODUCTION . Deep architectures are generally trained by minimizing a non-convex loss function via an underlying optimization algorithm such as stochastic gradient descent or its variants . It takes a fairly large amount of time to find the best suited optimization algorithm and its optimal hyperparameters ( such as learning rate , batch size etc . ) for training a model to the desired accuracy , this being a major challenge for academicians and industry practitioners alike . Usually , such tuning is done by initial configuration optimization through grid search or random search ( Bergstra et al. , 2011 ; Snoek et al. , 2012 ; Thornton et al. ) . Recent works have also formulated it as a bandit problem ( Li et al. , 2017 ) . However , it has been widely demonstrated that hyperparameters , especially the learning rate , often need to be dynamically adjusted as the training progresses , irrespective of the initial choice of configuration . If not adjusted dynamically , the training might get stuck in a bad minimum , and no amount of training time can recover it . In this work , we focus on the learning rate , which is the foremost hyperparameter that one seeks to tune when training a deep learning model to get favourable results . Certain auto-tuning and adaptive variants of SGD , such as AdaGrad ( Duchi et al. , 2011 ) , Adadelta ( Zeiler , 2012 ) , RMSProp ( Tieleman & Hinton , 2012 ) and Adam ( Kingma & Ba , 2015 ) among others have been proposed that automatically adjust the learning rate as the training progresses , using functions of the gradient . Yet others have proposed fixed learning rate and/or batch size change regimes ( Goyal et al. , 2017 ; Smith et al. , 2018 ) for certain dataset and model combinations . In addition to traditional natural learning tasks where a good LR regime might already be known from past experiments , adversarial training for generating robust models is gaining a lot of popularity of late . In these cases , tuning the LR would generally require multiple time-consuming experiments , since the LR regime is unlikely to be known for every attack for every model and dataset of interest ( for example , one can see a piecewise LR schedule given by Madry et al . ( 2018 ) at https://github.com/MadryLab/cifar10_challenge/blob/master/config.json for a particular model ) . Moreover , new models are surfacing every day courtesy of state-of-the-art model synthesis systems , and new datasets are also becoming available quite often in different domains such as healthcare , the automobile industry etc . In each of these cases , no prior LR regime would be known , and considerable manual tuning would be required in the absence of a universal method with demonstrated effectiveness over a wide range of tasks , models and datasets . Wilson et al . ( 2017 ) observed that solutions found by existing adaptive methods often generalize worse than those found by non-adaptive methods . Even though adaptive methods might initially display faster progress on the training set , their performance quickly plateaus on the test set , and learning rate tuning is required to improve the generalization performance of these methods . For the case of SGD with Momentum , learning rate ( LR ) step decay is very popular ( Goyal et al. , 2017 ; Huang et al. , 2017 ) , as is ReduceLROnPlateau ( see , e.g. , https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ReduceLROnPlateau ) .
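For reference, the step-decay and ReduceLROnPlateau-style schedules mentioned above can be set up as follows; this is a minimal illustrative PyTorch sketch with a toy model and synthetic data, not part of the paper.

import torch
import torch.nn as nn

# Tiny stand-in classifier and synthetic data, just to make the schedules concrete.
model = nn.Linear(20, 5)
X, y = torch.randn(512, 20), torch.randint(0, 5, (512,))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

# Hand-tuned step decay: divide the LR by 10 at fixed epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)
# Alternatively, a loss-driven rule (use this instead of the line above):
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=10)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # for ReduceLROnPlateau, call scheduler.step(loss.item()) instead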
However , at certain junctures of training , increasing the LR can potentially lead to a quick , further exploration of the loss landscape and help the training escape a sharp minimum ( which has poor generalisation ; Keskar et al. , 2016 ) . Further , recent works have shown that the distance traveled by the model in the parameter space determines how far the training is from convergence ( Hoffer et al. , 2017 ) . This inspires the idea that increasing the LR to take bigger steps in the loss landscape , while maintaining numerical stability , might help in better generalization . The idea of increasing and decreasing the LR periodically during training has been demonstrated by Smith ( 2017 ) ; Smith & Topin ( 2017 ) in their cyclical learning rate method ( CLR ) . This has also been shown by Loshchilov & Hutter ( 2016 ) in Stochastic Gradient Descent with Warm Restarts ( SGDR , popularly referred to as Cosine Annealing with Warm Restarts ) . In CLR , the LR is varied periodically in a linear manner between a maximum and a minimum value , and it is shown empirically that such an increase of the learning rate is overall beneficial to the training compared to fixed schedules . In SGDR , the training periodically restarts from an initial learning rate , and then decreases to a minimum learning rate through a cosine schedule of LR decay . The period typically increases in powers of 2 . The authors suggest optimizing the initial LR and minimum LR for good performance . Schaul et al . ( 2013 ) suggested an adaptive learning rate schedule that allows the learning rate to increase when the signal is non-stationary and the underlying distribution changes . This is a computationally heavy method , requiring computing the Hessian in an online manner . Recently , there has been some work that explores gradients in different forms for hyperparameter optimization . Maclaurin et al . ( 2015 ) suggest an approach by which they exactly reverse SGD with momentum to compute gradients with respect to all continuous learning parameters ( referred to as hypergradients ) ; this is then propagated through an inner optimization . Baydin et al . ( 2018 ) suggest a dynamic LR-tuning approach , namely hypergradient descent , which applies gradient-based updates to the learning rate at each iteration in an online fashion . We propose a new algorithm to automatically determine the learning rate for a deep learning job in an autonomous manner that simply compares the current training loss with the best observed thus far to adapt the LR . The proposed algorithm works across multiple datasets and models for different tasks such as natural as well as adversarial training . It is an ‘ optimistic ’ method , in the sense that it increases the LR to as high as possible by examining the training loss repeatedly . We show through rigorous experimentation that in spite of its simplicity , the proposed algorithm performs surprisingly well compared to the state-of-the-art . Our contributions : • We propose a novel and simple algorithmic approach for autonomous , adaptive learning rate determination that does not require any manual tuning , inspection , or pre-experimental discovery of the algorithmic parameters . • Our proposed algorithm works across datasets and models with no customization and reaches higher or comparable accuracy to standard baselines in the literature in the same number of epochs on each of these datasets and models .
It consistently performs well , finding stable minima with good generalization , and converges smoothly . • Our algorithm works very well for the adversarial learning scenario along with natural training , as demonstrated across different models and datasets . • We provide extensive empirical validation of our algorithm and a convergence discussion . 2 PROPOSED METHOD . We propose an autonomous , adaptive LR tuning algorithm ( Algorithm 1 ) for determining the LR trajectory during the course of training . It operates in two phases : Phase 1 : Initial LR exploration , which strives to find a good starting LR ; Phase 2 : Optimistic Binary Exploration . The pseudocode is provided in Algorithm 1 . For the rest of the paper , we refer to the Automated Adaptive Learning Rate tuning algorithm as AALR in short .
Algorithm 1 Automated Adaptive Learning Rate Tuning Algorithm ( AALR ) for Training DNNs
Require : Model θ , N training samples ( xi , yi ) , i = 1 , . . . , N , optimizer SGD , momentum = 0.9 , weight decay , batch size , number of epochs T , loss function J ( θ ) . Initial LR η0 = 0.1 .
Ensure : Learning rate ηt at every epoch t.
1 : Initialize : θ , SGD with LR = η0 , best loss L∗ ← J ( θ ) ( forward pass through initial model ) .
2 : PHASE 1 : Start Initial LR Exploration .
3 : Set patience p ← 10 , patience counter i ← 0 , epoch number t ← 0 .
4 : while i < p do
5 :     Evaluate new loss L ← J ( θ ) after training for an epoch . Increment i and t by 1 each .
6 :     if L > L∗ or L is NaN then
7 :         Halve LR : η0 ← η0/2 .
8 :         Reload θ , reset optimizer with LR = η0 , and reset counter i = 0 .
9 :     else
10 :        L∗ ← L ( update best loss ) .
11 :    end if
12 : end while
13 : Save checkpoint θ and L∗ .
14 : PHASE 2 : Start Optimistic Binary Exploration .
15 : Double LR : ηt ← 2η0 . Patience p ← 1 .
16 : while t < T do
17 :    Train for p epochs . Increment epoch number t.
18 :    Evaluate new loss L ← J ( θ ) .
19 :    if L is NaN then
20 :        Halve LR : ηt+1 ← ηt/2 .
21 :        Load checkpoint θ and L∗ . Reset optimizer with LR = ηt+1 .
22 :        Double patience : pt+1 ← 2pt . Continue .
23 :    end if
24 :    if L < L∗ then
25 :        Update L∗ ← L. Save checkpoint θ and L∗ .
26 :        Double LR : ηt+1 ← 2ηt . Set patience p = 1 .
27 :    else
28 :        Train for another p epochs . Increment epoch number t.
29 :        Evaluate new loss L ← J ( θ ) .
30 :        if L < L∗ then
31 :            Update L∗ ← L. Save checkpoint θ and L∗ .
32 :            Double LR : ηt+1 ← 2ηt . Set patience p = 1 .
33 :        else
34 :            Halve LR : ηt+1 ← ηt/2 .
35 :            Double patience : pt+1 ← 2pt .
36 :            if L is NaN then
37 :                Load checkpoint θ and L∗ . Reset optimizer with LR = ηt+1 .
38 :            end if
39 :        end if
40 :    end if
41 : end while
The notation used in the following description is as follows : patience p , learning rate η , best loss L∗ , current loss L , model θ , loss function J ( θ ) . L∗ is initialized as J ( θ ) after initializing the model , before training starts . Phase 1 : Initial LR exploration . Phase 1 starts from an initial learning rate η = 0.1 and patience p = 10 . It trains for an epoch , evaluates the loss L , and compares it to the best loss L∗ . If L < L∗ , then L∗ is updated and training continues for another epoch . Otherwise , the model θ is reloaded and re-initialized , the LR is halved η ← η/2 , and the optimizer is reset with the new LR . The patience counter is reset . This continues until a stable LR is determined by the algorithm , i.e. , one at which it trains for p consecutive epochs .
The loss L∗ , the model θ and the optimizer state after Phase 1 are saved in a checkpoint . Phase 2 : Optimistic Binary Exploration . In this phase , AALR keeps the learning rate η as high as possible for as long as possible at any given state of the training . Phase 2 starts by doubling the LR to 2η and setting p = 1 . After training for p + 1 epochs , AALR first checks if the loss is NaN . In this case , the checkpoint ( model θ and optimizer ) corresponding to the best loss , along with the best loss value L∗ , is reloaded . Then the LR is halved η = η/2 , the patience is doubled , and the training continues . If instead the loss is observed to decrease compared to the best loss , L < L∗ , then L∗ is updated , and the corresponding model θ , optimizer and L∗ are updated in the checkpoint . This is followed by doubling the LR η = 2η , resetting p to 1 and continuing training for the next p + 1 epochs . On the other hand , if L ≥ L∗ , AALR trains for another p + 1 epochs and checks the loss L. This is because , as informally stated before , AALR is ‘ optimistic ’ and persists with the high LR for some more time . ( In case the newly evaluated loss is NaN , the previous approach is followed . ) However , if the new loss L ≥ L∗ , then AALR finally lowers the LR : AALR halves the LR η = η/2 , doubles the patience p = 2p , and continues training for p + 1 epochs . If however the loss had decreased , L < L∗ , the previous approach is followed , i.e. , it doubles the LR η = 2η , resets the patience p = 1 , updates the best loss and checkpoint , and repeats training for p + 1 epochs . The above cycle repeats until the stopping criterion is met . For ease of exposition , the pseudocode is given in Algorithm 1 .
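The two phases described above can be organized roughly as follows. This is a condensed, illustrative sketch (ours) of Algorithm 1 for a PyTorch-style model; train_one_epoch, eval_loss and make_optimizer are assumed helpers, eval_loss is assumed to return a Python float, and several details of the paper's implementation (e.g., the exact p versus p + 1 epoch counts and the checkpoint contents) are simplified.

import copy
import math

def aalr(model, make_optimizer, train_one_epoch, eval_loss, T, lr0=0.1):
    # make_optimizer(model, lr) -> optimizer; the other two helpers run one epoch / evaluate J(θ).
    # Phase 1: halve the LR until the loss improves for p = 10 consecutive epochs.
    lr, best, p, i, t = lr0, eval_loss(model), 10, 0, 0
    ckpt = copy.deepcopy(model.state_dict())
    opt = make_optimizer(model, lr)
    while i < p:
        train_one_epoch(model, opt); t += 1; i += 1
        loss = eval_loss(model)
        if math.isnan(loss) or loss > best:
            lr /= 2; i = 0
            model.load_state_dict(ckpt); opt = make_optimizer(model, lr)
        else:
            best = loss
    ckpt = copy.deepcopy(model.state_dict())
    # Phase 2: optimistic binary exploration — double the LR after an improvement,
    # halve it and double the patience only after two failed attempts or a NaN loss.
    lr, p = 2 * lr, 1
    opt = make_optimizer(model, lr)
    while t < T:
        for attempt in range(2):
            for _ in range(p):
                train_one_epoch(model, opt); t += 1
            loss = eval_loss(model)
            if math.isnan(loss):
                lr /= 2; p *= 2
                model.load_state_dict(ckpt); opt = make_optimizer(model, lr)
                break
            if loss < best:
                best = loss; ckpt = copy.deepcopy(model.state_dict())
                lr *= 2; p = 1
                opt = make_optimizer(model, lr)
                break
        else:
            # Neither attempt improved on the best loss: lower the LR and be more patient.
            lr /= 2; p *= 2
            opt = make_optimizer(model, lr)
    return model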
The paper considers the problem of automated adaptation of learning rate during (deep) neural network training. The use cases described are standard and adversarial training for image classification. Given the wide use of DNNs in computer vision (and other areas), learning rate tuning is clearly an important problem and is being actively researched.
SP:58b49ce9f05350745bc62b1ed2cb116fa07bb7d9
A Simple Dynamic Learning Rate Tuning Algorithm For Automated Training of DNNs
1 INTRODUCTION . Deep architectures are generally trained by minimizing a non-convex loss function via an underlying optimization algorithm such as stochastic gradient descent or its variants . It takes a fairly large amount of time to find the best suited optimization algorithm and its optimal hyperparameters ( such as learning rate , batch size etc . ) for training a model to the desired accuracy , this being a major challenge for academicians and industry practitioners alike . Usually , such tuning is done by initial configuration optimization through grid search or random search ( Bergstra et al. , 2011 ; Snoek et al. , 2012 ; Thornton et al. ) . Recent works have also formulated it as a bandit problem ( Li et al. , 2017 ) . However , it has been widely demonstrated that hyperparameters , especially the learning rate , often need to be dynamically adjusted as the training progresses , irrespective of the initial choice of configuration . If not adjusted dynamically , the training might get stuck in a bad minimum , and no amount of training time can recover it . In this work , we focus on the learning rate , which is the foremost hyperparameter that one seeks to tune when training a deep learning model to get favourable results . Certain auto-tuning and adaptive variants of SGD , such as AdaGrad ( Duchi et al. , 2011 ) , Adadelta ( Zeiler , 2012 ) , RMSProp ( Tieleman & Hinton , 2012 ) and Adam ( Kingma & Ba , 2015 ) among others have been proposed that automatically adjust the learning rate as the training progresses , using functions of the gradient . Yet others have proposed fixed learning rate and/or batch size change regimes ( Goyal et al. , 2017 ; Smith et al. , 2018 ) for certain dataset and model combinations . In addition to traditional natural learning tasks where a good LR regime might already be known from past experiments , adversarial training for generating robust models is gaining a lot of popularity of late . In these cases , tuning the LR would generally require multiple time-consuming experiments , since the LR regime is unlikely to be known for every attack for every model and dataset of interest ( for example , one can see a piecewise LR schedule given by Madry et al . ( 2018 ) at https://github.com/MadryLab/cifar10_challenge/blob/master/config.json for a particular model ) . Moreover , new models are surfacing every day courtesy of state-of-the-art model synthesis systems , and new datasets are also becoming available quite often in different domains such as healthcare , the automobile industry etc . In each of these cases , no prior LR regime would be known , and considerable manual tuning would be required in the absence of a universal method with demonstrated effectiveness over a wide range of tasks , models and datasets . Wilson et al . ( 2017 ) observed that solutions found by existing adaptive methods often generalize worse than those found by non-adaptive methods . Even though adaptive methods might initially display faster progress on the training set , their performance quickly plateaus on the test set , and learning rate tuning is required to improve the generalization performance of these methods . For the case of SGD with Momentum , learning rate ( LR ) step decay is very popular ( Goyal et al. , 2017 ; Huang et al. , 2017 ) , as is ReduceLROnPlateau ( see , e.g. , https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ReduceLROnPlateau ) .
However , at certain junctures of training , increasing the LR can potentially lead to a quick , further exploration of the loss landscape and help the training escape a sharp minimum ( which has poor generalisation ; Keskar et al. , 2016 ) . Further , recent works have shown that the distance traveled by the model in the parameter space determines how far the training is from convergence ( Hoffer et al. , 2017 ) . This inspires the idea that increasing the LR to take bigger steps in the loss landscape , while maintaining numerical stability , might help in better generalization . The idea of increasing and decreasing the LR periodically during training has been demonstrated by Smith ( 2017 ) ; Smith & Topin ( 2017 ) in their cyclical learning rate method ( CLR ) . This has also been shown by Loshchilov & Hutter ( 2016 ) in Stochastic Gradient Descent with Warm Restarts ( SGDR , popularly referred to as Cosine Annealing with Warm Restarts ) . In CLR , the LR is varied periodically in a linear manner between a maximum and a minimum value , and it is shown empirically that such an increase of the learning rate is overall beneficial to the training compared to fixed schedules . In SGDR , the training periodically restarts from an initial learning rate , and then decreases to a minimum learning rate through a cosine schedule of LR decay . The period typically increases in powers of 2 . The authors suggest optimizing the initial LR and minimum LR for good performance . Schaul et al . ( 2013 ) suggested an adaptive learning rate schedule that allows the learning rate to increase when the signal is non-stationary and the underlying distribution changes . This is a computationally heavy method , requiring computing the Hessian in an online manner . Recently , there has been some work that explores gradients in different forms for hyperparameter optimization . Maclaurin et al . ( 2015 ) suggest an approach by which they exactly reverse SGD with momentum to compute gradients with respect to all continuous learning parameters ( referred to as hypergradients ) ; this is then propagated through an inner optimization . Baydin et al . ( 2018 ) suggest a dynamic LR-tuning approach , namely hypergradient descent , which applies gradient-based updates to the learning rate at each iteration in an online fashion . We propose a new algorithm to automatically determine the learning rate for a deep learning job in an autonomous manner that simply compares the current training loss with the best observed thus far to adapt the LR . The proposed algorithm works across multiple datasets and models for different tasks such as natural as well as adversarial training . It is an ‘ optimistic ’ method , in the sense that it increases the LR to as high as possible by examining the training loss repeatedly . We show through rigorous experimentation that in spite of its simplicity , the proposed algorithm performs surprisingly well compared to the state-of-the-art . Our contributions : • We propose a novel and simple algorithmic approach for autonomous , adaptive learning rate determination that does not require any manual tuning , inspection , or pre-experimental discovery of the algorithmic parameters . • Our proposed algorithm works across datasets and models with no customization and reaches higher or comparable accuracy to standard baselines in the literature in the same number of epochs on each of these datasets and models .
It consistently performs well , finding stable minima with good generalization , and converges smoothly . • Our algorithm works very well for the adversarial learning scenario along with natural training , as demonstrated across different models and datasets . • We provide extensive empirical validation of our algorithm and a convergence discussion . 2 PROPOSED METHOD . We propose an autonomous , adaptive LR tuning algorithm ( Algorithm 1 ) for determining the LR trajectory during the course of training . It operates in two phases : Phase 1 : Initial LR exploration , which strives to find a good starting LR ; Phase 2 : Optimistic Binary Exploration . The pseudocode is provided in Algorithm 1 . For the rest of the paper , we refer to the Automated Adaptive Learning Rate tuning algorithm as AALR in short .
Algorithm 1 Automated Adaptive Learning Rate Tuning Algorithm ( AALR ) for Training DNNs
Require : Model θ , N training samples ( xi , yi ) , i = 1 , . . . , N , optimizer SGD , momentum = 0.9 , weight decay , batch size , number of epochs T , loss function J ( θ ) . Initial LR η0 = 0.1 .
Ensure : Learning rate ηt at every epoch t.
1 : Initialize : θ , SGD with LR = η0 , best loss L∗ ← J ( θ ) ( forward pass through initial model ) .
2 : PHASE 1 : Start Initial LR Exploration .
3 : Set patience p ← 10 , patience counter i ← 0 , epoch number t ← 0 .
4 : while i < p do
5 :     Evaluate new loss L ← J ( θ ) after training for an epoch . Increment i and t by 1 each .
6 :     if L > L∗ or L is NaN then
7 :         Halve LR : η0 ← η0/2 .
8 :         Reload θ , reset optimizer with LR = η0 , and reset counter i = 0 .
9 :     else
10 :        L∗ ← L ( update best loss ) .
11 :    end if
12 : end while
13 : Save checkpoint θ and L∗ .
14 : PHASE 2 : Start Optimistic Binary Exploration .
15 : Double LR : ηt ← 2η0 . Patience p ← 1 .
16 : while t < T do
17 :    Train for p epochs . Increment epoch number t.
18 :    Evaluate new loss L ← J ( θ ) .
19 :    if L is NaN then
20 :        Halve LR : ηt+1 ← ηt/2 .
21 :        Load checkpoint θ and L∗ . Reset optimizer with LR = ηt+1 .
22 :        Double patience : pt+1 ← 2pt . Continue .
23 :    end if
24 :    if L < L∗ then
25 :        Update L∗ ← L. Save checkpoint θ and L∗ .
26 :        Double LR : ηt+1 ← 2ηt . Set patience p = 1 .
27 :    else
28 :        Train for another p epochs . Increment epoch number t.
29 :        Evaluate new loss L ← J ( θ ) .
30 :        if L < L∗ then
31 :            Update L∗ ← L. Save checkpoint θ and L∗ .
32 :            Double LR : ηt+1 ← 2ηt . Set patience p = 1 .
33 :        else
34 :            Halve LR : ηt+1 ← ηt/2 .
35 :            Double patience : pt+1 ← 2pt .
36 :            if L is NaN then
37 :                Load checkpoint θ and L∗ . Reset optimizer with LR = ηt+1 .
38 :            end if
39 :        end if
40 :    end if
41 : end while
The notation used in the following description is as follows : patience p , learning rate η , best loss L∗ , current loss L , model θ , loss function J ( θ ) . L∗ is initialized as J ( θ ) after initializing the model , before training starts . Phase 1 : Initial LR exploration . Phase 1 starts from an initial learning rate η = 0.1 and patience p = 10 . It trains for an epoch , evaluates the loss L , and compares it to the best loss L∗ . If L < L∗ , then L∗ is updated and training continues for another epoch . Otherwise , the model θ is reloaded and re-initialized , the LR is halved η ← η/2 , and the optimizer is reset with the new LR . The patience counter is reset . This continues until a stable LR is determined by the algorithm , i.e. , one at which it trains for p consecutive epochs .
The loss L∗ , the model θ and the optimizer state after Phase 1 are saved in a checkpoint . Phase 2 : Optimistic Binary Exploration . In this phase , AALR keeps the learning rate η as high as possible for as long as possible at any given state of the training . Phase 2 starts by doubling the LR to 2η and setting p = 1 . After training for p + 1 epochs , AALR first checks if the loss is NaN . In this case , the checkpoint ( model θ and optimizer ) corresponding to the best loss , along with the best loss value L∗ , is reloaded . Then the LR is halved η = η/2 , the patience is doubled , and the training continues . If instead the loss is observed to decrease compared to the best loss , L < L∗ , then L∗ is updated , and the corresponding model θ , optimizer and L∗ are updated in the checkpoint . This is followed by doubling the LR η = 2η , resetting p to 1 and continuing training for the next p + 1 epochs . On the other hand , if L ≥ L∗ , AALR trains for another p + 1 epochs and checks the loss L. This is because , as informally stated before , AALR is ‘ optimistic ’ and persists with the high LR for some more time . ( In case the newly evaluated loss is NaN , the previous approach is followed . ) However , if the new loss L ≥ L∗ , then AALR finally lowers the LR : AALR halves the LR η = η/2 , doubles the patience p = 2p , and continues training for p + 1 epochs . If however the loss had decreased , L < L∗ , the previous approach is followed , i.e. , it doubles the LR η = 2η , resets the patience p = 1 , updates the best loss and checkpoint , and repeats training for p + 1 epochs . The above cycle repeats until the stopping criterion is met . For ease of exposition , the pseudocode is given in Algorithm 1 .
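For contrast with AALR's loss-comparison rule, the following is a minimal sketch of the hypergradient-descent baseline of Baydin et al. (2018) discussed in the introduction, applied to a toy quadratic objective; it is our own simplified illustration, not the authors' implementation. The learning rate is adapted online as α_t = α_{t−1} + β ∇L(w_{t−1})·∇L(w_{t−2}).

import torch

# Toy quadratic objective L(w) = 0.5 * ||A w − b||^2, so gradients are exact and cheap.
torch.manual_seed(0)
A, b = torch.randn(50, 10), torch.randn(50)
w = torch.zeros(10)
alpha, beta = 0.001, 1e-6        # initial LR and hypergradient step size (illustrative values)

def grad(w):
    return A.t() @ (A @ w - b)

g_prev = grad(w)
w = w - alpha * g_prev
for step in range(100):
    g = grad(w)
    alpha = alpha + beta * torch.dot(g, g_prev).item()   # hypergradient update of the LR
    w = w - alpha * g
    g_prev = g
    if step % 20 == 0:
        print(step, alpha, 0.5 * torch.norm(A @ w - b).item() ** 2)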
This paper proposes an algorithm for automatically tuning the learning rate of SGD while training deep neural networks. The proposed learning rate tuning algorithm is a finite state machine and consists of two phases: the first phase finds the largest learning rate that the network can begin training with for p = 10 epochs; the second phase is an optimistic binary exploration phase which increases or decreases the learning rate depending upon whether the loss is NaN, increasing or decreasing. Empirical results are shown on a few standard neural networks for image classification on CIFAR-10/100 datasets and for adversarial training on the CIFAR-10 dataset.
SP:58b49ce9f05350745bc62b1ed2cb116fa07bb7d9
TSInsight: A local-global attribution framework for interpretability in time-series data
1 INTRODUCTION . Deep learning models have been at the forefront of technology in a range of different domains including image classification ( Krizhevsky et al. , 2012 ) , object detection ( Girshick , 2015 ) , speech recognition ( Dahl et al. , 2010 ) , text recognition ( Breuel , 2008 ) , image captioning ( Karpathy & Fei-Fei , 2015 ) and pose estimation ( Cao et al. , 2018 ) . These models are particularly effective in automatically discovering useful features . However , this automated feature extraction comes at the cost of a lack of transparency of the system . Therefore , despite these advances , their employment in safety-critical domains like finance ( Knight , 2017 ) , self-driving cars ( Kim et al. , 2018 ) and medicine ( Zintgraf et al. , 2017 ) is limited due to the lack of interpretability of the decision made by the network . Numerous efforts have been made for the interpretation of these black-box models . These efforts can be mainly classified into two separate directions . The first set of strategies focuses on making the network itself interpretable by trading off some performance . These strategies include the Self-Explainable Neural Network ( SENN ) ( Alvarez-Melis & Jaakkola , 2018 ) and Bayesian non-parametric regression models ( Guo et al. , 2018 ) . The second set of strategies focuses on explaining a pretrained model , i.e. , they try to infer the reason for a particular prediction . These attribution techniques include saliency maps ( Yosinski et al. , 2015 ) and layer-wise relevance propagation ( Bach et al. , 2015 ) . However , all of these methods have been particularly developed and tested for visual modalities which are directly intelligible for humans . Transferring methodologies developed for visual modalities to time-series data is difficult due to the non-intuitive nature of time-series . Therefore , only a handful of methods have focused on explaining time-series models in the past ( Kumar et al. , 2017 ; Siddiqui et al. , 2019 ) . We approach the attribution problem in a novel way by attaching an auto-encoder on top of the classifier . The auto-encoder is fine-tuned based on the gradients from the classifier . ( Code along with the trained models will be made publicly available upon publication . ) Rather than asking the auto-encoder to reconstruct the whole input , we ask the network to only reconstruct parts which are useful for the classifier , i.e . , are correlated with or causal for the prediction . In order to achieve this , we introduce a sparsity inducing norm onto the output of the auto-encoder . In particular , the contributions of this paper are twofold : • A novel attribution method for time-series data which makes it much easier to interpret the decision of any deep learning model . The method also leverages dataset-level insights when explaining individual decisions , in contrast to other attribution methods . • Detailed analysis of the information captured by different attribution techniques using a simple suppression test on a range of different time-series datasets . This also includes analysis of the different out of the box properties achieved by TSInsight including generic applicability , contraction in the output space and resistance against trivial adversarial noise . 2 RELATED WORK . Since the resurgence of deep learning in 2012 after a deep network comprehensively outperformed its feature engineered counterparts ( Krizhevsky et al. , 2012 ) on the ImageNet visual recognition challenge comprising of 1.2 million images ( Russakovsky et al.
, 2015 ) , deep learning has been integrated into a range of different applications to gain unprecedented levels of improvement . Significant efforts have been made in the past regarding the interpretability of deep models , specifically for image modality . These methods are mainly categorized into two different streams where the first stream is focused on explaining the decisions of a pretrained network which is much more applicable in the real-world . The second stream is directed towards making models more interpretable by trading off accuracy . The first stream for explainable systems which attempts to explain pretrained models using attribution techniques has been a major focus of research in the past years . The most common strategy is to visualize the filters of the deep model ( Zeiler & Fergus , 2013 ; Simonyan et al. , 2013 ; Yosinski et al. , 2015 ; Palacio et al. , 2018 ; Bach et al. , 2015 ) . This is very effective for visual modalities since images are directly intelligible for humans . Zeiler & Fergus ( 2013 ) introduced deconvnet layer to understand the intermediate representations of the network . They not only visualized the network , but were also able to improve the network based on these visualizations to achieve state-of-the-art performance on ImageNet ( Russakovsky et al. , 2015 ) . Simonyan et al . ( 2013 ) proposed a method to visualize class-specific saliency maps . Yosinski et al . ( 2015 ) proposed a visualization framework for image based deep learning models . They tried to visualize the features that a particular filter was responding to by using regularized optimization . Instead of using first-order gradients , Bach et al . ( 2015 ) introduced a Layer-wise Relevance Propagation ( LRP ) framework which identified the relevant portions of the image by distributing the contribution to the incoming nodes . Smilkov et al . ( 2017 ) introduced the SmoothGrad method where they computed the mean gradients after adding small random noise sampled from a zero-mean Gaussian distribution to the original point . Integrated gradients method introduced by Sundararajan et al . ( 2017 ) computed the average gradient from the original point to the baseline input ( zero-image in their case ) at regular intervals . Guo et al . ( 2018 ) used Bayesian non-parametric regression mixture model with multiple elastic nets to extract generalizable insights from the trained model . Either these methods are not directly applicable to time-series data , or are inferior in terms of intelligibility for time-series data . Palacio et al . ( 2018 ) introduced yet another approach to understand a deep model by leveraging auto-encoders . After training both the classifier and the auto-encoder in isolation , they attached the auto-encoder to the head of the classifier and fine-tuned only the decoder freezing the parameters of the classifier and the encoder . This transforms the decoder to focus on features which are relevant for the network . Applying this method directly to time-series yields no interesting insights ( Fig . 1b ) into the network ’ s preference for input . Therefore , this method is strictly a special case of the TSInsight ’ s formulation . In the second stream for explainable systems , Alvarez-Melis & Jaakkola ( 2018 ) proposed SelfExplaining Neural Networks ( SENN ) where they learn two different networks . The first network is the concept encoder which encodes different concepts while the second network learns the weightings of these concepts . 
This transforms the system into a linear problem with a set of features making it easily interpretable for humans . SENN trades off accuracy in favor of interpretability . Kim et al . ( 2018 ) attached a second network ( video-to-text ) to the classifier which was responsible for the production of natural language based explanations of the decisions taken by the network using the saliency information from the classifier . This framework relies on an LSTM for the generation of the descriptions , adding yet another level of opaqueness and making it hard to decipher whether the error originated from the classification network or from the explanation generator . Kumar et al . ( 2017 ) made the first attempt to understand deep learning models for time-series analysis where they specifically focused on financial data . They computed the input saliency based on the first-order gradients of the network . Siddiqui et al . ( 2019 ) proposed an influence computation framework which enabled exploration of the network at the filter level by computing the per filter saliency map and filter importance , again based on first-order gradients . However , both methods fall short of providing useful insights due to the noise inherent to first-order gradients . Another major limitation of saliency based methods is the sole use of local information . Therefore , TSInsight significantly surpasses these methods in identifying the important regions of the input , using a combination of local information for the particular example along with generalizable insights extracted from the entire dataset in order to reach a particular description . Due to the use of auto-encoders , TSInsight is inherently related to sparse ( Ng et al. , 2011 ) and contractive auto-encoders ( Rifai et al. , 2011 ) . In sparse auto-encoders ( Ng et al. , 2011 ) , the sparsity is induced on the hidden representation by minimizing the KL-divergence between the average activations and a hyperparameter which defines the fraction of non-zero units . This KL-divergence is a necessity for sigmoid-based activation functions . However , in our case , the sparsity is induced directly on the output of the auto-encoder , which introduces a contraction on the input space of the classifier , and can directly be achieved by using the Manhattan norm on the activations as we obtain real-valued outputs . Although sparsity is introduced in both cases , the sparsity in the case of sparse auto-encoders is not useful for interpretability . In the case of contractive auto-encoders ( Rifai et al. , 2011 ) , a contraction mapping is introduced by penalizing the Frobenius norm of the Jacobian of the encoder along with the reconstruction error . This makes the learned representation invariant to minor perturbations in the input . TSInsight , on the other hand , induces a contraction on the input space for interpretability , thus favoring a sparsity-inducing norm . 3 METHOD . We first train an auto-encoder as well as a classifier in isolation on the desired dataset . Once both the auto-encoder as well as the classifier are trained , we attach the auto-encoder to the head of the classifier . TSInsight is based on a novel loss formulation , which introduces a sparsity-inducing norm on the output of the auto-encoder along with a reconstruction and classification penalty for the optimization of the auto-encoder keeping the classifier fixed .
Inducing sparsity on the auto-encoder ’ s output forces the network to only reproduce relevant regions of the input to the classifier since the auto-encoder is optimized using the gradients from the classifier . As inducing sparsity on the auto-encoder ’ s output significantly hampers the auto-encoder ’ s ability to reconstruct the input , which can in turn result in fully transformed outputs , it is important to have a reconstruction penalty in place . This effect is illustrated in Fig . 2a where the auto-encoder produced a novel sparse representation of the input , which , albeit an interesting one , doesn ’ t help with the interpretability of the model . Therefore , the proposed optimization objective can be written as : ( W′_E , W′_D ) = arg min_{W∗_E , W∗_D} ( 1/|X| ) ∑_{ ( x , y ) ∈ X×Y } [ L ( Φ ( D ( E ( x ; W∗_E ) ; W∗_D ) ; W∗ ) , y ) + γ ‖ x − D ( E ( x ; W∗_E ) ; W∗_D ) ‖_2^2 + β ‖ D ( E ( x ; W∗_E ) ; W∗_D ) ‖_1 ] + λ ( ‖ W∗_E ‖_2^2 + ‖ W∗_D ‖_2^2 ) ( 1 ) where L represents the classification loss function , which is cross-entropy in our case , Φ denotes the classifier with pretrained weights W∗ , while E and D denote the encoder and decoder respectively with corresponding pretrained weights W∗_E and W∗_D . We introduce two new hyperparameters , γ and β. γ controls the auto-encoder ’ s focus on reconstruction of the input . β , on the other hand , controls the sparsity enforced on the output of the auto-encoder . Pretrained weights are obtained by training the auto-encoder as well as the classifier in isolation as previously mentioned . With this new formulation , the output of the auto-encoder is both sparse as well as aligned with the input , as evident from Fig . 2b . The selection of β can significantly impact the output of the model . Performing grid search to determine this value is not possible as large values of β result in models which are more interpretable but inferior in terms of performance , therefore presenting a trade-off between performance and interpretability which is difficult to quantify . A rudimentary way which we tested for automated selection of these hyperparameters ( β and γ ) is via feature importance measures ( Siddiqui et al. , 2019 ; Vidovic et al. , 2016 ) . The simplest candidate for this importance measure is saliency . This can be written as : I ( x ) = ∂a_L / ∂x where L denotes the number of layers in the classifier and a_L denotes the activations of the last layer in the classifier . This computation is just based on the classifier , i.e . we ignore the auto-encoder at this point . Once the values of the corresponding importance metric are evaluated , the values are scaled to the range [ 0 , 1 ] to serve as the corresponding reconstruction weight , i.e . γ . The inverted importance values serve as the corresponding sparsity weight , i.e . β : I ( x )_j ← ( I ( x )_j − min_j I ( x )_j ) / ( max_j I ( x )_j − min_j I ( x )_j ) , γ∗ ( x ) = I ( x ) and β∗ ( x ) = 1.0 − I ( x ) . Therefore , the final term imposing sparsity on the classifier can be written as : γ ‖ x − D ( E ( x ; W∗_E ) ; W∗_D ) ‖_2^2 + β ‖ D ( E ( x ; W∗_E ) ; W∗_D ) ‖_1 ⇒ C × ‖ D ( E ( x ; W∗_E ) ; W∗_D ) β∗ ( x ) ‖_1 + ‖ ( x − D ( E ( x ; W∗_E ) ; W∗_D ) ) γ∗ ( x ) ‖_2^2 . In contrast to the instance-based value of β , we used the average saliency value in our experiments . This ensures that the activations are not penalized so strongly as to significantly impact the performance of the classifier . Due to the low relative magnitude of the sparsity term , we scaled it by a constant factor C ( we used C = 10 in our experiments ) .
This approach , despite being interesting , still results in inferior performance compared to manual fine-tuning of the hyperparameters . This needs further investigation to make it work in the future .
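To make Eq. (1) concrete, the following is a minimal illustrative PyTorch sketch (ours, not the authors' released code) of the TSInsight-style fine-tuning loss: the pretrained classifier is frozen and the auto-encoder is optimized with the classification loss, a reconstruction penalty weighted by γ, an L1 penalty on its output weighted by β, and weight decay. Module names, the use of means instead of sums, and the hyperparameter values are placeholders.

import torch
import torch.nn as nn

class TSInsightLoss(nn.Module):
    # Combines the classification, reconstruction and output-sparsity terms of Eq. (1).
    def __init__(self, classifier, gamma=1.0, beta=0.1, weight_decay=1e-4):
        super().__init__()
        self.classifier = classifier.eval()          # pretrained classifier, kept fixed
        for p in self.classifier.parameters():
            p.requires_grad_(False)
        self.gamma, self.beta, self.weight_decay = gamma, beta, weight_decay
        self.ce = nn.CrossEntropyLoss()

    def forward(self, autoencoder, x, y):
        x_hat = autoencoder(x)                       # D(E(x)), the sparse "attribution" signal
        loss = self.ce(self.classifier(x_hat), y)    # classification penalty through the frozen classifier
        loss = loss + self.gamma * ((x - x_hat) ** 2).mean()   # reconstruction penalty
        loss = loss + self.beta * x_hat.abs().mean()           # sparsity on the auto-encoder output
        loss = loss + self.weight_decay * sum((p ** 2).sum() for p in autoencoder.parameters())
        return loss

# Usage sketch: only the auto-encoder's parameters are updated.
# opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
# loss = TSInsightLoss(classifier)(autoencoder, x_batch, y_batch); loss.backward(); opt.step()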
The paper presents a new approach for improving the interpretability of deep learning methods used for time series. The paper is mainly concerned with classification tasks for time series. First, the classifier is learned in the usual way. Subsequently, a sparse auto-encoder is used that encodes the last layer of the classifier. For training the auto-encoder the classifier is fixed and there is a decoding loss as well as a sparsity loss. The sparse encoding of the last layer is supposed to increase the interpretability of the classification as it indicates which features are important for the classification.
SP:9fbad6b7a8485b00a2b22a46dca0f672f624c501
TSInsight: A local-global attribution framework for interpretability in time-series data
1 INTRODUCTION . Deep learning models have been at the forefront of technology in a range of different domains including image classification ( Krizhevsky et al. , 2012 ) , object detection ( Girshick , 2015 ) , speech recognition ( Dahl et al. , 2010 ) , text recognition ( Breuel , 2008 ) , image captioning ( Karpathy & Fei-Fei , 2015 ) and pose estimation ( Cao et al. , 2018 ) . These models are particularly effective in automatically discovering useful features . However , this automated feature extraction comes at the cost of a lack of transparency of the system . Therefore , despite these advances , their employment in safety-critical domains like finance ( Knight , 2017 ) , self-driving cars ( Kim et al. , 2018 ) and medicine ( Zintgraf et al. , 2017 ) is limited due to the lack of interpretability of the decision made by the network . Numerous efforts have been made for the interpretation of these black-box models . These efforts can be mainly classified into two separate directions . The first set of strategies focuses on making the network itself interpretable by trading off some performance . These strategies include the Self-Explainable Neural Network ( SENN ) ( Alvarez-Melis & Jaakkola , 2018 ) and Bayesian non-parametric regression models ( Guo et al. , 2018 ) . The second set of strategies focuses on explaining a pretrained model , i.e. , they try to infer the reason for a particular prediction . These attribution techniques include saliency maps ( Yosinski et al. , 2015 ) and layer-wise relevance propagation ( Bach et al. , 2015 ) . However , all of these methods have been particularly developed and tested for visual modalities which are directly intelligible for humans . Transferring methodologies developed for visual modalities to time-series data is difficult due to the non-intuitive nature of time-series . Therefore , only a handful of methods have focused on explaining time-series models in the past ( Kumar et al. , 2017 ; Siddiqui et al. , 2019 ) . We approach the attribution problem in a novel way by attaching an auto-encoder on top of the classifier . The auto-encoder is fine-tuned based on the gradients from the classifier . ( Code along with the trained models will be made publicly available upon publication . ) Rather than asking the auto-encoder to reconstruct the whole input , we ask the network to only reconstruct parts which are useful for the classifier , i.e . , are correlated with or causal for the prediction . In order to achieve this , we introduce a sparsity inducing norm onto the output of the auto-encoder . In particular , the contributions of this paper are twofold : • A novel attribution method for time-series data which makes it much easier to interpret the decision of any deep learning model . The method also leverages dataset-level insights when explaining individual decisions , in contrast to other attribution methods . • Detailed analysis of the information captured by different attribution techniques using a simple suppression test on a range of different time-series datasets . This also includes analysis of the different out of the box properties achieved by TSInsight including generic applicability , contraction in the output space and resistance against trivial adversarial noise . 2 RELATED WORK . Since the resurgence of deep learning in 2012 after a deep network comprehensively outperformed its feature engineered counterparts ( Krizhevsky et al. , 2012 ) on the ImageNet visual recognition challenge comprising of 1.2 million images ( Russakovsky et al.
, 2015 ) , deep learning has been integrated into a range of different applications to gain unprecedented levels of improvement . Significant efforts have been made in the past regarding the interpretability of deep models , specifically for image modality . These methods are mainly categorized into two different streams where the first stream is focused on explaining the decisions of a pretrained network which is much more applicable in the real-world . The second stream is directed towards making models more interpretable by trading off accuracy . The first stream for explainable systems which attempts to explain pretrained models using attribution techniques has been a major focus of research in the past years . The most common strategy is to visualize the filters of the deep model ( Zeiler & Fergus , 2013 ; Simonyan et al. , 2013 ; Yosinski et al. , 2015 ; Palacio et al. , 2018 ; Bach et al. , 2015 ) . This is very effective for visual modalities since images are directly intelligible for humans . Zeiler & Fergus ( 2013 ) introduced deconvnet layer to understand the intermediate representations of the network . They not only visualized the network , but were also able to improve the network based on these visualizations to achieve state-of-the-art performance on ImageNet ( Russakovsky et al. , 2015 ) . Simonyan et al . ( 2013 ) proposed a method to visualize class-specific saliency maps . Yosinski et al . ( 2015 ) proposed a visualization framework for image based deep learning models . They tried to visualize the features that a particular filter was responding to by using regularized optimization . Instead of using first-order gradients , Bach et al . ( 2015 ) introduced a Layer-wise Relevance Propagation ( LRP ) framework which identified the relevant portions of the image by distributing the contribution to the incoming nodes . Smilkov et al . ( 2017 ) introduced the SmoothGrad method where they computed the mean gradients after adding small random noise sampled from a zero-mean Gaussian distribution to the original point . Integrated gradients method introduced by Sundararajan et al . ( 2017 ) computed the average gradient from the original point to the baseline input ( zero-image in their case ) at regular intervals . Guo et al . ( 2018 ) used Bayesian non-parametric regression mixture model with multiple elastic nets to extract generalizable insights from the trained model . Either these methods are not directly applicable to time-series data , or are inferior in terms of intelligibility for time-series data . Palacio et al . ( 2018 ) introduced yet another approach to understand a deep model by leveraging auto-encoders . After training both the classifier and the auto-encoder in isolation , they attached the auto-encoder to the head of the classifier and fine-tuned only the decoder freezing the parameters of the classifier and the encoder . This transforms the decoder to focus on features which are relevant for the network . Applying this method directly to time-series yields no interesting insights ( Fig . 1b ) into the network ’ s preference for input . Therefore , this method is strictly a special case of the TSInsight ’ s formulation . In the second stream for explainable systems , Alvarez-Melis & Jaakkola ( 2018 ) proposed SelfExplaining Neural Networks ( SENN ) where they learn two different networks . The first network is the concept encoder which encodes different concepts while the second network learns the weightings of these concepts . 
This transforms the system into a linear problem with a set of features making it easily interpretable for humans . SENN trades off accuracy in favor of interpretability . Kim et al . ( 2018 ) attached a second network ( video-to-text ) to the classifier which was responsible for the production of natural language based explanations of the decisions taken by the network using the saliency information from the classifier . This framework relies on an LSTM for the generation of the descriptions , adding yet another level of opaqueness and making it hard to decipher whether the error originated from the classification network or from the explanation generator . Kumar et al . ( 2017 ) made the first attempt to understand deep learning models for time-series analysis where they specifically focused on financial data . They computed the input saliency based on the first-order gradients of the network . Siddiqui et al . ( 2019 ) proposed an influence computation framework which enabled exploration of the network at the filter level by computing the per filter saliency map and filter importance , again based on first-order gradients . However , both methods fall short of providing useful insights due to the noise inherent to first-order gradients . Another major limitation of saliency based methods is the sole use of local information . Therefore , TSInsight significantly surpasses these methods in identifying the important regions of the input , using a combination of local information for the particular example along with generalizable insights extracted from the entire dataset in order to reach a particular description . Due to the use of auto-encoders , TSInsight is inherently related to sparse ( Ng et al. , 2011 ) and contractive auto-encoders ( Rifai et al. , 2011 ) . In sparse auto-encoders ( Ng et al. , 2011 ) , the sparsity is induced on the hidden representation by minimizing the KL-divergence between the average activations and a hyperparameter which defines the fraction of non-zero units . This KL-divergence is a necessity for sigmoid-based activation functions . However , in our case , the sparsity is induced directly on the output of the auto-encoder , which introduces a contraction on the input space of the classifier , and can directly be achieved by using the Manhattan norm on the activations as we obtain real-valued outputs . Although sparsity is introduced in both cases , the sparsity in the case of sparse auto-encoders is not useful for interpretability . In the case of contractive auto-encoders ( Rifai et al. , 2011 ) , a contraction mapping is introduced by penalizing the Frobenius norm of the Jacobian of the encoder along with the reconstruction error . This makes the learned representation invariant to minor perturbations in the input . TSInsight , on the other hand , induces a contraction on the input space for interpretability , thus favoring a sparsity-inducing norm . 3 METHOD . We first train an auto-encoder as well as a classifier in isolation on the desired dataset . Once both the auto-encoder as well as the classifier are trained , we attach the auto-encoder to the head of the classifier . TSInsight is based on a novel loss formulation , which introduces a sparsity-inducing norm on the output of the auto-encoder along with a reconstruction and classification penalty for the optimization of the auto-encoder keeping the classifier fixed .
Inducing sparsity on the auto-encoder ’ s output forces the network to only reproduce relevant regions of the input to the classifier since the auto-encoder is optimized using the gradients from the classifier . As inducing sparsity on the auto-encoder ’ s output significantly hampers the auto-encoder ’ s ability to reconstruct the input , which can in turn result in fully transformed outputs , it is important to have a reconstruction penalty in place . This effect is illustrated in Fig . 2a where the auto-encoder produced a novel sparse representation of the input , which , albeit an interesting one , doesn ’ t help with the interpretability of the model . Therefore , the proposed optimization objective can be written as : ( W′_E , W′_D ) = arg min_{W∗_E , W∗_D} ( 1/|X| ) ∑_{ ( x , y ) ∈ X×Y } [ L ( Φ ( D ( E ( x ; W∗_E ) ; W∗_D ) ; W∗ ) , y ) + γ ‖ x − D ( E ( x ; W∗_E ) ; W∗_D ) ‖_2^2 + β ‖ D ( E ( x ; W∗_E ) ; W∗_D ) ‖_1 ] + λ ( ‖ W∗_E ‖_2^2 + ‖ W∗_D ‖_2^2 ) ( 1 ) where L represents the classification loss function , which is cross-entropy in our case , Φ denotes the classifier with pretrained weights W∗ , while E and D denote the encoder and decoder respectively with corresponding pretrained weights W∗_E and W∗_D . We introduce two new hyperparameters , γ and β. γ controls the auto-encoder ’ s focus on reconstruction of the input . β , on the other hand , controls the sparsity enforced on the output of the auto-encoder . Pretrained weights are obtained by training the auto-encoder as well as the classifier in isolation as previously mentioned . With this new formulation , the output of the auto-encoder is both sparse as well as aligned with the input , as evident from Fig . 2b . The selection of β can significantly impact the output of the model . Performing grid search to determine this value is not possible as large values of β result in models which are more interpretable but inferior in terms of performance , therefore presenting a trade-off between performance and interpretability which is difficult to quantify . A rudimentary way which we tested for automated selection of these hyperparameters ( β and γ ) is via feature importance measures ( Siddiqui et al. , 2019 ; Vidovic et al. , 2016 ) . The simplest candidate for this importance measure is saliency . This can be written as : I ( x ) = ∂a_L / ∂x where L denotes the number of layers in the classifier and a_L denotes the activations of the last layer in the classifier . This computation is just based on the classifier , i.e . we ignore the auto-encoder at this point . Once the values of the corresponding importance metric are evaluated , the values are scaled to the range [ 0 , 1 ] to serve as the corresponding reconstruction weight , i.e . γ . The inverted importance values serve as the corresponding sparsity weight , i.e . β : I ( x )_j ← ( I ( x )_j − min_j I ( x )_j ) / ( max_j I ( x )_j − min_j I ( x )_j ) , γ∗ ( x ) = I ( x ) and β∗ ( x ) = 1.0 − I ( x ) . Therefore , the final term imposing sparsity on the classifier can be written as : γ ‖ x − D ( E ( x ; W∗_E ) ; W∗_D ) ‖_2^2 + β ‖ D ( E ( x ; W∗_E ) ; W∗_D ) ‖_1 ⇒ C × ‖ D ( E ( x ; W∗_E ) ; W∗_D ) β∗ ( x ) ‖_1 + ‖ ( x − D ( E ( x ; W∗_E ) ; W∗_D ) ) γ∗ ( x ) ‖_2^2 . In contrast to the instance-based value of β , we used the average saliency value in our experiments . This ensures that the activations are not penalized so strongly as to significantly impact the performance of the classifier . Due to the low relative magnitude of the sparsity term , we scaled it by a constant factor C ( we used C = 10 in our experiments ) .
This approach , despite being interesting , still results in inferior performance compared to manual fine-tuning of the hyperparameters . This needs further investigation to make it work in the future .
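As a complement, the saliency-based choice of γ∗(x) and β∗(x) described above can be sketched as follows (our illustration, with placeholder names): the absolute input gradient of the frozen classifier is min-max scaled to [0, 1] per example and used as the reconstruction weight, with the sparsity weight as its complement.

import torch

def saliency_weights(classifier, x):
    # x: (batch, channels, time). Returns per-element gamma* and beta* in [0, 1].
    x = x.clone().requires_grad_(True)
    out = classifier(x)
    # Gradient of the summed last-layer activations w.r.t. the input (vanilla saliency).
    out.sum().backward()
    imp = x.grad.detach().abs()
    flat = imp.flatten(1)
    lo = flat.min(dim=1, keepdim=True).values
    hi = flat.max(dim=1, keepdim=True).values
    gamma = ((flat - lo) / (hi - lo + 1e-12)).view_as(imp)   # reconstruction weight gamma*(x)
    beta = 1.0 - gamma                                       # sparsity weight beta*(x)
    return gamma, beta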
The aim of this work is to improve interpretability in time series prediction. To do so, they propose to use a relatively post-hoc procedure which learns a sparse representation informed by gradients of the prediction objective under a trained model. In particular, given a trained next-step classifier, they propose to train a sparse autoencoder with a combined objective of reconstruction and classification performance (while keeping the classifier fixed), so as to expose which features are useful for time series prediction. Sparsity, and sparse auto-encoders, have been widely used for the end of interpretability. In this sense, the crux of the approach is very well motivated by the literature.
SP:9fbad6b7a8485b00a2b22a46dca0f672f624c501
AN ATTENTION-BASED DEEP NET FOR LEARNING TO RANK
1 INTRODUCTION. Learning to rank applies supervised or semi-supervised machine learning to construct ranking models for information retrieval problems. In learning to rank, a query is given and a number of search results are to be ranked by their relevance to the query. Many problems in information retrieval can be formulated or partially solved by learning to rank. In learning to rank, there are typically three approaches: the pointwise, pairwise, and listwise approaches Liu (2011). The pointwise approach assigns an importance score to each pair of query and search result. The pairwise approach discerns which search result is more relevant for a certain query and a pair of search results. The listwise approach outputs the ranks for all search results given a specific query, and is therefore the most general. For learning to rank, neural networks are known to enjoy considerable success. Generally in such models, neural networks are applied to model the ranking probabilities with the features of queries and search results as the input. For instance, RankNet Burges et al. (2005) applies a neural network to calculate a probability for any search result being more relevant compared to another. Each pair of query and search result is combined into a feature vector, which is the input of the neural network, and a ranking priority score is the output. Another approach learns the matching mechanism between the query and the search result, which is particularly suitable for image retrieval. Usually the mechanism is represented by a similarity matrix which outputs a bilinear form as the ranking priority score; for instance, such a structure is applied in Severyn & Moschitti (2015). We postulate that it could be beneficial to apply multiple embeddings of the queries and search results to a learning to rank model. It has already been observed that, for training images, applying a committee of convolutional neural nets improves digit and character recognition Ciresan et al. (2011); Meier et al. (2011). With such an approach, the randomness of the architecture of a single neural network can be effectively reduced. For training text data, combining different techniques such as tf-idf, latent Dirichlet allocation (LDA) Blei et al. (2003), or word2vec Mikolov et al. (2013) has also been explored by Das et al. (2015). This is due to the fact that it is relatively hard to judge different models a priori. However, we have seen no literature on designing a mechanism to incorporate different embeddings for ranking. We hypothesize that applying multiple embeddings to a ranking neural network can improve accuracy not only by "averaging out" the error, but also by providing a more robust solution compared to a single embedding. For learning to rank, we propose the application of the attention mechanism Bahdanau et al. (2015); Cho et al. (2015), which has been demonstrated to be successful in focusing on different aspects of the input so that it can incorporate distinct features. It incorporates different embeddings with weights changing over time, derived from a recurrent neural network (RNN) structure. Thus, it can help us better summarize information from the query and search results. We also apply a decoder mechanism to rank all the search results, which provides a flexible listwise ranking approach that can be applied to both image retrieval and text querying.
Our model makes the following contributions: (1) it applies the attention mechanism to listwise learning to rank problems, which we believe is novel in the learning to rank literature; (2) it takes different embeddings of queries and search results into account, incorporating them with the attention mechanism; (3) double attention mechanisms are applied to both queries and search results. Section 2 reviews RankNet, similarity matching, and the attention mechanism in detail. Section 3 constructs the attention-based deep net for ranking and discusses how to calibrate the model. Section 4 demonstrates the performance of our model on image retrieval and text querying data sets. Section 5 discusses potential future research and concludes the paper.

2 LITERATURE REVIEW. To begin with, for RankNet, each pair of query and search result is turned into a feature vector. For two feature vectors x_0 ∈ R^{d_0} and x'_0 ∈ R^{d_0} sharing the same query, we define x_0 ≺ x'_0 if the search result associated with x_0 is ranked before that with x'_0, and vice versa. For x_0,
$$x_1 = f(W_0 x_0 + b_0) \in \mathbb{R}^{d_1}, \qquad x_2 = f(W_1 x_1 + b_1) \in \mathbb{R}^{d_2} = \mathbb{R},$$
and similarly for x'_0. Here W_l is a d_{l+1} × d_l weight matrix and b_l ∈ R^{d_{l+1}} is a bias for l = 0, 1. The function f is an element-wise nonlinear activation function; for instance, it can be the sigmoid function σ(u) = e^u / (1 + e^u). Then for RankNet, the ranking probability is defined as
$$P(x_0 \prec x'_0) = \frac{e^{x_2 - x'_2}}{1 + e^{x_2 - x'_2}}.$$
Therefore the ranking priority of two search results can be determined with a two-layer neural network structure, offering a pairwise approach. A deeper application of RankNet can be found in Song et al. (2014), where a five-layer RankNet is proposed and each data example is weighted differently for each user in order to adapt to personalized search. A global model is first trained with the training data, and then a different regularized model is adapted for each user with a validation data set. A number of models similar to RankNet have been proposed. For instance, LambdaRank Burges et al. (2006) speeds up RankNet by altering the cost function according to the change in NDCG caused by swapping search results. LambdaMART Burges (2010) applies the boosted tree algorithm to LambdaRank. Ranking SVM Joachims (2002) applies the support vector machine to pairs of search results. Additional models such as ListNet Cao et al. (2007) and FRank Tsai et al. (2007) can be found in the summary of Liu (2011). However, we differ from the above models not only because we integrate different embeddings with the attention mechanism, but also because we learn the matching mechanism between a query and search results with a similarity matrix. There are a number of papers applying this structure. For instance, Severyn & Moschitti (2015) applied a text convolutional neural net together with such a structure for text querying. For image querying, Wan et al. (2014) applied deep convolutional neural nets together with the OASIS algorithm Chechik et al. (2009) for similarity learning. Still, our approach is different from them in that we apply the attention mechanism and develop an approach allowing both image and text queries.
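As a concrete illustration of the pairwise RankNet scoring reviewed above, the following is a small, hedged sketch in PyTorch-style Python. It is not code from the paper; the two-layer scorer follows the equations above, and the input dimensions are placeholders chosen for illustration.

```python
# Illustrative sketch of RankNet's pairwise ranking probability (not from the paper).
import torch
import torch.nn as nn

class RankNetScorer(nn.Module):
    def __init__(self, d0, d1):
        super().__init__()
        self.layer0 = nn.Linear(d0, d1)   # x1 = f(W0 x0 + b0)
        self.layer1 = nn.Linear(d1, 1)    # x2 = f(W1 x1 + b1), scalar priority score
        self.f = torch.sigmoid            # element-wise nonlinearity f

    def forward(self, x0):
        return self.f(self.layer1(self.f(self.layer0(x0))))

scorer = RankNetScorer(d0=32, d1=16)
x0, x0_prime = torch.randn(1, 32), torch.randn(1, 32)   # two results for the same query
s, s_prime = scorer(x0), scorer(x0_prime)
p = torch.sigmoid(s - s_prime)   # P(x0 precedes x0') = e^(x2 - x2') / (1 + e^(x2 - x2'))
```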
We explain the idea of similarity matching as follows. We take a triplet (q, r, r') into account, where q denotes an embedding, i.e. a vectorized feature representation of a query, and (r, r') denotes the embeddings of two search results. A similarity function is defined as S_W(q, r) = q^T W r, and apparently r ≺ r' if and only if S_W(q, r) > S_W(q, r'). Note that we may create multiple deep convolutional nets so that we obtain multiple embeddings for the queries and search results. Therefore, it is a question how to incorporate them together. The attention mechanism weighs the embeddings with different sets of weights for each state t, which are derived with a recurrent neural network (RNN) from t = 1 to t = T. Therefore, for each state t, the different embeddings can be "attended" to differently by the attention mechanism, making the model more flexible. This model has been successfully applied to various problems. For instance, Bahdanau et al. (2015) applied it to neural machine translation with a bidirectional recurrent neural network. Cho et al. (2015) further applied it to image caption and video description generation with convolutional neural nets. Vinyals et al. (2015) applied it to solving combinatorial problems with the sequence-to-sequence paradigm. Note that in our scenario, the ranking process, i.e. sorting the search results from the most related to the least related for a query, can be modeled by different "states." Thus, the attention mechanism helps incorporate different embeddings along the ranking process, thereby providing a listwise approach. Below we explain our model in more detail.

3 MODEL AND ALGORITHM. 3.1 INTRODUCTION TO THE MODEL. Both queries and search results can be embedded with neural networks. Given an input vector x_0 representing a query or a search result, we denote the l-th layer in a neural net as x_l ∈ R^{d_l}, l = 0, 1, ..., L. We have
$$x_{l+1} = f(W_l x_l + b_l), \quad l = 0, 1, \ldots, L-1,$$
where W_l is a d_{l+1} × d_l weight matrix, b_l ∈ R^{d_{l+1}} is the bias, and f is a nonlinear activation function. If the goal is classification with C categories, then
$$(P(y = 1), \ldots, P(y = C)) = \mathrm{softmax}(W_L x_L + b_L),$$
where y is a class indicator and $\mathrm{softmax}(u) = \big(e^{u_1}/\sum_{i=1}^{d} e^{u_i}, \ldots, e^{u_d}/\sum_{i=1}^{d} e^{u_i}\big)$ for u ∈ R^d. From training this model, we may take the softmax probabilities as the embedding, and create different embeddings with different neural network structures. For images, convolutional neural nets (CNNs) LeCun et al. (1998) are more suitable, in which each node only takes information from neighborhoods of the previous layer. Pooling over each neighborhood is also performed for each layer of a convolutional neural net. With different networks, we can obtain different embeddings c^1, ..., c^M. In the attention mechanism below, we generate the weights α_t with an RNN structure and summarize c_t in a decoder series z_t:
$$e_{tm} = f_{ATT}(z_{t-1}, c^m, \alpha_{t-1}), \quad m = 1, \ldots, M, \qquad \alpha_t = \mathrm{softmax}(e_t), \qquad c_t = \sum_{m=1}^{M} \alpha_{tm}\, c^m, \qquad z_t = \phi_\theta(z_{t-1}, c_t).$$
Here f_ATT and φ_θ are chosen as tanh layers in our experiments. Note that the attention weight α_t at state t depends on the previous attention weight α_{t-1}, the embeddings, and the previous decoder state z_{t-1}, and the decoder series z_t sums up information of c_t up to state t. As aforementioned, given multiple embeddings, the ranking process can be viewed as applying different attention weights to the embeddings and generating the decoder series z_t, offering a listwise approach. However, since there are features for both queries and search results, we consider them separately, and apply double attention mechanisms to each of them.
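The attention update above can be sketched compactly in code. The snippet below is a hedged illustration rather than the paper's implementation: f_ATT and φ_θ are tanh layers as stated, but the exact way z_{t-1}, c^m, and α_{t-1} are combined (here, simple concatenation) and the dimensions are assumptions made for concreteness.

```python
# Hedged sketch of attending over M embeddings with an RNN-style decoder (not the paper's code).
import torch
import torch.nn as nn

class EmbeddingAttention(nn.Module):
    def __init__(self, d_emb, d_z, M):
        super().__init__()
        self.f_att = nn.Linear(d_z + d_emb + M, 1)   # f_ATT(z_{t-1}, c^m, alpha_{t-1}) -> e_tm
        self.phi = nn.Linear(d_z + d_emb, d_z)       # phi_theta(z_{t-1}, c_t) -> z_t

    def step(self, z_prev, C, alpha_prev):
        # C: (M, d_emb) stacked embeddings c^1..c^M
        M = C.size(0)
        e = torch.stack([
            self.f_att(torch.cat([z_prev, C[m], alpha_prev])).tanh().squeeze()
            for m in range(M)
        ])                                            # e_t
        alpha = torch.softmax(e, dim=0)               # alpha_t = softmax(e_t)
        c_t = (alpha.unsqueeze(1) * C).sum(dim=0)     # c_t = sum_m alpha_tm c^m
        z_t = torch.tanh(self.phi(torch.cat([z_prev, c_t])))
        return z_t, alpha

att = EmbeddingAttention(d_emb=8, d_z=16, M=3)
z, alpha = torch.zeros(16), torch.full((3,), 1.0 / 3)
C = torch.randn(3, 8)
for t in range(5):                                    # unroll the decoder for T = 5 states
    z, alpha = att.step(z, C, alpha)
```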
Our full model is described below .
In this paper, the authors propose to use attention to combine multiple input representations for both query and search results in the learning to rank task. When these representations are embeddings from differentiable functions, they can be jointly learned with the neural network that predicts rankings. A limited set of experiments suggests the proposed approach very mildly outperforms benchmark approaches.
SP:969c99a939e5b56335f08b0d5828fa5a28842db0
The paper proposed an attention-based deep neural network for implementing a 'learning to rank' algorithm. In particular, the proposed method implements a listwise approach which outputs the ranks for all search results given a query. The search results are claimed to be sorted by their degree of relevance or importance to the query. However, it is not clear to me how the ranking was decided in equation 6 by the softmax function. For example, as per section 4, documents of the same topic are considered related; it is then unclear how the proposed model was trained when one document has higher relevance than the others in the same topic category.
SP:969c99a939e5b56335f08b0d5828fa5a28842db0
Hybrid Weight Representation: A Quantization Method Represented with Ternary and Sparse-Large Weights
1 INTRODUCTION. Deep neural networks have made considerable progress in various tasks such as image classification (LeCun et al. 1998, Simonyan & Zisserman 2014, Szegedy et al. 2015), object detection (Ren et al. 2015, Liu et al. 2016), and speech recognition (Graves et al. 2013, Amodei et al. 2016). However, outstanding neural networks usually require deeper and/or wider layers, which makes them hard to deploy on mobile and embedded devices. In response to this problem, many studies have set their sights on more efficient networks. Various methods such as pruning (He et al. 2017), lightweight architectures (Howard et al. 2017), and quantization (Courbariaux et al. 2015) have been used to reduce model size and/or computational complexity effectively. In ternarization, accuracy degradation results from quantizing values in a limited range with only 2 bits. For example, ternary weight networks (TWN, Li et al. 2016) yield only three quantized values, which prevents the networks from utilizing high weight values. As noted in Han et al. 2015b, large-valued weights tend to play an important role in prediction; therefore, the absence of large values can cause accuracy degradation. To solve this problem, our paper proposes a hybrid weight representation (HWR) that expresses networks with both ternary weights (TW) and sparse-large weights (SLW). By taking advantage of both TW and SLW, the proposed HWR method can preserve the model size of ternary weights while avoiding accuracy degradation. To be specific, the large values of SLW help networks improve their accuracy. Furthermore, SLW can be encoded with the one remaining state that is not used to store TW in a 2-bit representation, which allows the networks to keep their model size similar to that of ternary weights. The compression rate of the encoding method is affected by the entropy of the weight distributions. To train narrower distributions for the efficiency of HWR, we also introduce a centralized quantization (CQ) process and a weighted ridge (WR) regularizer. Figure 1 shows the differences between conventional quantization and HWR. As shown in Figure 1, there is a small number of SLW, and the indices of encoded SLW are allocated in storage, unlike TW. We conduct various experiments showing that HWR obtains better classification accuracy with a similar model size compared to trained ternary quantization (TTQ, Zhu et al. 2016), a baseline ternarization method. The experiments are carried out on CIFAR-100 (Krizhevsky et al. 2009) and ImageNet (Russakovsky et al. 2015). We use AlexNet (Krizhevsky et al. 2012) and ResNet-18 (He et al. 2016) as baseline networks. Our proposed representation improves the AlexNet performance on CIFAR-100 by 4.15% with only a 1.13% increase in model size. The contributions of this paper are as follows:
• We propose a hybrid weight representation (HWR), including both values (TW) and indices of values (SLW). SLW allows the networks to improve their accuracy. Besides, the model size can be preserved by encoding SLW with the one remaining state of the 2 bits used for TW.
• We propose a training process, namely centralized quantization (CQ), to improve the efficiency of HWR. In CQ, we sparsify almost all large weights toward ternary weights. The low entropy of the centralized distribution improves the compression rate of the encoding.
• We propose a regularizer, namely weighted ridge (WR), which gives more penalty to large weights. WR is utilized to centralize weights into narrower distributions and to categorize the weights into TW and SLW.

2 RELATED WORK. Quantization. In low-precision training, one major difference from full-precision training is that the conventional 32-bit weights (w) are discretized by a quantization function and represented with a finite number of elements. The discretized weights (w_q) are multiplied by input matrices in the feed-forward pass. For example, binarized neural networks (BNN, Courbariaux & Bengio 2016) utilize a sign function to quantize weights and activations to {-1, +1}. The binarized weights (w_b) are defined as:
$$w_b = \mathrm{sign}(w), \qquad \mathrm{sign}(x) = \begin{cases} +1 & \text{if } x \ge 0 \\ -1 & \text{otherwise} \end{cases}$$
In XNOR-Net (Rastegari et al. 2016), even the input data are binarized. Furthermore, the multiplication and addition operations in convolution layers are replaced with XNOR and bit-count operations, respectively. In TWN (Li et al. 2016), the binarized weights are pruned by thresholding:
$$w_t = \begin{cases} +W_E & \text{if } w > \Delta \\ 0 & \text{if } |w| \le \Delta \\ -W_E & \text{if } w < -\Delta \end{cases}, \qquad \Delta = 0.7 \cdot E(|w|), \qquad W_E = \underset{i \in \{i \,|\, \Delta < |w(i)|\}}{E}\big(|w(i)|\big)$$
In DoReFa-Net (Zhou et al. 2016), experiments are carried out over a wider range of bit-widths and the gradients are also quantized. The weights quantized to k bits (w_k) by a quantization function Q_k(·) are described as:
$$w_k = 2 \cdot Q_k\!\left(\frac{\tanh(w)}{2\max(|\tanh(w)|)} + 0.5\right) - 1, \qquad Q_k(x) = \frac{\mathrm{round}\big(x \cdot (2^k - 1)\big)}{2^k - 1}$$
In back-propagation, there is a major concern caused by the discretization of the full-precision weights w. The derivatives of the discretizing functions such as sign(·) and Q_k(·) are zero for almost all input values. Therefore, the gradients of w, calculated from the discretized weights (w_b, w_t, and w_k), are also zero and prevent w from being optimized. To solve this vanishing-gradient problem, the straight-through estimator (STE) method (Hinton et al. 2012, Bengio et al. 2013) was proposed, in which the gradients of the discretizing functions are not calculated. In other words, the gradient of w ($\frac{\partial L}{\partial w}$) is replaced with the gradient of the discretized weights ($\frac{\partial L}{\partial w_b}$, $\frac{\partial L}{\partial w_t}$, and $\frac{\partial L}{\partial w_k}$) instead of back-propagating zero gradients from the discretizing functions:
$$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial w_b} = \frac{\partial L}{\partial w_t} = \frac{\partial L}{\partial w_k}$$
The methods above are linear quantization methods, which have the same intervals between adjacent quanta. By quantizing with the same intervals, float operations can be replaced with integer operations. There are also non-linear quantization methods in which the weights have irregular intervals. For instance, deep compression (Han et al. 2015a) clusters the weights to quantize them and fine-tunes the quantized weights of each clustering group. In TTQ (Zhu et al. 2016), the weights are quantized to ternary values with two trainable scale coefficients for negative and positive weight values, i.e., {-W_n, 0, +W_p}. In these cases, the model size can be significantly reduced by replacing the weight values with their indices. However, the indices need to be transformed back into weight values, and the irregular intervals make it difficult to utilize integer-based and bit-wise operations. A brief sketch of these quantizers with the STE is given below.
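The following is a hedged sketch, in PyTorch-style Python, of the binarization and DoReFa-style quantizers above together with a straight-through estimator. It is an illustration based only on the formulas quoted here, not the released code of any of the cited works.

```python
# Illustrative sketch (not from the cited papers): sign / DoReFa-style quantization with an STE.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        # w_b = sign(w), with sign(0) = +1 as in the definition above
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: dL/dw is taken to be dL/dw_b
        return grad_output

def dorefa_quantize(w, k):
    # w_k = 2 * Q_k( tanh(w) / (2 * max|tanh(w)|) + 0.5 ) - 1,  Q_k(x) = round(x*(2^k-1)) / (2^k-1)
    t = torch.tanh(w)
    x = t / (2 * t.abs().max()) + 0.5
    q = torch.round(x * (2 ** k - 1)) / (2 ** k - 1)
    w_k = 2 * q - 1
    # STE trick: the forward pass uses w_k, the backward pass passes gradients straight to w
    return w + (w_k - w).detach()

w = torch.randn(64, requires_grad=True)
w_b = BinarizeSTE.apply(w)        # binarized weights
w_2bit = dorefa_quantize(w, k=2)  # 2-bit DoReFa-style quantization
```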
Entropy coding. Huffman coding (Van Leeuwen 1976) aims to compress bit-streams in a lossless manner by reducing the bit-length of each element with an optimal prefix tree. The entropy of the weight distribution is one of the important factors that determine the compression rate of Huffman coding. Deep compression (Han et al. 2015a) pruned the weights to obtain lower entropy and thus saved more storage when applying Huffman coding.

Regularization. Regularization is utilized to steer weights in a desired direction by imposing a penalty. Two conventional regularizers are ridge (L2) and lasso (L1) (Han et al. 2015b). L2 weight decay usually prevents networks from being biased toward the training dataset by restricting the weights from growing. The lasso penalty pushes weights that are unrelated to predicting the outputs toward zero during training. Moreover, the explicit loss of Zhou et al. 2018 makes it possible to quantize weights by controlling the strength of their regularizer. The penalty of regularization can thus be utilized to change the entropy of the weight distribution. From these previous studies, we observe that linearly quantized ternary weights only require three of the four states available in a 2-bit representation, so the remaining state can potentially be utilized as a prefix for more extended weights, improving the efficiency of quantization. Furthermore, we deduce that regularization can help us generate a weight distribution with lower entropy, maximizing the compression efficiency when encoding the extended weights.

3 METHOD. In this section, we explain: i) how centralized quantization (CQ) centralizes weights toward ternary values and categorizes the weights into TW and SLW; and ii) how the quantized weights are encoded to be expressed as the hybrid weight representation (HWR). The detailed processes are illustrated in Figure 2.

3.1 BASIC QUANTIZATION METHOD. The basic quantization (BQ) uses a round function to simply quantize the full-precision weights w. The quantization function Q_w(·) of BQ is fixed for each layer during training. Equation 1 shows how w is quantized to w_q by Q_w(·). The rng in Eq. 1 is the fixed range of w during low-precision training, determined by the maximum absolute value of the pre-trained weights (M_wp = max(|w_p|)). Before being passed to Q_w(·), w is clipped to w_c by rng to prevent mis-quantization by the round function. Q_w(·) also requires the number of quantization states s, determined by the number of bits. To be specific about Q_w(·), w_c is scaled by a float value so it can be discretized by the round function, and is then restored by the reciprocal of that float value. Using the STE, the derivative of w_c ($\frac{\partial L}{\partial w_c}$) is replaced with the derivative of w_q ($\frac{\partial L}{\partial w_q}$). By clipping w, we obtain a saturation effect on w as in BNN (Courbariaux & Bengio 2016); if some weights still exceed rng, w can be clipped again after the gradient update.
$$w_q = Q_w(w_c, rng, s) = \mathrm{round}\!\left(w_c \cdot \frac{(s-1)/2}{rng}\right) \cdot \frac{rng}{(s-1)/2}, \qquad w_c = \mathrm{clip}(w, -rng, rng) \qquad (1)$$
To quantize activation values, we restrict their range by using the ReLU1 function as the activation. The activation values a = ReLU1(x) can be quantized to a_q in k bit-width by a round function, as in Equation 2. As shown in TTQ (Zhu et al. 2016), initializing with a pre-trained model helps the networks improve their quantization performance. To take advantage of this, our training starts from a full-precision model whose weights are pre-trained with the ReLU1 activation.
$$a = \mathrm{ReLU1}(x) = \mathrm{clip}(x, 0, 1) = \min(\max(0, x), 1), \qquad a_q = Q_a(a, k) = \frac{\mathrm{round}\big(a \cdot (2^k - 1)\big)}{2^k - 1} \qquad (2)$$
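To make Equations 1 and 2 concrete, here is a small, hedged sketch of the basic quantization step in PyTorch-style Python. It illustrates the formulas only and is not the authors' implementation; realizing the STE with the usual detach trick is an implementation assumption.

```python
# Hedged sketch of the basic quantization (BQ) of Eqs. (1)-(2); not the authors' code.
import torch

def quantize_weights(w, rng, s):
    # Eq. (1): w_c = clip(w, -rng, rng); w_q = round(w_c * ((s-1)/2)/rng) * rng/((s-1)/2)
    w_c = torch.clamp(w, -rng, rng)
    scale = (s - 1) / 2.0 / rng
    w_q = torch.round(w_c * scale) / scale
    # STE: forward pass uses w_q, while the gradient flows straight through to w_c / w
    return w_c + (w_q - w_c).detach()

def quantize_activations(x, k):
    # Eq. (2): a = ReLU1(x) = clip(x, 0, 1); a_q = round(a * (2^k - 1)) / (2^k - 1)
    a = torch.clamp(x, 0.0, 1.0)
    a_q = torch.round(a * (2 ** k - 1)) / (2 ** k - 1)
    return a + (a_q - a).detach()

w = torch.randn(128, requires_grad=True)
rng = w.detach().abs().max()            # rng fixed from pre-trained weights, M_wp = max|w_p|
w_q = quantize_weights(w, rng, s=4)     # e.g. 4 states for a 2-bit representation
a_q = quantize_activations(torch.randn(128), k=2)
```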
The paper proposes a hybrid weight representation method in which the weights of the neural network are split into two portions: a major portion of ternary weights and a minor portion of weights represented with a different number of bits. The two portions are differentiated by using the previously unused state of a typical ternary neural network, since only three of the four states of a 2-bit representation are used. The experiments are solid, based on the selected baseline models on the CIFAR-100 and ImageNet datasets.
SP:ad01c6b1219ff781129a7985c073e72ba5967763
This paper is about quantization: representing values with a finite number of states at low bit width using discretization. In particular, the authors propose an approach to tackle the problems associated with previous ternarization methods, which quantize weights to three values. Their approach is a hybrid weight representation method in which a network carries two weight types: ternary weights and sparse-large weights. The ternary weights need 3 of the 4 states available with 2 bits; the one remaining state is used to indicate a sparse-large weight. They also propose an approach to centralize the weights towards ternary values. Their experiments show that the approach outperforms other compressed-model approaches, with an increase in AlexNet performance on CIFAR-100 while increasing model size by only 1.13%.
SP:ad01c6b1219ff781129a7985c073e72ba5967763
Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control
1 INTRODUCTION. Deep reinforcement learning (RL) has revolutionized the fields of AI and machine learning over the last decade. The introduction of deep learning has achieved unprecedented success in solving many problems that were intractable in the field of RL, such as playing Atari games from pixels and performing robotic control tasks (Mnih et al., 2015; Lillicrap et al., 2015; Tassa et al., 2018). Unfortunately, similar to the case of deep neural network classifiers with adversarial examples, recent studies show that deep RL agents are also vulnerable to adversarial attacks. A commonly-used threat model allows the adversary to manipulate the agent's observations at every time step, where the goal of the adversary is to decrease the agent's total accumulated reward. As a pioneering work in this field, (Huang et al., 2017) show that by leveraging the FGSM attack on each time frame, an agent's average reward can be significantly decreased with small adversarial input perturbations in five Atari games. (Lin et al., 2017) further improve the efficiency of the attack in (Huang et al., 2017) by leveraging heuristics for detecting a good time to attack and luring agents to bad states with sample-based Monte-Carlo planning on a trained generative video prediction model. Since the agents have discrete actions in Atari games (Huang et al., 2017; Lin et al., 2017), the problem of attacking Atari agents often reduces to the problem of finding adversarial examples on image classifiers, as also pointed out in (Huang et al., 2017), where the adversaries intend to craft input perturbations that drive the agent's new action to deviate from its nominal action. However, for agents with continuous actions, the above strategies cannot be directly applied. Recently, (Uesato et al., 2018) studied the problem of adversarial testing for continuous control domains in a similar but slightly different setting. Their goal was to efficiently and effectively find catastrophic failures given a trained agent and to predict their failure probability. The key to success in (Uesato et al., 2018) is the availability of the agent's training history. However, such information may not always be accessible to users, analysts, and adversaries. Besides, although it may not be surprising that adversarial attacks exist for deep RL agents, since adversarial attacks have been shown to be possible for neural network models in various supervised learning tasks, the vulnerability of RL agents cannot be easily discovered by existing baselines, which are model-free and built upon random searches and heuristics. This is also verified by our extensive experiments on various domains (e.g. walker, humanoid, cartpole, and fish), where the agents still achieve close to their original best rewards even with baseline attacks applied at every time step. Hence it is important and necessary to have a systematic methodology for designing non-trivial adversarial attacks that can efficiently and effectively discover the vulnerabilities of deep RL agents; this is indeed the motivation of this work. This paper takes a first step in this direction by proposing the first sample-efficient model-based adversarial attack. Specifically, we study the robustness of deep RL agents in a more challenging setting where the agent has continuous actions and its training history is not available.
∗Work done during summer internship at DeepMind, UK. ♣ Equal contributions.
We consider threat models in which the adversary is allowed to manipulate an agent's observations or actions with small perturbations, and we propose a two-step algorithmic framework to find efficient adversarial attacks based on learned dynamics models. Experimental results show that our proposed model-based attack can successfully degrade agent performance and is also more effective and efficient than model-free attack baselines. The contributions of this paper are the following:
• To the best of our knowledge, we propose the first model-based attack on deep RL agents with continuous actions. Our proposed attack algorithm is a general two-step algorithm and can be directly applied to the two commonly-used threat models (observation manipulation and action manipulation).
• We study the efficiency and effectiveness of our proposed model-based attack against model-free attack baselines based on random searches and heuristics. We show that our model-based attack can degrade agent performance in numerous MuJoCo domains by up to 4× in terms of total reward and up to 4.6× in terms of distance to unsafe states (smaller means stronger attacks) compared to the model-free baselines.
• Our proposed model-based attack also outperforms all the baselines by a large margin in a weaker adversary setting where the adversary cannot attack at every time step. In addition, an ablation study on the effect of planning length in our proposed technique suggests that our method can remain effective even when the learned dynamics model is not very accurate.

2 BACKGROUND. Adversarial attacks in reinforcement learning. Compared to the rich literature on adversarial examples in image classification (Szegedy et al., 2013) and other applications (including natural language processing (Jia & Liang, 2017), speech (Carlini & Wagner, 2018), etc.), there is relatively little prior work studying adversarial examples in deep RL. Among the first works in this field are (Huang et al., 2017) and (Lin et al., 2017), both of which focus on deep RL agents in Atari games with pixel-based inputs and discrete actions. In addition, both works assume the agent to be attacked has an accurate policy, and the problem of finding an adversarial perturbation of the visual input reduces to the problem of finding adversarial examples on image classifiers. Hence, (Huang et al., 2017) applied FGSM (Goodfellow et al., 2015) to find adversarial perturbations, and (Lin et al., 2017) further improved the efficiency of the attack with a heuristic for observing a good time to attack: when there is a large gap in the agent's action preference between the most-likely and least-likely action. In a similar direction, (Uesato et al., 2018) study the problem of adversarial testing by leveraging rejection sampling and the agent's training histories. With the availability of training histories, (Uesato et al., 2018) successfully uncover bad initial states with far fewer samples than conventional Monte-Carlo sampling techniques. Recent work by (Gleave et al., 2019) considers an alternative setting where the agent is attacked by another agent (known as an adversarial policy), which is different from the two threat models considered in this paper. Finally, besides adversarial attacks in deep RL, a recent work (Wang et al., 2019) studies verification of deep RL agents under attacks, which is beyond the scope of this paper. Learning dynamics models.
Model-based RL methods first acquire a predictive model of the environment dynamics and then use that model to make decisions (Atkeson & Santamaria, 1997). These model-based methods tend to be more sample-efficient than their model-free counterparts, and the learned dynamics models can be useful across different tasks. Various works have focused on the most effective ways to learn and utilize dynamics models for planning in RL (Kurutach et al., 2018; Chua et al., 2018; Chiappa et al., 2017; Fu et al., 2016).

3 PROPOSED FRAMEWORK. In this section, we first describe the problem setup and the two threat models considered in this paper. Next, we present an algorithmic framework to rigorously design adversarial attacks on deep RL agents with continuous actions.

3.1 PROBLEM SETUP AND FORMULATION. Let s_i ∈ R^N and a_i ∈ R^M be the observation vector and action vector at time step i, and let π: R^N → R^M be the deterministic policy (agent). Let f: R^N × R^M → R^N be the dynamics model of the system (environment), which takes the current state-action pair (s_i, a_i) as input and outputs the next state s_{i+1}. We now take the role of an adversary, and as an adversary, our goal is to drive the agent to the (unsafe) target states s_target within the budget constraints. We can formulate this goal as two optimization problems, as we illustrate shortly below. Within this formalism, we consider two threat models.

Threat model (i): Observation manipulation. For the threat model of observation manipulation, an adversary is allowed to manipulate the observation s_i that the agent perceives within an ε budget:
$$\|\Delta s_i\|_\infty \le \epsilon, \qquad L_s \le s_i + \Delta s_i \le U_s, \qquad (1)$$
where Δs_i ∈ R^N is the crafted perturbation and U_s ∈ R^N, L_s ∈ R^N are the observation limits.

Threat model (ii): Action manipulation. For the threat model of action manipulation, an adversary can craft Δa_i ∈ R^M such that
$$\|\Delta a_i\|_\infty \le \epsilon, \qquad L_a \le a_i + \Delta a_i \le U_a, \qquad (2)$$
where U_a ∈ R^M, L_a ∈ R^M are the limits of the agent's actions.

Our formulations. Given an initial state s_0 and a pre-trained policy π, our (adversary) objective is to minimize the total distance of each state s_i to the pre-defined target state s_target up to the unrolled (planning) horizon T. This can be written as the following optimization problems in Equations 3 and 4 for Threat models (i) and (ii), respectively:
$$\min_{\Delta s_i} \sum_{i=1}^{T} d(s_i, s_{target}) \quad \text{s.t.} \quad a_i = \pi(s_i + \Delta s_i), \quad s_{i+1} = f(s_i, a_i), \quad \text{Constraint (1)}, \quad i \in \mathbb{Z}_T, \qquad (3)$$
$$\min_{\Delta a_i} \sum_{i=1}^{T} d(s_i, s_{target}) \quad \text{s.t.} \quad a_i = \pi(s_i), \quad s_{i+1} = f(s_i, a_i + \Delta a_i), \quad \text{Constraint (2)}, \quad i \in \mathbb{Z}_T. \qquad (4)$$
A common choice of d(x, y) is the squared ℓ2 distance ‖x − y‖²₂, f is the learned dynamics model of the system, and T is the unrolled (planning) length using the dynamics model.

3.2 OUR ALGORITHM. In this section, we propose a two-step algorithm to solve Equations 3 and 4. The core of our proposal consists of two important steps: learning a dynamics model f of the environment and deploying optimization techniques to solve Equations 3 and 4. We first discuss the details of each step, and then present the full algorithm at the end of this section.

Step 1: learn a good dynamics model f. Ideally, if f is the exact (perfect) dynamics model of the environment and we have an optimization oracle to solve Equations 3 and 4, then the solutions are indeed the optimal adversarial perturbations that give the minimal total loss under the ε-budget constraints.
Thus, learning a good dynamics model can conceptually help in developing a strong attack. Depending on the environment, different forms of f can be applied. For example, if the environment of concern is close to a linear system, then we could let f(s, a) = As + Ba, where A and B are unknown matrices to be learned from the sample trajectory pairs (s_i, a_i, s_{i+1}). For a more complex environment, we can decide whether to still use a simple linear model (the next-state prediction may deviate far from the true next state, making the learned dynamics model less useful) or to switch to a non-linear model, e.g. a neural network, which usually has better predictive power but may require more training samples. In either case, the model parameters A, B or the neural network parameters can be learned via standard supervised learning from the sample trajectory pairs (s_i, a_i, s_{i+1}).

Step 2: solve Equations 3 and 4. Once we have learned a dynamics model f, the next task is to solve Equations 3 and 4 to compute the adversarial perturbations of observations/actions. When the planning (unrolled) length T > 1, Equation 3 usually cannot be directly solved by an off-the-shelf convex optimization toolbox, since the deep RL policy π is usually a non-linear and non-convex neural network. Fortunately, we can incorporate the two equality constraints of Equation 3 into the objective, and with the remaining ε-budget constraint (Equation 1), Equation 3 can be solved via projected gradient descent (PGD). Similarly, Equation 4 can be solved via PGD to obtain Δa_i. We note that, similar to n-step model predictive control, our algorithm can use a much larger planning (unrolled) length T when solving Equations 3 and 4 and then apply only the first n (≤ T) adversarial perturbations to the agent over n time steps. Besides, within the PGD framework, f is not limited to feed-forward neural networks. Our proposed attack is summarized in Algorithm 2 for Step 1 and Algorithm 3 for Step 2.

Algorithm 1 Collect trajectories
1: Input: pre-trained policy π, MaxSampleSize ns, environment env
2: Output: a set of trajectory pairs S
3: k ← 0, S ← ∅
4: s0 ← env.reset()
5: while k < ns do
6:   ak ← π(sk)
7:   sk+1 ← env.step(ak)
8:   S ← S ∪ {(sk, ak, sk+1)}
9:   k ← k + 1
10: end while
11: Return S
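As an illustration of Step 2, the following is a hedged sketch of solving Equation 3 (observation manipulation) with projected gradient descent through a learned dynamics model. It is not the authors' implementation; `pi` and `f` are assumed to be differentiable PyTorch modules, the step size, iteration count, and ε are placeholders, and the box limits L_s ≤ s_i + Δs_i ≤ U_s are omitted for brevity.

```python
# Hedged sketch of the PGD attack for Eq. (3); not the paper's code.
import torch

def pgd_observation_attack(s0, s_target, pi, f, T=20, eps=0.05, steps=50, lr=0.01):
    delta = torch.zeros(T, s0.numel(), requires_grad=True)   # one perturbation per time step
    for _ in range(steps):
        s, loss = s0, 0.0
        for i in range(T):
            a = pi(s + delta[i])                              # a_i = pi(s_i + delta_s_i)
            s = f(s, a)                                       # s_{i+1} = f(s_i, a_i)
            loss = loss + ((s - s_target) ** 2).sum()         # squared l2 distance to s_target
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad                          # gradient step on the objective
            delta.clamp_(-eps, eps)                           # project back onto the eps-ball (Eq. 1)
        delta.grad.zero_()
    return delta.detach()
```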
This paper proposed a new adversarial attack method based on model-based RL. Unlike existing adversarial attack methods on deep RL, the authors first approximate the dynamics model and then generate the adversarial samples by minimizing the total distance of each state to the pre-defined target state (i.e. planning). Using Cartpole, Fish, Walker, and Humanoid, the authors showed that the proposed method can fool the agents more effectively.
SP:8033aa140ced2ef797bb83036759dd73acca5623
Toward Evaluating Robustness of Deep Reinforcement Learning with Continuous Control
1 INTRODUCTION. Deep reinforcement learning (RL) has revolutionized the fields of AI and machine learning over the last decade. The introduction of deep learning has achieved unprecedented success in solving many problems that were intractable in the field of RL, such as playing Atari games from pixels and performing robotic control tasks (Mnih et al., 2015; Lillicrap et al., 2015; Tassa et al., 2018). Unfortunately, similar to the case of deep neural network classifiers with adversarial examples, recent studies show that deep RL agents are also vulnerable to adversarial attacks. A commonly-used threat model allows the adversary to manipulate the agent's observations at every time step, where the goal of the adversary is to decrease the agent's total accumulated reward. As a pioneering work in this field, (Huang et al., 2017) show that by leveraging the FGSM attack on each time frame, an agent's average reward can be significantly decreased with small input adversarial perturbations in five Atari games. (Lin et al., 2017) further improve the efficiency of the attack in (Huang et al., 2017) by leveraging heuristics to detect a good time to attack and luring agents to bad states with sample-based Monte-Carlo planning on a trained generative video prediction model. Since the agents have discrete actions in Atari games (Huang et al., 2017; Lin et al., 2017), the problem of attacking Atari agents often reduces to the problem of finding adversarial examples on image classifiers, as also pointed out in (Huang et al., 2017), where the adversaries intend to craft input perturbations that drive the agent's new action to deviate from its nominal action. However, for agents with continuous actions, the above strategies cannot be directly applied. Recently, (Uesato et al., 2018) studied the problem of adversarial testing for continuous control domains in a similar but slightly different setting. Their goal was to efficiently and effectively find catastrophic failures given a trained agent and to predict its failure probability. The key to success in (Uesato et al., 2018) is the availability of the agent's training history. However, such information may not always be accessible to users, analysts, and adversaries. Besides, it may not be surprising that adversarial attacks exist for deep RL agents, as adversarial attacks have been shown to be possible for neural network models in various supervised learning tasks. However, the vulnerability of RL agents cannot be easily discovered by existing baselines, which are model-free and built upon random searches and heuristics – this is also verified by our extensive experiments on various domains (e.g. walker, humanoid, cartpole, and fish), where the agents still achieve close to their original best rewards even with baseline attacks at every time step. Hence it is important and necessary to have a systematic methodology to design non-trivial adversarial attacks, which can efficiently and effectively discover the vulnerabilities of deep RL agents – this is indeed the motivation of this work. This paper takes a first step toward this direction by proposing the first sample-efficient model-based adversarial attack. Specifically, we study the robustness of deep RL agents in a more challenging setting where the agent has continuous actions and its training history is not available.
We consider threat models where the adversary is allowed to manipulate an agent's observations or actions with small perturbations, and we propose a two-step algorithmic framework to find efficient adversarial attacks based on learned dynamics models. Experimental results show that our proposed model-based attack can successfully degrade agent performance and is also more effective and efficient than model-free attack baselines. The contributions of this paper are the following: • To the best of our knowledge, we propose the first model-based attack on deep RL agents with continuous actions. Our proposed attack algorithm is a general two-step algorithm and can be directly applied to the two commonly-used threat models (observation manipulation and action manipulation). • We study the efficiency and effectiveness of our proposed model-based attack against model-free attack baselines based on random searches and heuristics. We show that our model-based attack can degrade agent performance in numerous MuJoCo domains by up to 4× in terms of total reward and up to 4.6× in terms of distance to unsafe states (smaller means stronger attacks) compared to the model-free baselines. • Our proposed model-based attack also outperforms all the baselines by a large margin in a weaker adversary setting where the adversary cannot attack at every time step. In addition, an ablation study on the effect of planning length in our proposed technique suggests that our method can still be effective even when the learned dynamics model is not very accurate. 2 BACKGROUND. Adversarial attacks in reinforcement learning. Compared to the rich literature on adversarial examples in image classification (Szegedy et al., 2013) and other applications (including natural language processing (Jia & Liang, 2017), speech (Carlini & Wagner, 2018), etc.), there is relatively little prior work studying adversarial examples in deep RL. Two of the first works in this field are (Huang et al., 2017) and (Lin et al., 2017), both of which focus on deep RL agents in Atari games with pixel-based inputs and discrete actions. In addition, both works assume the agent to be attacked has an accurate policy, and the problem of finding adversarial perturbations of the visual input reduces to the problem of finding adversarial examples on image classifiers. Hence, (Huang et al., 2017) applied FGSM (Goodfellow et al., 2015) to find adversarial perturbations, and (Lin et al., 2017) further improved the efficiency of the attack with a heuristic for observing a good time to attack – when there is a large gap in the agent's action preference between the most-likely and least-likely actions. In a similar direction, (Uesato et al., 2018) study the problem of adversarial testing by leveraging rejection sampling and the agent's training histories. With the availability of training histories, (Uesato et al., 2018) successfully uncover bad initial states with far fewer samples compared to conventional Monte-Carlo sampling techniques. Recent work by (Gleave et al., 2019) considers an alternative setting where the agent is attacked by another agent (known as an adversarial policy), which is different from the two threat models considered in this paper. Finally, besides adversarial attacks in deep RL, recent work (Wang et al., 2019) studies verification of deep RL agents under attacks, which is beyond the scope of this paper. Learning dynamics models.
Model-based RL methods first acquire a predictive model of the environment dynamics, and then use that model to make decisions (Atkeson & Santamaria, 1997). These model-based methods tend to be more sample efficient than their model-free counterparts, and the learned dynamics models can be useful across different tasks. Various works have focused on the most effective ways to learn and utilize dynamics models for planning in RL (Kurutach et al., 2018; Chua et al., 2018; Chiappa et al., 2017; Fu et al., 2016). 3 PROPOSED FRAMEWORK. In this section, we first describe the problem setup and the two threat models considered in this paper. Next, we present an algorithmic framework to rigorously design adversarial attacks on deep RL agents with continuous actions. 3.1 PROBLEM SETUP AND FORMULATION. Let s_i ∈ R^N and a_i ∈ R^M be the observation vector and action vector at time step i, and let π : R^N → R^M be the deterministic policy (agent). Let f : R^N × R^M → R^N be the dynamics model of the system (environment), which takes the current state-action pair (s_i, a_i) as input and outputs the next state s_{i+1}. We are now in the role of an adversary, and our goal is to drive the agent to the (unsafe) target state s_target within the budget constraints. We formulate this goal as two optimization problems, as illustrated below. Within this formalism, we consider two threat models. Threat model (i): Observation manipulation. For the threat model of observation manipulation, an adversary is allowed to manipulate the observation s_i that the agent perceives within an ε budget: ‖Δs_i‖_∞ ≤ ε, L_s ≤ s_i + Δs_i ≤ U_s, (1) where Δs_i ∈ R^N is the crafted perturbation and U_s ∈ R^N, L_s ∈ R^N are the observation limits. Threat model (ii): Action manipulation. For the threat model of action manipulation, an adversary can craft Δa_i ∈ R^M such that ‖Δa_i‖_∞ ≤ ε, L_a ≤ a_i + Δa_i ≤ U_a, (2) where U_a ∈ R^M, L_a ∈ R^M are the limits of the agent's actions. Our formulations. Given an initial state s_0 and a pre-trained policy π, our (adversary) objective is to minimize the total distance of each state s_i to the pre-defined target state s_target over the unrolled (planning) horizon T. This can be written as the following optimization problems, Equations 3 and 4, for threat models (i) and (ii) respectively:
min_{Δs_i} Σ_{i=1}^{T} d(s_i, s_target)  s.t.  a_i = π(s_i + Δs_i), s_{i+1} = f(s_i, a_i), Constraint (1), i ∈ Z_T,  (3)
min_{Δa_i} Σ_{i=1}^{T} d(s_i, s_target)  s.t.  a_i = π(s_i), s_{i+1} = f(s_i, a_i + Δa_i), Constraint (2), i ∈ Z_T.  (4)
A common choice of d(x, y) is the squared ℓ2 distance ‖x − y‖_2^2, f is the learned dynamics model of the system, and T is the unrolled (planning) length using the dynamics model. 3.2 OUR ALGORITHM. In this section, we propose a two-step algorithm to solve Equations 3 and 4. The core of our proposal consists of two steps: learn a dynamics model f of the environment, and deploy an optimization technique to solve Equations 3 and 4. We first discuss each step in detail, and then present the full algorithm at the end of this section. Step 1: learn a good dynamics model f. Ideally, if f is the exact (perfect) dynamics model of the environment and we have an optimization oracle to solve Equations 3 and 4, then the solutions are indeed the optimal adversarial perturbations that give the minimal total loss under the ε-budget constraints.
Thus, learning a good dynamics model can conceptually help in developing a strong attack. Depending on the environment, different forms of f can be used. For example, if the environment of concern is close to a linear system, we could let f(s, a) = As + Ba, where A and B are unknown matrices to be learned from the sampled trajectory pairs (s_i, a_i, s_{i+1}). For a more complex environment, we can decide whether to still use a simple linear model (the next-state prediction may deviate far from the true next state, making the learned dynamics model less useful) or instead switch to a non-linear model, e.g. a neural network, which usually has better predictive power but may require more training samples. In either case, the model parameters A, B or the neural network parameters can be learned via standard supervised learning on the sampled trajectory pairs (s_i, a_i, s_{i+1}). Step 2: solve Equations 3 and 4. Once we have learned a dynamics model f, the next task is to solve Equations 3 and 4 to compute the adversarial perturbations of observations/actions. When the planning (unrolled) length T > 1, Equation 3 usually cannot be solved directly by an off-the-shelf convex optimization toolbox, since the deep RL policy π is usually a non-linear and non-convex neural network. Fortunately, we can incorporate the two equality constraints of Equation 3 into the objective; with the remaining ε-budget constraint (Equation 1), Equation 3 can then be solved via projected gradient descent (PGD). Similarly, Equation 4 can be solved via PGD to obtain Δa_i. We note that, similar to n-step model predictive control, our algorithm can use a much larger planning (unrolled) length T when solving Equations 3 and 4 and then only apply the first n (≤ T) adversarial perturbations to the agent over n time steps. Moreover, within the PGD framework, f is not limited to feed-forward neural networks. Our proposed attack is summarized in Algorithm 2 for Step 1 and Algorithm 3 for Step 2.
Algorithm 1 Collect trajectories
1: Input: pre-trained policy π, MaxSampleSize n_s, environment env
2: Output: a set of trajectory pairs S
3: k ← 0, S ← ∅
4: s_0 ← env.reset()
5: while k < n_s do
6:   a_k ← π(s_k)
7:   s_{k+1} ← env.step(a_k)
8:   S ← S ∪ {(s_k, a_k, s_{k+1})}
9:   k ← k + 1
10: end while
11: Return S
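For the second threat model (Equation 4), the same PGD loop applies with the perturbation added to the executed action instead of the observation. Below is a short hedged sketch; the function and parameter names are illustrative assumptions, and the policy and dynamics model are assumed to be differentiable torch callables.

import torch

def pgd_action_attack(policy, f, s0, s_target, T, eps, act_dim, steps=50, lr=0.01):
    delta = torch.zeros(T, act_dim, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        s, loss = s0, 0.0
        for i in range(T):
            a = policy(s)               # the agent sees the clean state
            s = f(s, a + delta[i])      # but the executed action is perturbed
            loss = loss + ((s - s_target) ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():           # project back onto the ell_inf budget of Equation 2
            delta.clamp_(-eps, eps)
    return delta.detach()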
This paper looks at a new framework for adversarial attacks on deep reinforcement learning agents under continuous action spaces. They propose a model based approach which adds noise to either the observation or actions of the agent to push the agent to predefined target states. They then report results against several model-free/unlearned baselines on MuJoCo tasks using a policy learned through D4PG.
SP:8033aa140ced2ef797bb83036759dd73acca5623
GQ-Net: Training Quantization-Friendly Deep Networks
1 INTRODUCTION . Neural network quantization is a technique to reduce the size of deep networks and to bypass computationally and energetically expensive floating-point arithmetic operations in favor of efficient integer arithmetic on quantized versions of model weights and activations . Network quantization has been the focus of intensive research in recent years ( Rastegari et al. , 2016 ; Zhou et al. , 2016 ; Jacob et al. , 2018 ; Krishnamoorthi , 2018 ; Jung et al. , 2018 ; Louizos et al. , 2019 ; Nagel et al. , 2019 ; Gong et al. , 2019 ) , with most works belonging to one of two categories . The first line of work quantizes parts of the network while leaving a portion of its operations , e.g . computations in the first and last network layers in floating point . While such networks can be highly efficient , using bitwidths down to 5 or 4 bits with minimal loss in network accuracy ( Zhang et al. , 2018 ; Jung et al. , 2018 ) , they may be difficult to deploy in certain practical settings , due to the complexity of extra floating point hardware needed to execute the non-quantized portions of the network . Another line of work aims for ease of real world deployment by quantizing the entire network , including all weights and activations in all convolutional and fully connected layers ; we term this scheme strict quantization . Maintaining accuracy under strict quantization is considerably more challenging . While nearly lossless 8-bit strictly quantized networks have been proposed ( Jacob et al. , 2018 ) , to date state-of-the-art 4 bit networks incur large losses in accuracy compared to full precision reference models . For example , the strict 4-bit ResNet-18 model in Louizos et al . ( 2019 ) has 61.52 % accuracy , compared to 69.76 % for the full precision model , while the strict 4-bit MobileNet-v2 model in Krishnamoorthi ( 2018 ) has 62.00 % accuracy , compared to 71.88 % accuracy in full precision . To understand the difficulty of training accurate low-bitwidth strictly quantized networks , consider a common training procedure which begins with a pre-trained network , quantizes the model , then applies fine-tuning using straight-through estimators ( STE ) for gradient updates until the model achieves sufficient quantized accuracy . This process faces two problems . First , as the pre-trained model was not initially trained with the task of being subsequently quantized in mind , it may not be “ quantization-friendly ” . That is , the fine-tuning process may need to make substantial changes to the initial model in order to transform it to an accurate quantized model . Second , fine-tuning a model , especially at low bitwidths , is difficult due to the lack of accurate gradient information provided by STE . In particular , fine-tuning using STE is done by updating a model represented internally with floating point values using gradients computed at the nearest quantizations of the floating point values . Thus for example , if we apply 4 bit quantization to floating point model parameters in the range [ 0 , 1 ] , a random parameter will incur an average round-off error of 1/32 , which will be incorporated into the error in the STE gradient for this parameter , leading to possibly ineffective fine-tuning . To address these problems , we propose GQ-Net , a guided quantization training algorithm . The main goal of GQ-Net is to produce an accurate and quantization-friendly full precision model , i.e . 
a model whose quantized version, obtained by simply rounding each full precision value to its nearest quantized point, has nearly the same accuracy as itself. To do this, we design a loss function for the model which includes two components: one to minimize error with respect to the training labels, and another to minimize the distributional difference between the model's outputs and the outputs of the model's quantized version. This loss function has the effect of guiding the optimization process towards a model which is both accurate, by virtue of minimizing the first loss component, and also similar enough to its quantized version, due to minimization of the second component, to ensure that the quantized model is also accurate. In addition, because the first component of the loss function deals only with floating point values, it provides accurate gradient information during optimization, in contrast to STE-based optimization which uses biased gradients at rounded points; this further improves the accuracy of the quantized model. Since GQ-Net directly produces a quantized model which does not require further fine-tuning, the number of epochs required to train GQ-Net is substantially less than the total number of epochs needed to train and fine-tune a model using the traditional quantization approach, leading to significantly reduced wall-clock training time. We note that GQ-Net's technique is independent of and can be used in conjunction with other techniques for improving quantization accuracy, as we demonstrate in Section 4.3. Finally, we believe that the guided training technique we propose can also be applied to other neural network structural optimization problems such as network pruning. We implemented GQ-Net in PyTorch, and our codebase and trained models are publicly available. We validated GQ-Net on the ImageNet classification task with the widely used ResNet-18 and compact MobileNet-v1/v2 models, and also performed a thorough set of ablation experiments to study different aspects of our technique. In terms of quantization accuracy loss compared to reference floating point models, GQ-Net strictly quantized using 4-bit weights and activations surpasses existing state-of-the-art strict methods by up to 2.7×, and also improves upon these methods even when they use higher bitwidths. In particular, 4-bit GQ-Net applied to ResNet-18 achieves 66.68% top-1 accuracy, compared to 61.52% accuracy in Louizos et al. (2019) and a reference floating point accuracy of 69.76%, while on MobileNet-v2 GQ-Net achieves 66.15% top-1 accuracy compared to 62.00% accuracy in Krishnamoorthi (2018) and a reference floating point accuracy of 71.88%. Additionally, GQ-Net achieves these results using layer-wise quantization, as opposed to channel-wise quantization in Krishnamoorthi (2018), which further enhances the efficiency and practicality of the technique. 2 RELATED WORKS. Neural network quantization has been the subject of extensive investigation in recent years. Quantization can be applied to different parts of neural networks, including weights, activations or gradients. Courbariaux et al. (2015), Hou et al. (2016), Zhou et al. (2017), Hou & Kwok (2018) and other works quantized model weights to binary, ternary or multi-bit integers to reduce model size. Wei et al.
(2018) quantized activations of object detection models for knowledge transfer. Alistarh et al. (2016) and Hou et al. (2019) quantized model gradients to accelerate distributed training. Another line of work quantizes both weights and activations to accelerate model inference by utilizing fixed-point or integer arithmetic. These works include Courbariaux et al. (2016), Rastegari et al. (2016), Gysel et al. (2016), Krishnamoorthi (2018), Choi et al. (2018), Zhang et al. (2018), Jung et al. (2018). A large set of methods has been proposed to improve training or fine-tuning for network quantization. Straight-through estimators (STE) (Bengio et al., 2013) propagate gradients through non-differentiable operations with the identity mapping. Other training methods "soften" non-differentiable operations into similar differentiable ones so that gradients can pass through, then gradually anneal to piecewise continuous functions by applying stronger constraints; this line of work includes Louizos et al. (2019), Gong et al. (2019), Bai et al. (2018). Some works regard quantization as a stochastic process that produces parameterized discrete distributions, and guide training using gradients with respect to these parameters (Soudry et al., 2014; Shayer et al., 2018). Another line of work does not require fine-tuning, and instead re-calibrates or modifies the original network to recover accuracy using little or even no data (He & Cheng, 2018; Nagel et al., 2019; Meller et al., 2019). Several recent works have focused on quantizing all parts of a network, typically in order to support deployment using only integer arithmetic units and to avoid the cost and complexity of additional floating point units. Gysel et al. (2016) proposed performing network inference using dynamic fixed-point arithmetic, where bitwidths for the integer and mantissa parts are determined based on a model's weight distribution. Jacob et al. (2018); Krishnamoorthi (2018) proposed the quantization training and deployment algorithm behind the TensorFlow-Lite quantization runtime, which generates strictly quantized networks that can be easily implemented in hardware. Louizos et al. (2019) proposed a training method for strictly quantized models based on annealing a smooth quantization function to a piecewise continuous one. There has also been recent work on parameterized quantizers which are optimized during quantization training. Choi et al. (2018) introduced learnable upper bounds to control the range of quantization. Zhang et al. (2018) proposed quantizers with a learnable basis which can be executed using fixed-point arithmetic. Jung et al. (2018) proposed to optimize weight scaling and quantization ranges jointly from task losses. 3 GQ-NET. In this section we describe the architecture of our proposed GQ-Net and then discuss components of the architecture which can be tuned to improve performance. 3.1 GQ-NET ARCHITECTURE. The major components of GQ-Net include the following, and are illustrated in Figure 1: 1. An L-layer neural network h_W(·) with all computations performed using full precision floating point arithmetic. Here W = {W_1, ..., W_L} denotes the parameters (weights) of the model, with W_i, i ∈ 1...L, being the weights in layer i, expressed in floating point. 2. The quantized model ĥ_{W,Q}(·) built from h_W(·). Here Q = {Q^w_1, ..., Q^w_L, Q^a_0, ...
, Q^a_L} is a set of quantizers, i.e. mappings from floating point to (scaled) integer values; the quantizers may be parameterized, and we describe how to optimize these parameters in Section 3.2. Q^w_i quantizes the weights W_i and Q^a_i quantizes the activations in layer i. Let x_0 denote an input to h_W. To construct the output ĥ_{W,Q}(x_0) of the quantized network, we proceed layer by layer. We first quantize the weights in layers i = 1, ..., L as ŵ_i = Q^w_i(w_i), and also quantize the input by setting x̂_0 = Q^a_0(x_0). We then compute the quantized activations x̂_i in layer i iteratively for i = 1, ..., L using x̂_i = Q^a_i(x̃_i), where x̃_i = g_i(ŵ_i ∗ x̂_{i−1}), g_i(·) denotes the nonlinearity in layer i, and ∗ denotes convolution. Note that since ŵ_i and x̂_{i−1} are quantized, x̃_i can be computed using integer or fixed-point arithmetic. 3. Next, we construct a loss function L incorporating both the training loss L_f of the full precision model h_W and a loss L_q capturing the difference between h_W and the quantized model ĥ_{W,Q}: L = ω_f L_f + ω_q L_q. (1) Here ω_f, ω_q ∈ R are parameters capturing the relative importance of the training loss versus the distributional loss. In this paper, we focus on image classification networks, and thus we set L_f to be the cross-entropy loss between the outputs of h_W and the training labels. In addition, we set L_q = D_KL(σ(h_W(·)) || σ(ĥ_{W,Q}(·))), where σ denotes the softmax function, i.e. the KL divergence between the distributions σ(h_W) and σ(ĥ_{W,Q}) on each input. Hence, minimizing the second term in L corresponds to pushing the floating point and quantized models to behave similarly to each other. Since the weight parameters W appear in both terms of L, the two terms can give conflicting signals for updating W during the optimization of L, causing the optimization to be unstable. We discuss how to deal with this problem in Section 3.2. To train GQ-Net, we successively take mini-batches of training samples and labels and use them to compute L during the forward pass, and propagate gradients with respect to W and the parameters of Q during the backward pass in order to minimize L. After L has converged sufficiently, we take the quantized weights in ĥ_{W,Q}(·) as the quantized model.
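To make the two-branch objective concrete, below is a minimal PyTorch sketch of Equation 1 for a toy fully-connected classifier. The uniform quantizer, the straight-through rounding used in the quantized branch, and all function names and hyperparameters are illustrative assumptions; the paper's own quantizer parameterization and gradient handling are described in Section 3.2.

import torch
import torch.nn.functional as F

def fake_quant(x, num_bits=4):
    """Uniform quantization simulated in floating point; the backward pass treats
    rounding as the identity (a standard straight-through stand-in, not the paper's scheme)."""
    x_min, x_max = x.min().detach(), x.max().detach()
    scale = (x_max - x_min) / (2 ** num_bits - 1) + 1e-8
    x_q = torch.round((x - x_min) / scale) * scale + x_min
    return x + (x_q - x).detach()

def forward_fp(weights, x):
    # Full-precision forward pass of a toy fully-connected classifier.
    h = x
    for w in weights[:-1]:
        h = F.relu(h @ w)
    return h @ weights[-1]

def forward_quant(weights, x, num_bits=4):
    # Quantized forward pass: every weight matrix and every activation is rounded.
    h = fake_quant(x, num_bits)
    for w in weights[:-1]:
        h = fake_quant(F.relu(h @ fake_quant(w, num_bits)), num_bits)
    return h @ fake_quant(weights[-1], num_bits)

def gq_net_loss(weights, x, labels, w_f=1.0, w_q=1.0, num_bits=4):
    logits_fp = forward_fp(weights, x)
    logits_q = forward_quant(weights, x, num_bits)
    loss_f = F.cross_entropy(logits_fp, labels)                 # L_f: task loss of the FP model
    loss_q = F.kl_div(F.log_softmax(logits_q, dim=-1),          # L_q: KL(softmax(FP) || softmax(quantized))
                      F.softmax(logits_fp, dim=-1), reduction="batchmean")
    return w_f * loss_f + w_q * loss_q

In use, gq_net_loss(weights, x, labels) would be minimized with a standard optimizer over the floating point weights (and, in the full method, over the quantizer parameters as well); the paper's scheduling of ω_f and ω_q and its stop-gradient treatment of the second term are not shown here.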
This work introduces GQ-Net, a novel technique that trains quantization-friendly networks supporting 4-bit weights and activations. This is achieved by introducing a loss function that consists of a linear combination of two components: one that aims to minimize the error of the network on the training labels of the dataset, and one that aims to minimize the discrepancy of the model output with respect to the output of the model when the weights and activations are quantized. The authors argue that this has the effect of “guiding” the optimization procedure toward networks that can be quantized without loss of performance. For the discrepancy metric the authors use the KL divergence from the predictive distribution of the floating point model to that of the quantized model. The authors then propose several extra techniques that boost the performance of their method: 1. scheduling the weighting coefficients of the two loss terms (something reminiscent of iterative pruning methods), 2. stopping the gradient of the floating point model w.r.t. the second loss term, 3. learning the parameters of the uniform quantizer, 4. alternating optimization between the weights and the parameters of the quantizers, and 5. using separate batch normalization statistics for the floating point and quantized models. The authors then evaluate their method on Imagenet classification using ResNet-18 and Mobilenet v1 / v2, while also performing an ablation study of the extra tricks that they propose.
SP:bd4fee07f87a3b40b274d1cbbae3ac07f11cb48d
GQ-Net: Training Quantization-Friendly Deep Networks
In this paper, the authors propose a framework for 4-bit quantization of CNNs. Specifically, during training, the proposed method contains a full precision branch supervised by a classification loss for accurate prediction and representation learning, as well as a parameterized quantization branch that approximates the full precision branch. A quantization loss between the full precision branch and the quantization branch is defined to minimize the difference between activation distributions. The authors propose a series of improvements, including alternating optimization, dynamic scheduling, gradient detaching and batch normalization handling, to help boost the performance to SOTA under 4-bit quantization.
SP:bd4fee07f87a3b40b274d1cbbae3ac07f11cb48d
Policy Optimization by Local Improvement through Search
1 INTRODUCTION . Reinforcement learning ( RL ) has seen a great deal of success in recent years , from playing games ( Mnih et al. , 2015 ; Silver et al. , 2016 ) to robotic control ( Gu et al. , 2017 ; Singh et al. , 2019 ) . These successes showcase the power of learning from direct interactions with environments . However , a well-known disadvantage of reinforcement learning approaches is the demand for a large number of samples for learning ; for example , OpenAI Five learned to solve DOTA using Proximal Policy Optimization ( PPO ) ( Schulman et al. , 2017 ) by playing 180 years worth of games against itself daily ( OpenAI , 2018 ) . The issue with sample complexity can be mitigated by learning from expert behavior . For example , AlphaStar ( The Alpha Star Team , 2019 ) uses models that are pre-trained using supervised learning techniques on expert human demonstrations , before RL is used for refinement . A variety of strategies have been developed in imitation learning to learn from expert behavior , where the expert can be a human ( Stadie et al. , 2017 ) or a pre-trained policy ( Ho & Ermon , 2016 ) . Earlier approaches , such as behavioral cloning ( BC ) , relied on training models to mimic expert behavior at various states in the demonstration data ( Pomerleau , 1991 ) . However , these models suffer from the problem that even small differences between the learned policy and the expert behavior can lead to a snow-balling effect , where the state distribution diverges to a place where the behavior of the policy is now meaningless since it was not trained in that part of space ( Ross & Bagnell , 2010 ) . In order to mitigate these issues , DAgger ( Ross et al. , 2011 ) uses an expert policy to provide supervision to the policy being learned . While these strategies differ on the state distribution at which the expert actions are optimized – for example BC uses the state distribution of the expert , DAgger-like approaches use the state distribution from the policy being trained – the supervision signal itself is the expert ’ s behavior at each step . On the other end of the spectrum , approaches rooted in policy iteration , such as Dual Policy Iteration ( Sun et al. , 2018b ) do not mimic next step actions of a policy directly , but instead use planning or search over the policy to choose an action distribution to train towards ( Silver et al. , 2017 ) . However , this can be computationally expensive , and can also end up training the policy on a state distribution that is far from the current policy ’ s induced distribution . In this paper , we propose an algorithm that finds a middle ground by using Monte Carlo Tree Search ( MCTS ) ( Kocsis & Szepesvári , 2006 ) to perform local trajectory improvement over states sampled from different time steps in the trajectory unrolled from the policy . This approach has the benefit of training the policy on a state distribution that is close to that induced by the policy itself , while using local search or planning over a smaller horizon to generate a good policy to train towards . This approach stands in contrast to other works in interactive imitation learning that correct distribution mismatch by using one-step feedback . We provide theoretical justification for the advantage of a balanced local trajectory improvement and show that MCTS can serve as a policy improvement operator under certain conditions1 . 
An added benefit of our effort is computational – depth-parallel MCTS on local trajectory segments is much faster than traditional sequential MCTS at generating demonstrations. We show that our proposed algorithm can easily be parallelized to enable more efficient imitation learning from MCTS. Notably, this level of parallelism is present at varying depths, which differs from existing works on parallel MCTS (Chaslot et al., 2008). In summary, our main contributions are: • A general interactive imitation learning algorithm that balances the expert feedback quality and the state distribution divergence. Our proposed approach provides a flexible local trajectory improvement based on MCTS. • Theoretical analysis of the general benefit of local improvement and a specific case study on using MCTS as a local improvement operator. • Strong empirical performance on a suite of high-dimensional continuous control problems, in terms of both sample efficiency and training time. 2 RELATED WORK. Imitation Learning. Imitation learning (IL) refers to the problem of learning to perform a task from expert demonstrations. Behavioral cloning (Widrow & Smith, 1964) is one popular approach which maximizes the likelihood of expert actions under the agent policy (Pomerleau, 1989; Schaal, 1999; Muller et al., 2006; Mülling et al., 2013; Bojarski et al., 2016; Giusti et al., 2016; Mahler & Goldberg, 2017; Wang et al., 2019; Bansal et al., 2018). Inverse Reinforcement Learning is another popular form of IL, where a reward function is extracted from expert demonstrations and then a policy is trained to maximize that reward (Ziebart et al., 2008; Finn et al., 2016; Fu et al., 2018; Ho & Ermon, 2016). In this work we focus on imitation learning via behavioral cloning. Despite its success in canonical problems such as Go (Silver et al., 2017) and Starcraft (The Alpha Star Team, 2019), behavioral cloning suffers from many challenges, most notably distributional shift (Daumé et al., 2009). To explain distributional shift, let us assume we train an agent by performing supervised learning on the actions an expert has taken. A small error in the first time step of training may bring the agent to a state that the expert has rarely visited and that is therefore not as well modeled. Over time, this error compounds, leading the agent to states far from expert behavior and diverging further from the expert demonstrations. The longer the episode is, the more likely the agent is to deviate from the expert demonstrations. 1We note that other techniques such as Generative Adversarial Imitation Learning (GAIL) have been developed that use example demonstrations in a modified objective which is still trained by interacting with the environment (Ho & Ermon, 2016). These approaches directly operate on the induced state distribution itself; we do not consider these approaches here. Various solutions to the problem of distributional shift have been proposed (Daumé et al., 2009; Ross & Bagnell, 2010; Ross et al., 2011; Ho & Ermon, 2016; Laskey et al., 2017; Bansal et al., 2018; de Haan et al., 2019). Some of these approaches reduce the effects of distributional shift by iteratively querying the expert (Daumé et al., 2009; Ross et al., 2011). DAgger (Ross et al., 2011), one of the most widely used of these solutions, queries the expert on each of the states visited by the policy and uses the expert demonstrations to improve the policy.
At the other extreme, we have Policy Iteration approaches such as Dual Policy Iteration (Sun et al., 2018b), AlphaZero (Silver et al., 2017) and ExIT (Anthony et al., 2017) that do not directly mimic the actions of an expert, but instead plan or search over the policy to choose an action distribution to train towards (Silver et al., 2017). These methods can be computationally expensive, and their long planning horizon can cause the state distribution to diverge far from the current policy's induced distribution. In this paper, we propose a new algorithm, named POLISh, that strikes a middle ground by performing multi-step improvements over states sampled from different time steps in the trajectories generated by the policy. We use MCTS (Kocsis & Szepesvári, 2006) to perform the local trajectory improvements. We show the promise of our approach with both theoretical and empirical results. Monte Carlo Tree Search. We refer the reader to (Browne et al., 2012) for a comprehensive survey on MCTS. To generate the proposed locally optimized trajectories, we follow recent work that explores using MCTS to provide feedback for policy improvement (Guo et al., 2014; Silver et al., 2017; Anthony et al., 2017). MCTS is a best-first tree search method which uses a policy to explore the most promising actions first. Through repeated simulation, MCTS builds a tree whose nodes represent states and whose branches correspond to actions that can be taken from those states. The objective is to maximize total return, so after all simulations have been completed, a final trajectory is generated by traversing the most visited sequence of nodes. 3 BACKGROUND & PRELIMINARIES. Markov Decision Processes. We consider policy learning for Markov Decision Processes (MDPs) represented as a tuple (S, A, P, r, γ, D). Let S denote the state space, A the action space, P(s′|s, a) the (probabilistic) state dynamics, r(s, a) the reward function, γ the discount factor, and D the initial state distribution. Policy Learning. A stochastic policy π maps a state s ∈ S to a distribution over the actions A, denoted by π(s). At each state, an action a is sampled from π(s) with probability π(a|s) and a reward of r(s, a) is received by π. The goal is to learn a policy that maximizes the accumulated discounted rewards J_D(π) = E_{τ∼π}[ Σ_{i=0}^{∞} γ^i r(s_i, a_i) ]. We omit the dependency on the initial state distribution D when the context is clear. A few useful quantities related to a policy are the value function V_π, the state-action value function Q_π and the advantage function A_π, defined as follows: V_π(s) = E_{τ∼π}[ Σ_{i=0}^{∞} γ^i r(s_i, a_i) | s_0 = s ], Q_π(s, a) = E_{τ∼π}[ Σ_{i=0}^{∞} γ^i r(s_i, a_i) | s_0 = s, a_0 = a ], A_π(s, a) = Q_π(s, a) − V_π(s). As the policy takes a sequence of actions, its performance has a strong connection to the state distribution induced by its actions. We define quantities related to the state distributions induced by policies at different time steps. Let d^t_π denote the state distribution obtained by following π for t steps. We use d_π = (1 − γ) Σ_{t=0}^{∞} γ^t d^t_π to denote the accumulated discounted state distribution. With these quantities, we can rewrite J(π) = Σ_{t=0}^{∞} E_{s∼d^t_π, a∼π(s)}[ γ^t r(s, a) ] = E_{s∼d_π, a∼π(s)}[ r(s, a) ].
We will make use of the following relationship between the two policies in our analysis (Kakade & Langford, 2002): J(π′) = J(π) + E_{s∼d_π}[ E_{a∼π′(s)}[ A_π(s, a) ] ]. (1) Throughout the paper, we use π∗ to denote the expert policy.
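To ground the preliminaries above, the following Python sketch shows one outer iteration of a POLISh-style procedure: states are sampled from the current policy's own rollouts, and a local improvement operator (MCTS in the paper) supplies the supervision targets for a behavioral-cloning style update. The simplified environment interface, the local_improve placeholder, and the squared-error imitation loss are assumptions for illustration only; the paper's actual algorithm and losses may differ.

import random
import torch
import torch.nn.functional as F

def polish_iteration(policy, env, local_improve, optimizer,
                     num_rollouts=8, states_per_rollout=4, horizon=1000):
    dataset = []
    for _ in range(num_rollouts):
        s, trajectory = env.reset(), []          # env is assumed to return torch tensors
        for _ in range(horizon):
            a = policy.act(s)                    # unroll the current policy (no-grad helper)
            trajectory.append(s)
            s, done = env.step(a)                # simplified (state, done) interface
            if done:
                break
        # Sample states from different time steps of the policy's own trajectory,
        # so training stays close to the policy's induced state distribution.
        for s_t in random.sample(trajectory, min(states_per_rollout, len(trajectory))):
            a_star = local_improve(env, policy, s_t)   # e.g. short-horizon MCTS from s_t
            dataset.append((s_t, a_star))
    # Supervised (behavioral-cloning style) update toward the locally improved actions.
    states = torch.stack([s for s, _ in dataset])
    targets = torch.stack([a for _, a in dataset])
    optimizer.zero_grad()
    loss = F.mse_loss(policy(states), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

The design intent mirrors the discussion above: a small local planning horizon inside local_improve keeps the supervision stronger than one-step expert feedback while keeping the training state distribution close to the policy's own, in contrast to full-trajectory planning methods.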
This paper proposes POLISH, an imitation learning algorithm that provides a balance between Behavioral Cloning (BC) and DAgger. The algorithm reduces the mismatch between the target policy and an expert policy on states obtained from starting at the target policy's state distribution and following the expert policy for a time segment of t steps. The claim is that a suitable t will keep the training states close to the target policy's state distribution and avoid the compounding errors that arise when the agent drifts away from its training distribution. The paper also explores the possibility of policy optimization by replacing the pre-defined expert policy in POLISH with a policy derived from Monte Carlo Tree Search. Theoretical and empirical analyses in the paper studies the effect of t and MCTS planning in POLISH on policy improvement.
SP:943f1c5c3c9ba6d861df1a89eb9420d1f54d5573
Policy Optimization by Local Improvement through Search
1 INTRODUCTION . Reinforcement learning ( RL ) has seen a great deal of success in recent years , from playing games ( Mnih et al. , 2015 ; Silver et al. , 2016 ) to robotic control ( Gu et al. , 2017 ; Singh et al. , 2019 ) . These successes showcase the power of learning from direct interactions with environments . However , a well-known disadvantage of reinforcement learning approaches is the demand for a large number of samples for learning ; for example , OpenAI Five learned to solve DOTA using Proximal Policy Optimization ( PPO ) ( Schulman et al. , 2017 ) by playing 180 years worth of games against itself daily ( OpenAI , 2018 ) . The issue with sample complexity can be mitigated by learning from expert behavior . For example , AlphaStar ( The Alpha Star Team , 2019 ) uses models that are pre-trained using supervised learning techniques on expert human demonstrations , before RL is used for refinement . A variety of strategies have been developed in imitation learning to learn from expert behavior , where the expert can be a human ( Stadie et al. , 2017 ) or a pre-trained policy ( Ho & Ermon , 2016 ) . Earlier approaches , such as behavioral cloning ( BC ) , relied on training models to mimic expert behavior at various states in the demonstration data ( Pomerleau , 1991 ) . However , these models suffer from the problem that even small differences between the learned policy and the expert behavior can lead to a snow-balling effect , where the state distribution diverges to a place where the behavior of the policy is now meaningless since it was not trained in that part of space ( Ross & Bagnell , 2010 ) . In order to mitigate these issues , DAgger ( Ross et al. , 2011 ) uses an expert policy to provide supervision to the policy being learned . While these strategies differ on the state distribution at which the expert actions are optimized – for example BC uses the state distribution of the expert , DAgger-like approaches use the state distribution from the policy being trained – the supervision signal itself is the expert ’ s behavior at each step . On the other end of the spectrum , approaches rooted in policy iteration , such as Dual Policy Iteration ( Sun et al. , 2018b ) do not mimic next step actions of a policy directly , but instead use planning or search over the policy to choose an action distribution to train towards ( Silver et al. , 2017 ) . However , this can be computationally expensive , and can also end up training the policy on a state distribution that is far from the current policy ’ s induced distribution . In this paper , we propose an algorithm that finds a middle ground by using Monte Carlo Tree Search ( MCTS ) ( Kocsis & Szepesvári , 2006 ) to perform local trajectory improvement over states sampled from different time steps in the trajectory unrolled from the policy . This approach has the benefit of training the policy on a state distribution that is close to that induced by the policy itself , while using local search or planning over a smaller horizon to generate a good policy to train towards . This approach stands in contrast to other works in interactive imitation learning that correct distribution mismatch by using one-step feedback . We provide theoretical justification for the advantage of a balanced local trajectory improvement and show that MCTS can serve as a policy improvement operator under certain conditions1 . 
An added benefit of our approach is computational: depth-parallel MCTS on local trajectory segments is much faster than traditional sequential MCTS at generating demonstrations. We show that our proposed algorithm can easily be parallelized to enable more efficient imitation learning from MCTS. Notably, this level of parallelism is present at varying depths, which differs from existing work on parallel MCTS (Chaslot et al., 2008). In summary, our main contributions are: • A general interactive imitation learning algorithm that balances the expert feedback quality and the state distribution divergence. Our proposed approach provides a flexible local trajectory improvement based on MCTS. • Theoretical analysis of the general benefit of local improvement and a specific case study on using MCTS as a local improvement operator. • Strong empirical performance on a suite of high-dimensional continuous control problems in terms of both sample efficiency and training time. 2 RELATED WORK. Imitation Learning. Imitation learning (IL) refers to the problem of learning to perform a task from expert demonstrations. Behavioral cloning (Widrow & Smith, 1964) is one popular approach which maximizes the likelihood of expert actions under the agent policy (Pomerleau, 1989; Schaal, 1999; Muller et al., 2006; Mülling et al., 2013; Bojarski et al., 2016; Giusti et al., 2016; Mahler & Goldberg, 2017; Wang et al., 2019; Bansal et al., 2018). Inverse Reinforcement Learning is another popular form of IL, where a reward function is extracted from expert demonstrations and a policy is then trained to maximize that reward (Ziebart et al., 2008; Finn et al., 2016; Fu et al., 2018; Ho & Ermon, 2016). In this work we focus on imitation learning via behavioral cloning. Despite its success in canonical problems such as Go (Silver et al., 2017) and Starcraft (The Alpha Star Team, 2019), behavioral cloning suffers from many challenges, most notably distributional shift (Daumé et al., 2009). To explain distributional shift, let us assume we train an agent by performing supervised learning on the actions an expert has taken. A small error in the first time step of training may bring the agent to a state that the expert has rarely visited and that is therefore not as well modeled. Over time, this error compounds, leading the agent to states far from expert behavior and diverging further from the expert demonstrations. The longer the episode is, the more likely the agent is to deviate from the expert demonstrations. (We note that other techniques, such as Generative Adversarial Imitation Learning (GAIL), have been developed that use example demonstrations in a modified objective which is still trained by interacting with the environment (Ho & Ermon, 2016). These approaches operate directly on the induced state distribution itself; we do not consider them here.) Various solutions to the problem of distributional shift have been proposed (Daumé et al., 2009; Ross & Bagnell, 2010; Ross et al., 2011; Ho & Ermon, 2016; Laskey et al., 2017; Bansal et al., 2018; de Haan et al., 2019). Some of these approaches reduce the effects of distributional shift by iteratively querying the expert (Daumé et al., 2009; Ross et al., 2011). DAgger (Ross et al., 2011), one of the most widely used of these solutions, queries the expert on each of the states that are visited by the policy and uses the expert demonstrations to improve the policy.
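For contrast with the local-improvement scheme sketched earlier, a minimal DAgger-style loop looks roughly as follows; the `policy`, `expert`, and Gym-style `env` interfaces are illustrative placeholders, not the original implementation.

```python
def dagger(env, policy, expert, n_iters=10, rollout_len=200):
    """Sketch of a DAgger-style loop: the learner's own rollouts determine
    which states are visited, while the expert provides the action labels."""
    data = []
    for _ in range(n_iters):
        s = env.reset()
        for _ in range(rollout_len):
            a_learner = policy.sample(s)       # the learner's action is executed
            data.append((s, expert.act(s)))    # the expert labels the visited state
            s, _, done, _ = env.step(a_learner)
            if done:
                break
        policy.fit(data)                       # supervised update on the aggregated data
    return policy
```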
At the other extreme, we have Policy Iteration approaches, such as Dual Policy Iteration (Sun et al., 2018b), AlphaZero (Silver et al., 2017), and ExIT (Anthony et al., 2017), that do not directly mimic the actions of an expert, but instead plan or search over the policy to choose an action distribution to train towards (Silver et al., 2017). These methods can be computationally expensive, and their long planning horizon can cause the state distribution to diverge far from the current policy's induced distribution. In this paper, we propose a new algorithm, named POLISh, that strikes a middle ground by performing multi-step improvements over states sampled from different time steps in the trajectories generated by the policy. We use MCTS (Kocsis & Szepesvári, 2006) to perform the local trajectory improvements. We show the promise of our approach with both theoretical and empirical results. Monte Carlo Tree Search. We refer the reader to (Browne et al., 2012) for a comprehensive survey on MCTS. To generate the proposed locally optimized trajectories, we follow recent work that explores using MCTS to provide feedback for policy improvement (Guo et al., 2014; Silver et al., 2017; Anthony et al., 2017). MCTS is a best-first tree search method which uses a policy to explore the most promising actions first. Through repeated simulation, MCTS builds a tree whose nodes represent states and whose branches correspond to actions that can be taken from those states. The objective is to maximize total return, so after all simulations have been completed, a final trajectory is generated by traversing the most visited sequence of nodes. 3 BACKGROUND & PRELIMINARIES. Markov Decision Processes. We consider policy learning for Markov Decision Processes (MDPs) represented as a tuple $(S, A, P, r, \gamma, D)$. Let $S$ denote the state space, $A$ the action space, $P(s'|s,a)$ the (probabilistic) state dynamics, $r(s,a)$ the reward function, $\gamma$ the discount factor, and $D$ the initial state distribution. Policy Learning. A stochastic policy $\pi$ maps a state $s \in S$ to a distribution over the actions $A$, denoted by $\pi(s)$. At each state, an action $a$ is sampled from $\pi(s)$ with probability $\pi(a|s)$ and a reward of $r(s,a)$ is received by $\pi$. The goal is to learn a policy that maximizes the accumulated discounted rewards $J_D(\pi) = \mathbb{E}_{\tau\sim\pi}\left[\sum_{i=0}^{\infty}\gamma^i r(s_i, a_i)\right]$. We omit the dependency on the initial state distribution $D$ when the context is clear. A few useful quantities related to a policy are the value function $V_\pi$, the state-action value function $Q_\pi$, and the advantage function $A_\pi$, defined as follows: $V_\pi(s) = \mathbb{E}_{\tau\sim\pi}\left[\sum_{i=0}^{\infty}\gamma^i r(s_i, a_i) \,\middle|\, s_0 = s\right]$, $Q_\pi(s,a) = \mathbb{E}_{\tau\sim\pi}\left[\sum_{i=0}^{\infty}\gamma^i r(s_i, a_i) \,\middle|\, s_0 = s, a_0 = a\right]$, $A_\pi(s,a) = Q_\pi(s,a) - V_\pi(s)$. As the policy takes a sequence of actions, its performance has a strong connection to the state distribution induced by its actions. We define quantities related to the state distributions induced by policies at different time steps. Let $d^t_\pi$ denote the state distribution obtained by following $\pi$ for $t$ steps. We use $d_\pi = (1-\gamma)\sum_{t=0}^{\infty}\gamma^t d^t_\pi$ to denote the accumulated discounted state distribution. With these quantities, we can rewrite $J(\pi) = \sum_{t=0}^{\infty}\mathbb{E}_{s\sim d^t_\pi,\, a\sim\pi(s)}\left[\gamma^t r(s,a)\right] = \mathbb{E}_{s\sim d_\pi,\, a\sim\pi(s)}\left[r(s,a)\right]$.
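As an illustration of the quantities just defined, here is a small, self-contained sketch of how discounted returns and advantage estimates are typically computed from sampled trajectories; the one-step temporal-difference residual used below is a common practical estimator of $A_\pi$ given a learned value function, not the exact definition above.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Return of one trajectory: sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def td_advantages(rewards, values, gamma):
    """One-step advantage estimates A(s_t, a_t) ~ r_t + gamma * V(s_{t+1}) - V(s_t)
    for a single trajectory, given value predictions `values` of length T + 1."""
    rewards, values = np.asarray(rewards, float), np.asarray(values, float)
    return rewards + gamma * values[1:] - values[:-1]

# Tiny example: a 3-step trajectory with gamma = 0.9.
print(discounted_return([1.0, 0.0, 1.0], gamma=0.9))                      # 1 + 0 + 0.81 = 1.81
print(td_advantages([1.0, 0.0, 1.0], [0.5, 0.4, 0.3, 0.0], gamma=0.9))    # [0.86, -0.13, 0.7]
```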
We will make use of the following relationship between two policies $\pi$ and $\pi'$ in our analysis (Kakade & Langford, 2002): $J(\pi') = J(\pi) + \mathbb{E}_{s\sim d_\pi}\big[\mathbb{E}_{a\sim\pi'(s)}\left[A_\pi(s,a)\right]\big]$ (1). Throughout the paper, we use $\pi^*$ to denote the expert policy.
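A hedged sketch of how relation (1) might be used in practice: estimate the correction term by sampling states from the discounted state distribution of $\pi$ and averaging the advantage under actions drawn from $\pi'$. The `new_policy.sample` and `advantage_fn` interfaces below are illustrative placeholders, not part of the paper.

```python
import numpy as np

def policy_gap_estimate(states_from_pi, new_policy, advantage_fn, n_action_samples=8):
    """Monte Carlo estimate of E_{s ~ d_pi}[ E_{a ~ pi'(s)}[ A_pi(s, a) ] ], the
    correction term in Eq. (1).  `states_from_pi` should be drawn from the
    discounted state distribution of pi, and `advantage_fn(s, a)` should
    return an estimate of A_pi(s, a)."""
    per_state = []
    for s in states_from_pi:
        actions = [new_policy.sample(s) for _ in range(n_action_samples)]
        per_state.append(np.mean([advantage_fn(s, a) for a in actions]))
    return float(np.mean(per_state))
```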
This paper proposes POLISH, a reinforcement learning algorithm based on imitating partial trajectories produced by an MCTS procedure. The intuition behind this idea is that behavioral cloning suffers from distribution shift over time, and using MCTS allows imitation learning to be done on states closer to the policy's state distribution, which the authors justify using techniques similar to DAgger. The authors evaluate this method on continuous OpenAI Gym tasks, and show that it consistently beats a PPO baseline.
SP:943f1c5c3c9ba6d861df1a89eb9420d1f54d5573
A Graph Neural Network Assisted Monte Carlo Tree Search Approach to Traveling Salesman Problem
We present a graph neural network assisted Monte Carlo Tree Search approach for the classical traveling salesman problem (TSP). We adopt a greedy algorithm framework to construct a solution to TSP by adding nodes successively. A graph neural network (GNN) is trained to capture the local and global graph structure and to give the prior probability of selecting each vertex at each step. The prior probability provides a heuristic for MCTS, and the MCTS output is an improved probability for selecting the successive vertex, obtained by fusing the prior with the feedback information from the scouting procedure. Experimental results on TSP with up to 100 nodes demonstrate that the proposed method obtains shorter tours than other learning-based methods. 1 INTRODUCTION. The Traveling Salesman Problem (TSP) is a classical combinatorial optimization problem with many practical applications in real life, such as planning, manufacturing, and genetics (Applegate et al., 2006b). The goal of TSP is to find the shortest route that visits each city once and ends in the origin city; it is well known to be an NP-hard problem (Papadimitriou, 1977). In the literature, approximation algorithms were proposed to solve TSP (Lawler et al., 1986; Goodrich & Tamassia, 2015). In particular, many heuristic search algorithms were designed to find a satisfactory solution within a reasonable time. However, the performance of heuristic algorithms depends on handcrafted heuristics to guide the search procedure towards competitive tours efficiently, and the design of heuristics usually requires substantial expertise about the problem (Johnson & McGeoch, 1997; Dorigo & Gambardella, 1997). Recent advances in deep learning provide a powerful way of learning effective representations from data, leading to breakthroughs in many fields such as speech recognition (Lecun et al., 2015). Efforts to tackle TSP with deep learning have been made under both the supervised learning and reinforcement learning frameworks. Vinyals et al. (Vinyals et al., 2015) introduced a pointer network based on the Recurrent Neural Network (RNN) to model a stochastic policy that assigns high probabilities to short tours given an input set of coordinates of vertices. Dai et al. (Dai et al., 2017) tackled the difficulty of designing heuristics with a Deep Q-Network (DQN) based on structure2vec (Dai et al., 2016b), and a TSP solution was constructed incrementally by the learned greedy policy. Most recently, Kool et al. (Kool et al., 2019) used a Transformer-Pointer Network (Vaswani et al., 2017) to learn heuristics efficiently and got close to the optimal TSP solution for up to 100 vertices. These efforts made it possible to solve TSP with an end-to-end heuristic algorithm without special expert skills and complicated feature design. In this paper, we present a new approach to solving TSP. Our approach combines a deep neural network with Monte Carlo Tree Search (MCTS), thus taking advantage of both powerful feature representation and scouting exploration. A graph neural network (GNN) is trained to capture the local and global graph structure and to predict, for each vertex, the prior probability of whether this vertex should be added to the partial tour. Besides node features, we integrate edge information into each update layer in order to extract features efficiently for a problem whose solution relies on the edge weights.
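The edge-aware update just described could be realized in many ways; the following PyTorch sketch shows one plausible message-passing layer in which neighbor features are weighted by an embedding of the edge weights. This is an assumption-laden illustration, not the paper's actual architecture, and all layer names and sizes are made up.

```python
import torch
import torch.nn as nn

class EdgeAwareGNNLayer(nn.Module):
    """One update layer that aggregates neighbor features weighted by an
    embedding of the edge weights (an illustrative sketch only)."""
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(1, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.node_lin = nn.Linear(dim, dim)
        self.out_lin = nn.Linear(2 * dim, dim)

    def forward(self, h, w):
        # h: (n, dim) node features; w: (n, n) pairwise edge weights (e.g., distances).
        e = self.edge_mlp(w.unsqueeze(-1))                         # (n, n, dim) edge embeddings
        msgs = (e * self.node_lin(h).unsqueeze(0)).mean(dim=1)     # aggregate messages per node
        return torch.relu(self.out_lin(torch.cat([h, msgs], dim=-1)))
```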
Similar to the learned heuristic approaches above, we could greedily select the vertex with the largest prior probability, yet the algorithm may fall into a local optimum because it has only one shot to construct the tour and never goes back to reverse a decision. To overcome this problem, we introduce a graph neural network assisted Monte Carlo Tree Search (GNN-MCTS) that makes the decisions more reliable through a large number of scouting simulations. The trained GNN is used to guide the MCTS procedure, which effectively reduces the complexity of the search space, and MCTS in turn provides a more reliable policy that avoids getting stuck in a local optimum. Experimental results on TSP with up to 100 vertices demonstrate that the proposed method obtains shorter tours than other learning-based methods. The remainder of the paper is organized as follows: after reviewing related work in Section 2, we briefly give a preliminary introduction to TSP in Section 3. Our approach is formulated in Section 4. Experimental results are given in Section 5, followed by the conclusion in Section 6. 2 RELATED WORK. The TSP is a well-studied combinatorial optimization problem, and many learning-based algorithms have been proposed. In 1985, Hopfield et al. proposed a neural network to solve the TSP (Hopfield & Tank, 1985). This was the first time that researchers attempted to use neural networks to solve combinatorial optimization problems. Following the impressive results produced by this approach, many researchers have made efforts to improve its performance (Bout & Miller, 1988; Brandt et al., 1988). Many shallow network architectures were also proposed to solve combinatorial optimization problems (Favata & Walker, 1991; Fort, 1988; Angniol et al., 1988; Kohonen, 1982). In recent years, deep neural networks have been adopted to solve the TSP, and many works have achieved remarkable results. We summarize the existing learning-based methods from the following aspects. ENCODER AND DECODER. Vinyals et al. (Vinyals et al., 2015) proposed a neural architecture called Pointer Net (Ptr-Net) to learn the conditional probability of a tour using a neural attention mechanism. Instead of using attention to blend hidden units of an encoder into a context vector, they used attention as pointers to the input vertices. The parameters of the model are learned by maximizing the conditional probabilities of the training examples in a supervised way. At test time, they used a beam search procedure to find the best possible tour. Two flaws exist in the method. First, Ptr-Net can only be applied to problems of a small scale (n ≤ 50). Second, the beam search procedure might generate invalid routes. Bello et al. (Bello et al., 2017) proposed a framework to tackle TSP using neural networks and reinforcement learning. Similar to Vinyals et al., they employed the Ptr-Net approach as a policy model to learn a stochastic policy over tours. Furthermore, they masked the visited vertices to avoid deriving invalid routes and added a glimpse which aggregates different parts of the input sequence to improve performance. Instead of training the model in a supervised way, they introduced an Actor-Critic algorithm to learn the parameters of the model and empirically demonstrated that the generalization is better compared to optimizing a supervised mapping of labeled data. The algorithm significantly outperformed the supervised learning approach (Vinyals et al., 2015) on instances with up to 100 vertices.
Kool et al. (Kool et al., 2019) introduced an efficient model and training method for TSP and other routing problems. Compared to (Bello et al., 2017), they removed the influence of the input order of the vertices by replacing recurrence (LSTMs) with attention layers. The model can include valuable information about the vertices through a multi-head attention mechanism, which plays an important role in settings where decisions relate directly to the vertices in a graph. Similar to (Bello et al., 2017), they applied a reinforcement learning method to train the model. Instead of learning a value function as a baseline, they introduced a greedy rollout policy to generate the baseline and empirically showed that the greedy rollout baseline can improve the quality and convergence speed of the approach. They improved the state-of-the-art performance on instances with 20, 50, and 100 vertices. Independently of the work of Kool et al., Deudon et al. (Deudon et al., 2018) also proposed a framework which uses attention layers and a reinforcement learning algorithm (Actor-Critic) to learn a stochastic policy. They combined the machine learning methods with an existing heuristic algorithm, i.e., 2-opt, to enhance the performance of the framework. GRAPH EMBEDDING. Dai et al. (Dai et al., 2017) proposed a framework, which combines reinforcement learning with a graph embedding neural network, to construct solutions incrementally for TSP and other combinatorial optimization problems. Instead of using a separate encoder and decoder, they introduced a graph embedding network based on structure2vec (Dai et al., 2016a) to capture the current state of the solution and the structure of the graph. Furthermore, they used Q-learning, parameterized by the graph embedding network, to learn a greedy policy that outputs which vertex to insert into the partial tour. They adopt the farthest strategy (Rosenkrantz et al., 2013) to find the best insertion position in the partial tour. Nowak et al. (Nowak et al., 2017) propose a supervised approach that directly outputs a tour as an adjacency matrix based on a Graph Neural Network and then converts the matrix into a feasible solution by beam search. The authors only report an optimality gap of 2.7% for n = 20, slightly worse than the auto-regressive data-driven model (Vinyals et al., 2015). The performance of the above-mentioned methods suffered due to the greedy policy, which selects the vertex according to the largest prior probability or value. In this paper, we introduce a new Monte Carlo Tree Search-based algorithm to overcome this problem. 3 PRELIMINARIES. TRAVELING SALESMAN PROBLEM. Let G(V, E, w) denote a weighted graph, where V is the set of vertices, E the set of edges, and w : E → R+ the edge weight function, i.e., w(u, v) is the weight of edge (u, v) ∈ E. We use S = {v1, v2, ..., vi} to represent an ordered tour sequence that starts with v1 and ends with vi, and S̄ = V \ S the set of candidate vertices for addition, conditioned on S. The target of TSP is to find a tour sequence with the lowest cost, i.e., $c(G, S) = \sum_{i=1}^{|S|-1} w(S(i), S(i+1)) + w(S(|S|), S(1))$ when $|S| = |V|$. 4 PROPOSED APPROACH. For a graph, our goal is to construct a tour by adding vertices successively. A natural approach is to train a deep neural network of some form to decide which vertex to add to the partial tour at each step.
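For reference, the tour cost $c(G, S)$ defined in Section 3 translates directly into a few lines of Python; the sketch below assumes a 0-indexed tour over a full weight matrix, which is a notational choice rather than anything prescribed by the paper.

```python
def tour_cost(w, tour):
    """c(G, S) for a complete tour: consecutive edge weights plus the closing
    edge back to the start (w is an |V| x |V| weight matrix, 0-indexed)."""
    cost = sum(w[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
    return cost + w[tour[-1]][tour[0]]

# Tiny symmetric example with three vertices.
w = [[0, 2, 9], [2, 0, 6], [9, 6, 0]]
print(tour_cost(w, [0, 1, 2]))   # 2 + 6 + 9 = 17
```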
Concretely, a neural network f would take the graph G and the partial tour sequence S as input, and the output f(G|S) would be a prior probability that indicates how likely each vertex is to be selected. Intuitively, we can use the prior probability in a greedy way, i.e., selecting the vertex with the largest probability, to generate the tour sequence incrementally. However, deriving tours in this way might fall into a local optimum, because the algorithm has only one shot to construct the tour and never goes back to reverse a decision. To overcome this problem, we enhance the policy decisions with MCTS assisted by the deep neural network. We begin in Section 4.1 by introducing how to transform TSP into a Markov Decision Process (MDP). Then, in Section 4.2, we describe the GNN architecture for parameterizing f(G|S). Finally, Section 4.3 describes GNN-MCTS for combinatorial optimization problems, especially the TSP. The overall approach is illustrated in Figure 1.
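The greedy decoding baseline discussed above can be sketched as follows; `prior_fn` is a hypothetical stand-in for the trained GNN f(G|S), and the one-shot, never-revised selection it performs is precisely what GNN-MCTS is designed to improve.

```python
import numpy as np

def greedy_tour(n_vertices, prior_fn, start=0):
    """Build a tour by repeatedly adding the candidate vertex with the largest
    prior probability.  `prior_fn(partial_tour, candidates)` stands in for the
    trained GNN f(G|S) and returns one score per candidate vertex."""
    tour = [start]
    candidates = set(range(n_vertices)) - {start}
    while candidates:
        cand = sorted(candidates)
        probs = prior_fn(tour, cand)
        nxt = cand[int(np.argmax(probs))]   # one-shot decision, never revised
        tour.append(nxt)
        candidates.remove(nxt)
    return tour
```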
In this paper, the authors introduce a new Monte Carlo Tree Search-based (MCTS) algorithm for computing approximate solutions to the Traveling Salesman Problem (TSP). Since the TSP is NP-hard, a learned heuristic is used to guide the search process. For this learned heuristic, the authors propose a Graph Neural Network-derived approach, in which an additional term is added to the network definition that explicitly adds the metric distance between neighboring nodes during each iteration. They perform favorably compared to other TSP approaches, demonstrating improved performance on relatively small TSP problems and performing quite well on larger problems that are out of reach for other deep learning strategies.
SP:83c5fa9ad4b7e0de17e75d4575316e84ad21a5b5
The paper proposes learning a TSP solver that incrementally constructs a tour by adding one city at a time to it using a graph neural network and MCTS. The problem is posed as a reinforcement learning problem, and the graph neural network parameters are trained to minimize the tour length on a training set of TSP instances. A graph neural network architecture called Static Edge Graph Neural Networks is introduced which takes into account the graph of all cities in a given problem instance as well as the partial tour constructed so far in an episode. The network predicts probabilities for the remaining cities to be selected as the next city in the tour, which is then used to compute a value function that guides MCTS. Results on synthetic TSP instances with 20, 50, and 100 cities show that the approach is able to achieve better objective values than prior learning-based approaches. Applying AlphaZero-like approaches to TSP is an interesting test case for understanding how well they can work on hard optimization problems.
SP:83c5fa9ad4b7e0de17e75d4575316e84ad21a5b5
Learning Generative Image Object Manipulations from Language Instructions
The use of adequate feature representations is essential for achieving high performance in high-level human cognitive tasks in computational modeling. Recent developments in deep convolutional and recurrent neural network architectures enable learning powerful feature representations from both images and natural language text. In addition, other types of networks, such as Relational Networks (RN), can learn relations between objects, and Generative Adversarial Networks (GAN) have been shown to generate realistic images. In this paper, we combine these four techniques to acquire a shared feature representation of the relation between the objects in an input image and an object manipulation action description given as a human language encoding, and use it to generate an image that shows the resulting end effect the action would have on a computer-generated scene. The system is trained and evaluated on a simulated dataset and experimentally used on real-world photos. 1 INTRODUCTION. For many human-robot interaction scenarios, a long-term vision has been to develop systems where humans can simply instruct robots using natural language commands. This aim encompasses several sub-challenges in robotics and artificial intelligence, including natural language understanding, symbol grounding, and task and motion planning, to name only a few. However, a first step towards solving these challenging tasks is to study how to best combine the data representations from different domains, which is still a topic of research. This paper studies whether a perceptual system can, via simulation, train a computational model to predict the outcome of actions. The aim is to generate an output image that shows the effect of a certain action on objects when the instruction to change an object is given in natural language. Figure 1 shows a simplified visualization of the main idea, where the input image contains several objects of different shape, size, and color, and the input instruction is to manipulate one of the objects in some way (move, remove, add, replace). The output of the model is a synthetically generated image that shows the effect that the action had on the scene. To successfully solve the task of depicting the effect of a certain action, the model must further address a number of different sub-challenges, including: 1) image encoding, 2) language learning, 3) relational learning, and 4) image generation. The key requirement for implementing these human cognitive processes in computational modeling is the data representation for each of the different required domains and how to combine and use their shared representations. There are several works in the literature that combine some of the aforementioned sub-challenges to solve problems such as image captioning You et al. (2016), image editing Chen et al. (2018), image generation from text descriptions Reed et al. (2016a), visual question answering Santoro et al. (2017); Yang et al. (2016), 3D object reconstruction conditioned on a 2D image Girdhar et al. (2016); Weber et al. (2018), paired robot action and linguistic translation Yamada et al. (2018), and Vision-and-Language Navigation (VLN) Anderson et al. (2018). However, the challenge of how to combine all four sub-challenges and learn their shared representations still requires more research.
In this work, we propose a system that combines an image encoder, language encoder, relational network, and image decoder, and train it in a GAN setting that conditions on both a source image and the action text description to generate a target image of the scene after the action has been performed. The system is implemented in PyTorch, trained on a synthetic data-set, and evaluated on synthetic and real-world images. The dataset and source code can be downloaded from [link to appear]. 2 RELATED WORK. The task of generating a target image given a source image and a text description has been studied in previous work. In work on Language-Based object detection and Segmentation (LBS) Hu et al. (2016b;a), the goal was to identify the regions of the image that correspond to the visual entities described in the natural language description. In the publication on Language-Based Image Editing (LBIE) Chen et al. (2018), the source image was edited according to the text description, where recurrent attentive models are used to fuse image and language features for the subtasks of image segmentation and colorization. The work by Shinagawa et al. (2017) used language instructions (move, expand, or compress) to manipulate 2D images of digits from the MNIST data-set and avatar images to generate the resulting target output image. Our approach builds on the idea of LBIE by extending the input to multiple objects in a 3D space for an object manipulation task (instead of an image editing task). This approach, however, also requires learning the relations between the different objects. Generative Adversarial Networks (GAN) Goodfellow et al. (2014) have recently been used extensively to generate realistic images; for example, the works presented by Radford et al. (2015); Dosovitskiy et al. (2015); Yang et al. (2015). In addition, the work by Isola et al. (2017) uses a conditional GAN for pre-defined image-to-image translations. The use of GANs has further been extended to condition on text descriptions to generate images with specific properties Reed et al. (2016a); Nam et al. (2018) or for photo manipulation Wang et al. (2018); Bau et al. (2019). As proposed by Zhang et al. (2017), a refinement process that rectifies defects in the first stage, resulting in more realistic synthesized images, can further be achieved by stacking GANs. There are, in addition, works that condition the GAN on additional information (besides the text description) to generate images. The work by Reed et al. (2016b) gives control over where the object, or object parts, should be located by providing additional bounding boxes. The work by Dash et al. (2017) also conditioned on the class information to diversify the generated samples. The work by El-Nouby et al. (2018) uses a recurrent GAN to generate iterative output images based on the language instruction. Moreover, several works integrate a recurrent neural network into the image generation. For example, a recurrent variational autoencoder with an alignment model over words and attention mechanisms has been used to generate images from text captions Mansimov et al. (2016), while a GAN conditioned on an LSTM was, similarly, used for image sequence generation for each word, as presented by Ouyang et al. (2018).
While related works generate output images from text descriptions, they differ from our work in several ways. Firstly, our method conditions on both the input image and the text description for generating the target image. Secondly, we integrate a relational network in the shared representation for learning relations between objects in a 3D simulated space. Thirdly, we perform image object manipulation instead of image editing, and there is only one correct solution for each pair of source image and text description, which is typically not the case for other works using GANs. 3 DATA AND METHOD. 3.1 DATA GENERATION. The data consist of a generated synthetic data-set of objects that are randomly placed on a table; each example contains the before-state image, the after-state image, and an action sentence, as exemplified in Figure 2. The actions are chosen from a set of 4 possible actions: 2 non-relational {remove, replace}, and 2 relational {move, add}. A summary of the structure of the actions can be seen in Table 1. The non-relational actions choose one object and either remove it or replace it with another object with a specified shape, color, and size. These instructions are non-relational since they do not require learning the relation between the manipulated object and the referential object in the scene. The synthesized images are of size 128 × 128 × 3, and each image contains 2 to 6 objects, each of which is either a box, sphere, or pyramid, colored red, green, or blue, and of size small or big. To avoid ambiguities, no two objects in the same scene have the same shape, color, and size. For the training data, 1000 input images were generated with 10 action sentences for each image. The validation and test data each consisted of 200 images with 10 actions per image. 3.2 OVERVIEW OF METHOD. The proposed method, called CAE+LSTM+RN, can be seen in Figure 3 and consists of an image encoder, language encoder, relational network, and an image decoder. 3.2.1 IMAGE ENCODER. The image encoder follows a structure similar to DCGAN Radford et al. (2015), with 4 convolutional layers, each with stride 2, padding 1, and filter size 4, and with 64, 128, 256, and 512 filters, respectively. Each convolutional layer is followed by batch normalization and ReLU activation. The input images are of size 128 × 128 × 3, and the output of the image encoder is of size 8 × 8 × 512. 3.2.2 LANGUAGE ENCODER. The last output of a 2-layer bi-directional LSTM with 24 hidden units was used for the sentence embedding, g(s). A word embedding with dimension 30 was learned from a vocabulary of size 17, consisting of the following words: "move, remove, replace, add, small, big, pyramid, sphere, cube, red, green, blue, front, behind, left, right, and top". Each sentence was also initially parsed and pre-processed such that articles and prepositions (e.g., words such as "in, on, of, the") could be removed at an early stage. The sentences were presented in reverse word order. 3.2.3 RELATIONAL NETWORK. A Relational Network (RN) Santoro et al. (2017) is used both to learn relations between object pairs and to merge the image and language representations.
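A PyTorch sketch consistent with the dimensions stated above (four stride-2 convolutions with 64/128/256/512 filters on 128×128×3 inputs, and a 2-layer bidirectional LSTM with 24 hidden units over 30-dimensional word embeddings) might look as follows; this is an illustration of the stated shapes, not the authors' released code, and the class names are made up.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in [64, 128, 256, 512]:                 # 128 -> 64 -> 32 -> 16 -> 8
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
                       nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.net = nn.Sequential(*layers)

    def forward(self, x):                                  # x: (B, 3, 128, 128)
        return self.net(x)                                 # (B, 512, 8, 8)

class LanguageEncoder(nn.Module):
    def __init__(self, vocab_size=17, emb_dim=30, hidden=24):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, tokens):                             # tokens: (B, T) word indices
        out, _ = self.lstm(self.emb(tokens))
        return out[:, -1, :]                               # last output, g(s): (B, 48)
```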
The output of the image encoder, f(X), is of size 8 × 8 × 512, where each filter response at position i and j, f(X)(i, j, :), is considered one object, resulting in a total of 8 × 8 = 64 objects. An object-pair vector is made by concatenating two object vectors with their spatial coordinates and with the sentence representation, $[\,f(X)(i,j,:)^\top,\, i,\, j,\, f(X)(k,l,:)^\top,\, k,\, l,\, g(s)\,]$, resulting in 64 × 64 = 4096 object pairs. Each object-pair vector is fed through a three-layer fully-connected network with 256 hidden units and ReLU activations. The outputs of the last layer are summed over all object pairs and reshaped to size 8 × 8 × n_RN, where n_RN is set to 32.
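One way to realize the relational module with the shapes stated above is sketched below; making the final layer output 8·8·32 units (so that the summed pair responses can be reshaped to 8 × 8 × 32) is an assumption on our part, and the exact layer sizes in the paper may differ.

```python
import torch
import torch.nn as nn

class RelationalModule(nn.Module):
    def __init__(self, obj_dim=512, sent_dim=48, hidden=256, n_out=32, grid=8):
        super().__init__()
        self.grid, self.n_out = grid, n_out
        pair_dim = 2 * (obj_dim + 2) + sent_dim            # two objects + (i, j) coords + sentence
        self.g = nn.Sequential(nn.Linear(pair_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, grid * grid * n_out))

    def forward(self, feat, sent):
        # feat: (B, 512, 8, 8) image features; sent: (B, sent_dim) sentence embedding.
        B, C, H, W = feat.shape
        ys, xs = torch.meshgrid(torch.arange(H, device=feat.device),
                                torch.arange(W, device=feat.device), indexing="ij")
        coords = torch.stack([ys, xs], dim=-1).float().view(1, H * W, 2).expand(B, -1, -1)
        objs = torch.cat([feat.flatten(2).transpose(1, 2), coords], dim=-1)   # (B, 64, 514)
        o_i = objs.unsqueeze(2).expand(-1, -1, H * W, -1)                     # (B, 64, 64, 514)
        o_j = objs.unsqueeze(1).expand(-1, H * W, -1, -1)
        s = sent.view(B, 1, 1, -1).expand(-1, H * W, H * W, -1)
        pairs = torch.cat([o_i, o_j, s], dim=-1)                              # all 64 x 64 pairs
        out = self.g(pairs).sum(dim=(1, 2))                                   # sum over pairs
        return out.view(B, self.n_out, self.grid, self.grid)                  # (B, 32, 8, 8)
```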
This paper proposes a model that takes an image and a sentence as input, where the sentence is an instruction to manipulate objects in the scene, and outputs another image which shows the scene after manipulation. The model is an integration of CNN, RNN, Relation Nets, and GAN. The results are mostly on synthetic data, though the authors also included some results on real images toward the end.
SP:68afc80f1983d8ae90181ca41c8132c09d78983d
1. The paper aims to train a model to move objects in an image using language. For instance, an image with a red cube and a blue ball needs to be turned into an image with a red cube and a red ball if asked to "replace the blue ball with a red ball". The task itself is interesting as it aims to modify system behavior through language.
SP:68afc80f1983d8ae90181ca41c8132c09d78983d
Kernelized Wasserstein Natural Gradient
1 INTRODUCTION. The success of machine learning algorithms relies on the quality of an underlying optimization method. Many of the current state-of-the-art methods rely on variants of Stochastic Gradient Descent (SGD) such as AdaGrad (Duchi et al., 2011), RMSProp (Hinton et al., 2012), and Adam (Kingma and Ba, 2014). While generally effective, the performance of such methods remains sensitive to the curvature of the optimization objective. When the Hessian matrix of the objective at the optimum has a large condition number, the problem is said to have a pathological curvature (Martens, 2010; Sutskever et al., 2013). In this case, first-order optimization methods tend to have poor performance. Using adaptive step sizes can help when the principal directions of curvature are aligned with the coordinates of the parameter vector. Otherwise, an additional rotation of the basis is needed to achieve this alignment. One strategy is to find an alternative parametrization of the same model that has a better-behaved curvature and is thus easier to optimize with standard first-order optimization methods. Designing good network architectures (Simonyan and Zisserman, 2014; He et al., 2015) along with normalization techniques (LeCun et al., 2012; Ioffe and Szegedy, 2015; Salimans and Kingma, 2016) is often critical for the success of such optimization methods. The natural gradient method (Amari, 1998) takes a related but different perspective. Rather than reparametrizing the model, the natural gradient method tries to make the optimizer itself invariant to reparametrizations by directly operating on the manifold of probability distributions. This requires endowing the parameter space with a suitable notion of proximity formalized by a metric. An important metric in this context is the Fisher information metric (Fisher and Russell, 1922; Rao, 1992), which induces the Fisher-Rao natural gradient (Amari, 1985). Another important metric in probability space is the Wasserstein metric (Villani, 2009; Otto, 2001), which induces the Wasserstein natural gradient (Li and Montufar, 2018a;b; Li, 2018); see similar formulations in Gaussian families (Malagò et al., 2018; Modin, 2017). In spite of their numerous theoretical advantages, applying natural gradient methods is challenging in practice. Indeed, each parameter update requires inverting the metric tensor. This becomes infeasible for current deep learning models, which typically have millions of parameters. This has motivated research into finding efficient algorithms to estimate the natural gradient (Martens and Grosse, 2015; Grosse and Martens, 2016; George et al., 2018; Heskes, 2000; Bernacchia et al., 2018). Such algorithms often address the case of the Fisher metric and either exploit a particular structure of the parametric family or rely on a low-rank decomposition of the information matrix. Recently, Li et al. (2019) proposed to estimate the metric based on a dual formulation and used this estimate in a proximal method. While this avoids explicitly computing the natural gradient, the proximal method also introduces an additional optimization problem to be solved at each update of the model's parameters. The quality of the solver will thus depend on the accuracy of this additional optimization.
In this paper , we use the dual formulation of the metric to directly obtain a closed form expression of the natural gradient as a solution to a convex functional optimization problem . We focus on the Wasserstein metric as it has the advantage of being well defined even when the model doesn ’ t admit a density . The expression remains valid for general metrics including the Fisher-Rao metric . We leverage recent work on Kernel methods ( Sriperumbudur et al. , 2017 ; Arbel and Gretton , 2017 ; Sutherland et al. , 2017 ; Mroueh et al. , 2019 ) to compute an estimate of the natural gradient by restricting the functional space appearing in the dual formulation to a Reproducing Kernel Hilbert Space . We demonstrate empirically the accuracy of our estimator on toy examples , and show how it can be effectively used to approximate the trajectory of the natural gradient descent algorithm . We also analyze the effect of the dimensionality of the model on the accuracy of the proposed estimator . Finally , we illustrate the benefits of our proposed estimator for solving classification problems when the model has an ill-conditioned parametrization . The paper is organized as follows . In Section 2 , after a brief description of natural gradients , we discuss Legendre duality of metrics , and provide details on the Wasserstein natural gradient . In Section 3 , we present our kernel estimator of the natural gradient . In Section 4 we present experiments to evaluate the accuracy of the proposed estimator and demonstrate its effectiveness in supervised learning tasks . 2 NATURAL GRADIENT DESCENT . We first briefly recall the natural gradient descent method in Section 2.1 , and its relation to metrics on probability distribution spaces in Section 2.2 . We next present Legendre dual formulations for metrics in Section 2.3 where we highlight the Fisher-Rao and Wasserstein metrics as important examples . 2.1 GENERAL FORMULATION . It is often possible to formulate learning problems as the minimization of some cost functional ρ 7→F ( ρ ) over probability distributions ρ from a parametric model PΘ . The set PΘ contains probability distributions defined on an open sample space Ω⊂Rd and parametrized by some vector θ∈Θ , where Θ is an open subset of Rq . The learning problem can thus be formalized as finding an optimal value θ∗ that locally minimizes a loss function L ( θ ) : =F ( ρθ ) defined over the parameter space Θ . One convenient way to solve this problem approximately is by gradient descent , which uses the Euclidean gradient of L w.r.t . the parameter vector θ to produce a sequence of updates θt according to the following rule : θt+1 =θt−γt∇L ( θt ) . Here the step-size γt is a positive real number . The Euclidean gradient can be viewed as the direction in parameter space that leads to the highest decrease of some linear modelMt of the cost function L per unit of change of the parameter . More precisely , the Euclidean gradient is obtained as the solution of the optimization problem : ∇L ( θt ) =−argmin u∈Rq Mt ( u ) + 1 2 ‖u‖2 . ( 1 ) The linear modelMt is an approximation of the cost function L in the neighborhood of θt and is simply obtained by a first order expansion : Mt ( u ) =L ( θt ) +∇L ( θt ) > u . The quadratic term ‖u‖2 penalizes the change in the parameter and ensures that the solution remains in the neighborhood where the linear model is still a good approximation of the cost function . 
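To make the general formulation above concrete, the following is a minimal numpy sketch (our own illustration, not code from the paper) of the plain Euclidean update θt+1 = θt − γt∇L ( θt ) on an ill-conditioned quadratic toy loss; the loss, step size, and function names are our assumptions.

import numpy as np

def euclidean_step(theta, grad_L, gamma):
    # u* = -grad L(theta) minimizes M_t(u) + 0.5 * ||u||^2, so the update adds gamma * u*.
    return theta - gamma * grad_L(theta)

# Toy ill-conditioned quadratic loss L(theta) = 0.5 * theta^T A theta (our choice, not the paper's).
A = np.diag([1.0, 100.0])
grad_L = lambda th: A @ th

theta = np.array([1.0, 1.0])
for _ in range(50):
    theta = euclidean_step(theta, grad_L, gamma=0.009)
print(theta)  # the low-curvature coordinate is still far from 0: slow convergence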
This particular choice of quadratic term is what defines the Euclidean gradient descent algorithm , which can often be efficiently implemented for neural network models using back-propagation . The performance of this algorithm is highly dependent on the parametrization of the model PΘ , however ( Martens , 2010 ; Sutskever et al. , 2013 ) . To obtain an algorithm that is robust to parametrization , one can take advantage of the structure of the cost function L ( θ ) which is obtained as the composition of the functional F and the model θ 7→ρθ and define a generalized natural gradient ( Amari and Cichocki , 2010 ) . We first provide a conceptual description of the general approach to obtain such a gradient . The starting point is to choose a divergence D between probability distributions and use it as a new penalization term : −argmin u∈Rq Mt ( u ) + 1 2 D ( ρθt , ρθt+u ) . ( 2 ) Here , changes in the model are penalized directly in probability space rather than parameter space as in ( 1 ) . In the limit of small u , the penalization term can be replaced by a quadratic term u > GD ( θ ) u where GD ( θ ) contains second order information about the model as measured by D . This leads to the following expression for the generalized natural gradient ∇DL ( θt ) where the dependence on D is made explicit : ∇DL ( θt ) : =−argmin u∈Rq Mt ( u ) + 1 2 u > GD ( θt ) u . ( 3 ) From ( 3 ) , it is possible to express the generalized natural gradient by means of the Euclidean gradient : ∇DL ( θt ) =GD ( θt ) −1∇L ( θt ) . The parameter updates are then obtained by the new update rule : θt+1 =θt−γtGD ( θt ) −1∇L ( θt ) . ( 4 ) Equation ( 4 ) leads to a descent algorithm which is invariant to parametrization in the continuous-time limit : Proposition 1 . Let Ψ be an invertible and smoothly differentiable re-parametrization ψ = Ψ ( θ ) and denote by L̄ ( ψ ) : =L ( Ψ−1 ( ψ ) ) . Consider the continuous-time natural gradient flows : θ̇s=−∇Dθ L ( θs ) , ψ̇s=−∇Dψ L̄ ( ψs ) , ψ0 =Ψ ( θ0 ) . Then ψs and θs are related by the equation ψs=Ψ ( θs ) at all times s≥0 . This result implies that an ill-conditioned parametrization of the model has little effect on the optimization when ( 4 ) is used . It is a consequence of the transformation properties of the natural gradient under a change of parametrization : ∇Dψ L̄ ( ψ ) = ∇θΨ ( θ ) ∇Dθ L ( θ ) , which holds in general for any covariant gradient . We provide a proof of Proposition 1 in Appendix C.1 in the particular case when D is either the Kullback-Leibler divergence F , or the squared Wasserstein-2 distance W , using notions introduced later in Section 2.3 , and refer to Ollivier et al . ( 2011 ) for a detailed discussion . The approach based on ( 2 ) for defining the generalized natural gradient is purely conceptual and can be formalized using the notion of a metric tensor from differential geometry , which allows for more generality . In Section 2.2 , we provide such a formal definition in the case when D is either the Kullback-Leibler divergence F , or the squared Wasserstein-2 distance W . 2.2 INFORMATION MATRIX VIA DIFFERENTIAL GEOMETRY . When D is the Kullback-Leibler divergence or relative entropy F , then ( 3 ) defines the Fisher-Rao natural gradient ∇FL ( θ ) ( Amari , 1985 ) and GF ( θ ) is called the Fisher information matrix . GF ( θ ) is well defined when the probability distributions in PΘ all have positive densities , and when some additional differentiability and integrability assumptions on ρθ are satisfied .
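A correspondingly hedged sketch of the preconditioned update in equation ( 4 ) above is given below; the metric here is a placeholder supplied by the user, not the paper's kernel estimator. In the toy case where it happens to coincide with the Hessian, one natural step removes the ill-conditioning.

import numpy as np

def natural_gradient_step(theta, grad_L, metric, gamma):
    G = metric(theta)                               # q x q information matrix G_D(theta)
    nat_grad = np.linalg.solve(G, grad_L(theta))    # G^{-1} grad L without forming the inverse
    return theta - gamma * nat_grad

# Same ill-conditioned quadratic as before; the placeholder metric equals the Hessian here,
# which is only a toy stand-in for G_D.
A = np.diag([1.0, 100.0])
grad_L = lambda th: A @ th
metric = lambda th: A

theta = np.array([1.0, 1.0])
theta = natural_gradient_step(theta, grad_L, metric, gamma=1.0)
print(theta)  # [0. 0.]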
In fact , it has an interpretation in Riemannian geometry as the pull-back of a metric tensor gF defined over the set of probability distributions with positive densities and known as the Fisher-Rao metric ( see Definition 4 in Appendix B.1 ; see also Holbrook et al . 2017 ) : Definition 1 ( Fisher information matrix ) . Assume θ 7→ ρθ ( x ) is differentiable for all x on Ω and that∫ ‖∇ρθ ( x ) ‖2 ρθ ( x ) dx < ∞ . Then the Fisher information matrix is defined as the pull-back of the Fisher-Rao metric gF : GF ( θ ) ij=g F ρθ ( ∂iρθ , ∂jρθ ) : = ∫ fi ( x ) fj ( x ) ρθ ( x ) dx , where the functions fi on Ω are given by : fi= ∂iρθρθ . Definition 1 directly introducesGF using the Fisher-Rao metric tensor which captures the infinitesimal behavior of the KL . This approach can be extended to any metric tensor g defined on a suitable space of probability distributions containing PΘ . In particular , when D is the Wasserstein-2 , the Wasserstein information matrix is obtained directly by means of the Wasserstein-2 metric tensor gW ( Otto and Villani , 2000 ; Lafferty and Wasserman , 2008 ) as proposed in Li and Montufar ( 2018a ) ; Chen and Li ( 2018 ) : Definition 2 ( Wasserstein information matrix ) . The Wasserstein information matrix ( WIM ) is defined as the pull-back of the Wasserstein 2 metric gW : GW ( θ ) ij=g W ρθ ( ∂iρθ , ∂jρθ ) : = ∫ φi ( x ) > φj ( x ) dρθ ( x ) , where φi are vector valued functions on Ω that are solutions to the partial differential equations with Neumann boundary condition : ∂iρθ=−div ( ρθφi ) , ∀1≤i≤q . Moreover , φi are required to be in the closure of the set of gradients of smooth and compactly supported functions inL2 ( ρθ ) d. In particular , when ρθ has a density , φi=∇xfi , for some real valued function fi on Ω . The partial derivatives ∂iρθ should be understood in distribution sense , as discussed in more detail in Section 2.3 . This allows to define the Wasserstein natural gradient even when the model ρθ does not admit a density . Moreover , it allows for more generality than the conceptual approach based on ( 2 ) which would require performing a first order expansion of the Wasserstein distance in terms of its linearized version known as the Negative Sobolev distance . We provide more discussion of those two approaches and their differences in Appendix B.3 . From now on , we will focus on the above two cases of the natural gradient∇DL ( θ ) , namely∇FL ( θ ) and∇WL ( θ ) . When the dimension of the parameter space is high , directly using equation ( 4 ) becomes impractical as it requires storing and inverting the matrix G ( θ ) . In Section 2.3 we will see how equation ( 3 ) can be exploited along with Legendre duality to get an expression for the natural gradient that can be efficiently approximated using kernel methods .
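Reading the flattened fraction in Definition 1 as fi = ∂iρθ / ρθ , i.e. , the score function ∂i log ρθ , the Fisher information matrix can be estimated by Monte Carlo for a simple family. The sketch below is our own illustration with a 1-D Gaussian and analytic scores; it recovers the known closed form diag ( 1/σ² , 2/σ² ).

import numpy as np

def fisher_information_gaussian(mu, sigma, n_samples=200_000, seed=0):
    # Monte Carlo estimate of E_{x ~ rho_theta}[ f(x) f(x)^T ] with f = grad_theta log rho_theta.
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=n_samples)
    score_mu = (x - mu) / sigma**2                       # d/d mu of log rho_theta(x)
    score_sigma = ((x - mu)**2 - sigma**2) / sigma**3    # d/d sigma of log rho_theta(x)
    scores = np.stack([score_mu, score_sigma], axis=1)
    return scores.T @ scores / n_samples

print(fisher_information_gaussian(0.0, 2.0))  # approximately [[0.25, 0], [0, 0.5]]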
The authors propose an approximation of the natural gradient under the Wasserstein metric when optimizing a cost function over a parametric family of probability distributions. The authors leverage the dual formulation and restrict the feasible space to an RKHS. The authors show a trade-off between accuracy and computational cost with theoretical guarantees for the proposed method, and empirically verify it for classification tasks.
SP:2565120a71aa1a9ff3677a993bb3bbd8c2271273
Natural gradient has been proven effective in many statistical learning algorithms. A well-known difficulty in using natural gradient is that it is tedious to compute the Fisher information matrix (if one is using the Fisher-Rao metric) or the Wasserstein information matrix (if one is using the Wasserstein metric). It is important to be able to estimate the natural gradient in a practical way, and there have been a few papers looking at this problem, but mostly for the case with a Fisher-Rao metric. This paper takes a different and general approach to approximating the natural gradient by leveraging the dual formulation of the metric, restricted to a Reproducing Kernel Hilbert Space. Some theoretical guarantees for the proposed method are established, together with an experimental study.
SP:2565120a71aa1a9ff3677a993bb3bbd8c2271273
ASYNCHRONOUS MULTI-AGENT GENERATIVE ADVERSARIAL IMITATION LEARNING
1 INTRODUCTION . Imitation learning ( IL ) also known as learning from demonstrations allows agents to imitate expert demonstrations to make optimal decisions without direct interactions with the environment . Especially , inverse reinforcement learning ( IRL ) ( Ng et al . ( 2000 ) ) recovers a reward function of an expert from collected demonstrations , where it assumes that the demonstrator follows an ( near- ) optimal policy that maximizes the underlying reward . However , IRL is an ill-posed problem , because a number of reward functions match the demonstrated data ( Ziebart et al . ( 2008 ; 2010 ) ; Ho & Ermon ( 2016 ) ; Boularias et al . ( 2011 ) ) , where various principles , including maximum entropy , maximum causal entropy , and relative entropy principles , are employed to solve this ambiguity ( Ziebart et al . ( 2008 ; 2010 ) ; Boularias et al . ( 2011 ) ; Ho & Ermon ( 2016 ) ; Zhang et al . ( 2019 ) ) . Going beyond imitation learning with single agents discussed above , recent works including Song et al . ( 2018 ) , Yu et al . ( 2019 ) , have investigated a more general and challenging scenario with demonstration data from multiple interacting agents . Such interactions are modeled by extending Markov decision processes on individual agents to multi-agent Markov games ( MGs ) ( Littman & Szepesvári ( 1996 ) ) . However , these works only work for synchronous MGs , with all agents making simultaneous decisions in each turn , and do not work for general MGs , allowing agents to make asynchronous decisions in different turns , which is common in many real world scenarios . For example , in multiplayer games ( Knutsson et al . ( 2004 ) ) , such as Go game , and many card games , players take turns to play , thus influence each other ’ s decision . The order in which agents make decisions has a significant impact on the game equilibrium . In this paper , we propose a novel framework , asynchronous multi-agent generative adversarial imitation learning ( AMAGAIL ) : A group of experts provide demonstration data when playing a Markov game ( MG ) with an asynchronous decision-making process , and AMAGAIL inversely learns each expert ’ s decision-making policy . We introduce a player function governed by the environment to capture the participation order and dependency of agents when making decisions . The participation order could be deterministic ( i.e. , agents take turns to act ) or stochastic ( i.e. , agents need to take actions by chance ) . A player function of an agent is a probability function : given the perfectly known agent participation history , i.e. , at each previous round in the history , we know which agent ( s ) participated , it provides the probability of the agent participating in the next round . With the general MG model , our framework generalizes MAGAIL ( Song et al . ( 2018 ) ) from the synchronous Markov games to ( asynchronous ) Markov games , and the learned expert policies are proven to guarantee subgame perfect equilibrium ( SPE ) ( Fudenberg & Levine ( 1983 ) ) , a stronger equilibrium than the Nash equilibrium ( NE ) ( guaranteed in MAGAIL Song et al . ( 2018 ) ) . The experiment results demonstrate that compared to GAIL ( Ho & Ermon ( 2016 ) ) and MAGAIL ( Song et al . ( 2018 ) ) , our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios . 2 PRELIMINARIES . 2.1 MARKOV GAMES . 
Markov games ( MGs ) ( Littman ( 1994 ) ) model the case of N interacting agents , with each agent making a sequence of decisions using strategies that depend only on the current state . A Markov game is denoted as a tuple ( N , S , A , Y , ζ , P , η , r , γ ) with a set of states S and N sets of actions { Ai } Ni=1 . At each time step t with a state st ∈ S , if the indicator variable Ii , t = 1 , agent i is allowed to take an action ; otherwise , Ii , t = 0 and agent i does not take an action . As a result , the participation vector It = [ I1 , t , · · · , IN , t ] indicates active vs inactive agents at step t. The set of all possible participation vectors is denoted as I , namely , It ∈ I . Moreover , ht−1 = [ I0 , · · · , It−1 ] represents the participation history from step 0 to t−1 . The player function Y ( governed by the environment ) describes the probability of agent i being allowed to take an action at step t , given the participation history ht−1 , namely , Y ( i|ht−1 ) . ζ defines the participation probability of an agent at the initial time step , ζ : [ N ] 7→ [ 0 , 1 ] . Note that the player function can be naturally extended to a higher-order form when the condition includes both previous participation history and previous state-action history ; thus , it can be adapted to non-Markov processes . The initial states are determined by a distribution η : S 7→ [ 0 , 1 ] . Let φ denote no participation , as determined by the player function Y ; the transition to the next state follows a transition function P : S×A1∪ { φ } ×· · ·×AN ∪ { φ } 7→ P ( S ) . Agent i obtains a ( bounded ) reward given by a function ri : S×Ai 7→ R ( because of the asynchronous setting , the rewards only depend on agents ’ own actions ) . Agent i aims to maximize its own total expected return Ri = ∑∞ t=0 γ tri , t , where γ ∈ [ 0 , 1 ] is the discount factor . Actions are chosen through a stationary and stochastic policy πi : S ×Ai 7→ [ 0 , 1 ] . In this paper , bold variables without subscript i denote the concatenation of variables for all the agents , e.g. , all actions as a , the joint policy defined as π ( a|s ) = ∏N i=1 πi ( ai|s ) , r as all rewards . Subscript −i denotes all agents except for i ; then ( ai , a−i ) represents the actions of all N agents ( a1 , · · · , aN ) . We use expectation with respect to a policy π to denote an expectation with respect to the trajectories it generates . For example , Eπ , Y [ ri ( s , ai ) ] , Est , a∼π , It∼Y [ ∑∞ t=0 γ tri ( st , ai ) ] , denotes the following sample process as s0 ∼ η , I0 ∼ ζ , It ∼ Y , a ∼ π ( ·|st ) , st+1 ∼ P ( st+1|st , a ) , for all i ∈ [ N ] . Clearly , when the player function Y ( i|ht−1 ) = 1 for all agents i at any time step t , a general Markov game boils down to a synchronous Markov game ( Littman ( 1994 ) ; Song et al . ( 2018 ) ) , where all agents take actions at all steps . To distinguish our work from MAGAIL and be consistent with the literature ( Chatterjee et al . ( 2004 ) ; Hansen et al . ( 2013 ) ) , we refer to the game setting discussed in MAGAIL as synchronous Markov games ( SMGs ) , and to that of our work as Markov games ( MGs ) . 2.2 SUBGAME PERFECT EQUILIBRIUM FOR MARKOV GAMES . In synchronous Markov games ( SMGs ) , all agents make simultaneous decisions at any time step t , each with the goal of maximizing its own total expected return . Thus , agents ’ optimal policies are interrelated and mutually influenced .
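As a small illustration of the player function Y ( i|ht−1 ) and the participation vectors It defined above, the sketch below implements a deterministic round-robin rule; the function names and the turn-taking rule are our own assumptions, not the paper's environments.

import numpy as np

def turn_taking_player_function(history, n_agents):
    # Y(i | h_{t-1}): a deterministic round-robin rule; agent (t mod N) acts at step t.
    t = len(history)
    probs = np.zeros(n_agents)
    probs[t % n_agents] = 1.0
    return probs

def sample_participation(history, n_agents, rng):
    probs = turn_taking_player_function(history, n_agents)
    return (rng.random(n_agents) < probs).astype(int)   # participation vector I_t

rng = np.random.default_rng(0)
history = []
for t in range(5):
    history.append(sample_participation(history, n_agents=3, rng=rng))
print(np.array(history))  # agents take turns: rows [1,0,0], [0,1,0], [0,0,1], [1,0,0], [0,1,0]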
Nash equilibrium ( NE ) has been employed as a solution concept to resolve the dependency across agents , where no agent can achieve a higher expected reward by unilaterally changing its own policy ( Song et al . ( 2018 ) ) . However , in Markov games ( MGs ) allowing asynchronous decisions , there exist situations where agents encounter states ( subgames ) resulting from other agents ’ “ trembling-hand ” actions . Since NE does not consider these “ trembling-hand ” states and subgames , agents trapped in such situations are not able to make optimal decisions based on their policies under NE . To address this problem , Selten first proposed subgame perfect equilibrium ( SPE ) ( Selten ( 1965 ) ) . SPE ensures NE for every possible subgame of the original game . ( Note that Markov games defined in MAGAIL ( Song et al . ( 2018 ) ) are in fact synchronous Markov games , with all agents simultaneously making decisions in each turn ; we follow the rich literature ( Chatterjee et al . ( 2004 ) ; Hansen et al . ( 2013 ) ) to define Markov games , which allow both synchronous and asynchronous decision-making processes . ) It has been shown that in a finite or infinite extensive-form game with either discrete or continuous time , best-response strategies all converge to SPE , rather than NE ( Selten ( 1965 ) ; Abramsky & Winschel ( 2017 ) ; Xu ( 2016 ) ) . 2.3 MULTI-AGENT IMITATION LEARNING IN SYNCHRONOUS MARKOV GAMES . In synchronous Markov games , MAGAIL ( Song et al . ( 2018 ) ) was proposed to learn experts ’ policies constrained by Nash equilibrium . Since there may exist multiple Nash equilibrium solutions , a maximum causal entropy regularizer is employed to resolve the ambiguity . Thus , the optimal policies can be found by solving the following multi-agent reinforcement learning problem : MARL ( r ) = arg max π N∑ i=1 ( βHi ( πi ) + Eπi , πE−i [ ri ] ) , ( 1 ) where Hi ( πi ) is the γ-discounted causal entropy of policy πi ∈ Π , Hi ( πi ) , Eπi [ − log πi ( ai|s ) ] = Est , ai∼πi [ − ∑∞ t=0 γ t log πi ( ai|st ) ] , and β is a weight on the entropy regularization term . In practice , the reward function is unknown . MAGAIL applies multi-agent IRL ( MAIRL ) below to recover experts ’ reward functions , with ψ as a convex regularizer , MAIRLψ ( πE ) = arg max r −ψ ( r ) + N∑ i=1 ( EπE [ ri ] ) − ( max π N∑ i=1 ( βHi ( πi ) ) + Eπi , πE−i [ ri ] ) . ( 2 ) Moreover , MAGAIL solves MARL ◦ MAIRLψ ( πE ) to inversely learn each expert ’ s policy via applying generative adversarial imitation learning ( Ho & Ermon ( 2016 ) ) to each expert i ∈ [ N ] : min θ max w Eπθ [ N∑ i=1 logDwi ( s , ai ) ] + EπE [ N∑ i=1 log ( 1−Dwi ( s , ai ) ) ] . ( 3 ) Dwi is a discriminator for agent i that classifies the experts ’ vs policy trajectories . πθ represents the learned experts ’ parameterized policies , which generate trajectories that maximize the scores from Dwi for i ∈ [ N ] .
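The per-agent discriminator objective in equation ( 3 ) can be sketched as follows; this is our own toy illustration with a logistic discriminator on fixed features, following the sign convention of ( 3 ) in which Dwi is pushed towards 1 on policy-generated ( s , ai ) pairs and towards 0 on expert pairs.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_loss(w_i, policy_feats, expert_feats):
    # Negative of E_pi[log D(s, a_i)] + E_piE[log(1 - D(s, a_i))], minimized over w_i.
    d_pi = sigmoid(policy_feats @ w_i)
    d_exp = sigmoid(expert_feats @ w_i)
    return -(np.log(d_pi + 1e-8).mean() + np.log(1.0 - d_exp + 1e-8).mean())

rng = np.random.default_rng(0)
policy_feats = rng.normal(0.0, 1.0, size=(256, 4))   # toy features of policy-generated (s, a_i)
expert_feats = rng.normal(1.0, 1.0, size=(256, 4))   # toy features of expert (s, a_i)
w_i = np.zeros(4)
print(discriminator_loss(w_i, policy_feats, expert_feats))  # 2 * log 2 at initialization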
In this work, a multi-agent imitation learning algorithm for extensive Markov Games is proposed. Compared to Markov Games (MGs), extensive Markov Games (eMGs) introduce indicator variables, which indicate whether agents participate in the game at a specific time step or not, and a player function, which is a probability distribution over the indicator variables given histories and is assumed to be governed by the environment, not by the agents. Such a model allows us to consider asynchronous participation of agents, whereas MGs only consider synchronous participation, which is assumed in existing multi-agent imitation learning algorithms such as MA-GAIL and MA-AIRL.
SP:e00ee9485f054034892f307e0ab2f8df1e2d8701
The submission extends MARL◦MAIRL to the extensive Markov game case, where decisions are made asynchronously. As a result, a stronger equilibrium, SPE, becomes the target of the proposed method. To this end, the submission takes advantage of previous game-theory results to formulate the problem and transform the model into a MAGAIL-style form. The empirical performance of the proposed method is demonstrated using experiments.
SP:e00ee9485f054034892f307e0ab2f8df1e2d8701
Measuring and Improving the Use of Graph Information in Graph Neural Networks
1 INTRODUCTION . Graphs are powerful data structures that allow us to easily express various relationships ( i.e. , edges ) between objects ( i.e. , nodes ) . In recent years , extensive studies have been conducted on GNNs for tasks such as node classification and link prediction . GNNs utilize the relationship information in graph data and significant improvements over traditional methods have been achieved on benchmark datasets ( Kipf & Welling , 2017 ; Hamilton et al. , 2017 ; Velickovic et al. , 2018 ; Xu et al. , 2019 ; Hou et al. , 2019 ) . Such breakthrough results have led to the exploration of using GNNs and their variants in different areas such as computer vision ( Satorras & Estrach , 2018 ; Marino et al. , 2017 ) , natural language processing ( Peng et al. , 2018 ; Yao et al. , 2019 ) , chemistry ( Duvenaud et al. , 2015 ) , biology ( Fout et al. , 2017 ) , and social networks ( Wang et al. , 2018 ) . Thus , understanding why GNNs can outperform traditional methods that are designed for Euclidean data is important . Such understanding can help us analyze the performance of existing GNN models and develop new GNN models for different types of graphs . In this paper , we make two main contributions : ( 1 ) two graph smoothness metrics to help understand the use of graph information in GNNs , and ( 2 ) a new GNN model that improves the use of graph information using the smoothness values . We elaborate the two contributions as follows . One main reason why GNNs outperform existing Euclidean-based methods is that rich information from the neighborhood of an object can be captured . GNNs collect neighborhood information with aggregators ( Zhou et al. , 2018 ) , such as the mean aggregator that takes the mean value of neighbors ’ feature vectors ( Hamilton et al. , 2017 ) , the sum aggregator that applies summation ( Duvenaud et al. , 2015 ) , and the attention aggregator that takes the weighted sum value ( Velickovic et al. , 2018 ) . Then , the aggregated vector and a node ’ s own feature vector are combined into a new feature vector . After some rounds , the feature vectors of nodes can be used for tasks such as node classification . Thus , the performance improvement brought by graph data is highly related to the quantity and quality of the neighborhood information . To this end , we propose two smoothness metrics on node features and labels to measure the quantity and quality of neighborhood information of nodes . The metrics are used to analyze the performance of existing GNNs on different types of graphs . In practice , not all neighbors of a node contain relevant information w.r.t . a specific task . Thus , the neighborhood provides both positive information and negative disturbance for a given task . Simply aggregating the feature vectors of neighbors with manually-picked aggregators ( i.e. , users choose a type of aggregator for different graphs and tasks by trial or by experience ) often cannot achieve optimal performance . To address this problem , we propose a new model , CS-GNN , which uses the smoothness metrics to selectively aggregate neighborhood information to amplify useful information and reduce negative disturbance . Our experiments validate the effectiveness of our two smoothness metrics and the performance improvements obtained by CS-GNN over existing methods . 2 MEASURING THE USEFULNESS OF NEIGHBORHOOD INFORMATION .
We first introduce a general GNN framework and three representative GNN models , which show how existing GNNs aggregate neighborhood information . Then we propose two smoothness metrics to measure the quantity and quality of the information that nodes obtain from their neighbors . 2.1 GNN FRAMEWORK AND MODELS . The notations used in this paper , together with their descriptions , are listed in Appendix A . We use G = { V , E } to denote a graph , where V and E represent the set of nodes and edges of G. We use ev , v′ ∈ E to denote the edge that connects nodes v and v′ , and Nv = { v′ : ev , v′ ∈ E } to denote the set of neighbors of a node v ∈ V . Each node v ∈ V has a feature vector xv ∈ X with dimension d. Consider a node classification task , for each node v ∈ V with a class label yv , the goal is to learn a representation vector hv and a mapping function f ( · ) to predict the class label yv of node v , i.e. , ŷv = f ( hv ) where ŷv is the predicted label . GNNs are inspired by the Weisfeiler-Lehman test ( Weisfeiler & Lehman , 1968 ; Shervashidze et al. , 2011 ) , which is an effective method for graph isomorphism . Similarly , GNNs utilize a neighborhood aggregation scheme to learn a representation vector hv for each node v , and then use neural networks to learn a mapping function f ( · ) . Formally , consider the general GNN framework ( Hamilton et al. , 2017 ; Zhou et al. , 2018 ; Xu et al. , 2019 ) in Table 1 with K rounds of neighbor aggregation . In each round , only the features of 1-hop neighbors are aggregated , and the framework consists of two functions , AGGREGATE and COMBINE . We initialize h ( 0 ) v = xv . After K rounds of aggregation , each node v ∈ V obtains its representation vector h ( K ) v . We use h ( K ) v and a mapping function f ( · ) , e.g. , a fully connected layer , to obtain the final results for a specific task such as node classification . Many GNN models have been proposed . We introduce three representative ones : Graph Convolutional Networks ( GCN ) ( Kipf & Welling , 2017 ) , GraphSAGE ( Hamilton et al. , 2017 ) , and Graph Attention Networks ( GAT ) ( Velickovic et al. , 2018 ) . GCN merges the combination and aggregation functions , as shown in Table 1 , where A ( · ) represents the activation function and W is a learnable parameter matrix . Different from GCN , GraphSAGE uses concatenation ‘ || ’ as the combination function , which can better preserve a node ’ s own information . Different aggregators ( e.g. , mean , max pooling ) are provided in GraphSAGE . However , GraphSAGE requires users to choose an aggregator to use for different graphs and tasks , which may lead to sub-optimal performance . GAT addresses this problem by an attention mechanism that learns coefficients of neighbors for aggregation . With the learned coefficients a ( k−1 ) i , j on all the edges ( including self-loops ) , GAT aggregates neighbors with a weighted sum aggregator . The attention mechanism can learn coefficients of neighbors in different graphs and achieves significant improvements over prior GNN models . 2.2 GRAPH SMOOTHNESS METRICS . GNNs usually contain an aggregation step to collect neighboring information and a combination step that merges this information with node features . We consider the context cv of node v as the node ’ s own information , which is initialized as the feature vector xv of v. We use sv to denote the surrounding of v , which represents the aggregated feature vector computed from v ’ s neighbors . 
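A minimal sketch of one AGGREGATE/COMBINE round from this framework is given below, using a mean aggregator and a GraphSAGE-style concatenation; the dimensions, weights, activation, and toy graph are our own choices and do not reproduce the exact GCN, GraphSAGE, or GAT formulations.

import numpy as np

def gnn_round(H, neighbors, W):
    # H: (n, d) node features; neighbors: list of neighbor index lists; W: (2d, d_out) weights.
    n, d = H.shape
    H_new = np.zeros((n, W.shape[1]))
    for v in range(n):
        agg = H[neighbors[v]].mean(axis=0) if neighbors[v] else np.zeros(d)  # AGGREGATE (mean)
        combined = np.concatenate([H[v], agg])                               # COMBINE (concat)
        H_new[v] = np.maximum(combined @ W, 0.0)                             # ReLU activation
    return H_new

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
neighbors = [[1, 2], [0], [0, 1]]
W = 0.5 * np.ones((4, 2))
print(gnn_round(H, neighbors, W))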
Since the neighborhood aggregation can be seen as a convolution operation on a graph ( Defferrard et al. , 2016 ) , we generalize the aggregator as weight linear combination , which can be used to express most existing aggregators . Then , we can re-formulate the general GNN framework as a context-surrounding framework with two mapping functions f1 ( · ) and f2 ( · ) in round k as : c ( k ) vi = f1 ( c ( k−1 ) vi , s ( k−1 ) vi ) , s ( k−1 ) vi = f2 ( ∑ vj∈Nvi a ( k−1 ) i , j · c ( k−1 ) vj ) . ( 1 ) From equation ( 1 ) , the key difference between GNNs and traditional neural-network-based methods for Euclidean data is that GNNs can integrate extra information from the surrounding of a node into its context . In graph signal processing ( Ortega et al. , 2018 ) , features on nodes are regarded as signals and it is common to assume that observations contain both noises and true signals in a standard signal processing problem ( Rabiner & Gold , 1975 ) . Thus , we can decompose a context vector into two parts as c ( k ) vi = c̆ ( k ) vi + n̆ ( k ) vi , where c̆ ( k ) vi is the true signal and n̆ ( k ) vi is the noise . Theorem 1 . Assume that the noise n̆ ( k ) vi follows the same distribution for all nodes . If the noise power of n̆ ( k ) vi is defined by its variance σ2 , then the noise power of the surrounding input∑ vj∈Nvi a ( k−1 ) i , j · c ( k−1 ) vj is ∑ vj∈Nvi ( a ( k−1 ) i , j ) 2 · σ2 . The proof can be found in Appendix B. Theorem 1 shows that the surrounding input has less noise power than the context when a proper aggregator ( i.e. , coefficient a ( k−1 ) i , j ) is used . Specifically , the mean aggregator has the best denoising performance and the pooling aggregator ( e.g. , max-pooling ) can not reduce the noise power . For the sum aggregator , where all coefficients are equal to 1 , the noise power of the surrounding input is larger than that of the context . 2.2.1 FEATURE SMOOTHNESS . We first analyze the information gain from the surrounding without considering the noise . In the extreme case when the context is the same as the surrounding input , the surrounding input contributes no extra information to the context . To quantify the information obtained from the surrounding , we present the following definition based on information theory . Definition 2 ( Information Gain from Surrounding ) . For normalized feature space Xk = [ 0 , 1 ] dk , if∑ vj∈Nvi a ( k ) i , j = 1 , the feature space of ∑ vj∈Nvi a ( k ) i , j · c̆ ( k ) vj is also in Xk = [ 0 , 1 ] dk . The probability density function ( PDF ) of c̆ ( k ) vj over Xk is defined as C ( k ) , which is the ground truth and can be estimated by nonparametric methods with a set of samples , where each sample point c̆ ( k ) vi is sampled with probability |Nvi |/2|E| . Correspondingly , the PDF of ∑ vj∈Nvi a ( k ) i , j · c̆ ( k ) vj is S ( k ) , which can be estimated with a set of samples { ∑ vj∈Nvi a ( k ) i , j · c̆ ( k ) vj } , where each point is sampled with probability |Nvi |/2|E| . The information gain from the surrounding in round k can be computed by Kullback–Leibler divergence ( Kullback & Leibler , 1951 ) as DKL ( S ( k ) ||C ( k ) ) = ∫ Xk S ( k ) ( x ) · log S ( k ) ( x ) C ( k ) ( x ) dx . The Kullback–Leibler divergence is a measure of information loss when the context distribution is used to approximate the surrounding distribution ( Kurt , 2017 ) . Thus , we can use the divergence to measure the information gain from the surrounding into the context of a node . 
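Theorem 1 above can be checked numerically: for i.i.d. noise of variance σ² on the context vectors, the aggregated surrounding has noise power ∑j a²ij σ², so the mean aggregator reduces it while the sum aggregator amplifies it. The snippet below is our own sanity check with arbitrary toy values.

import numpy as np

rng = np.random.default_rng(0)
sigma, n_neighbors, n_trials = 1.0, 10, 200_000
noise = rng.normal(0.0, sigma, size=(n_trials, n_neighbors))   # i.i.d. noise on neighbor contexts

mean_agg = noise.mean(axis=1)   # coefficients a_ij = 1/10, noise power sigma^2 / 10
sum_agg = noise.sum(axis=1)     # coefficients a_ij = 1,    noise power 10 * sigma^2

print(mean_agg.var())   # close to 0.1
print(sum_agg.var())    # close to 10.0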
When all the context vectors are equal to their surrounding inputs , the distribution of the context is totally the same with that of the surrounding . In this case , the divergence is equal to 0 , which means that there is no extra information that the context can obtain from the surrounding . On the other hand , if the context and the surrounding of a node have different distributions , the divergence value is strictly positive . Note that in practice , the ground-truth distributions of the context and surrounding signals are unknown . In addition , for learnable aggregators , e.g. , the attention aggregator , the coefficients are unknown . Thus , we propose a metric λf to estimate the divergence . Graph smoothness ( Zhou & Schölkopf , 2004 ) is an effective measure of the signal frequency in graph signal processing ( Rabiner & Gold , 1975 ) . Inspired by that , we define the feature smoothness on a graph . Definition 3 ( Feature Smoothness ) . Consider the condition of the first round , where c ( 0 ) v = xv , we define the feature smoothness λf over normalized space X = [ 0 , 1 ] d as λf = ∣∣∣∣∣∣∑v∈V ( ∑v′∈Nv ( xv − xv′ ) ) 2∣∣∣∣∣∣1 |E| · d , where || · ||1 is the Manhattan norm . According to Definition 3 , a larger λf indicates that the feature signal of a graph has higher frequency , meaning that the feature vectors xv and xv′ are more likely dissimilar for two connected nodes v and v′ in the graph . In other words , nodes with dissimilar features tend to be connected . Intuitively , for a graph whose feature sets have high frequency , the context of a node can obtain more information gain from its surrounding . This is because the PDFs ( given in Definition 2 ) of the context and the surrounding have the same probability but fall in different places in space X . Formally , we state the relation between λf and the information gain from the surrounding in the following theorem . For simplicity , we let X = X0 , d = d0 , C = C ( 0 ) and S = S ( 0 ) . Theorem 4 . For a graph G with the set of features X in space [ 0 , 1 ] d and using the mean aggregator , the information gain from the surrounding DKL ( S||C ) is positively correlated to its feature smoothness λf , i.e. , DKL ( S||C ) ∼ λf . In particular , DKL ( S||C ) = 0 when λf = 0 . The proof can be found in Appendix C. According to Theorem 4 , a large λf means that a GNN model can obtain much information from graph data . Note that DKL ( S||C ) here is under the condition when using the mean aggregator . Others aggregators , e.g. , pooling and weight could have different DKL ( S||C ) values , even if the feature smoothness λf is a constant .
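The feature smoothness λf of Definition 3 can be computed directly from an adjacency list and normalized features, as in the sketch below; we read the square in the definition as elementwise, and the toy graph and function name are our own assumptions.

import numpy as np

def feature_smoothness(X, neighbors):
    # X: (n, d) normalized features in [0, 1]; neighbors: list of neighbor index lists.
    n, d = X.shape
    num_edges = sum(len(nbrs) for nbrs in neighbors) / 2       # undirected edge count |E|
    total = np.zeros(d)
    for v in range(n):
        diff_sum = (X[v] - X[neighbors[v]]).sum(axis=0)        # sum over v' in N_v of (x_v - x_v')
        total += diff_sum ** 2                                  # elementwise square
    return total.sum() / (num_edges * d)                        # Manhattan norm / (|E| * d)

X = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
neighbors = [[1, 2], [0, 2], [0, 1]]
print(feature_smoothness(X, neighbors))  # larger values indicate a higher-frequency feature signal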
The authors study how neighbor information on graphs can be used in Graph Neural Networks. The paper proposes measures of whether the data in neighboring nodes are useful in terms of labels or features. It also provides a new Graph Neural Network algorithm, a modification of attention-based models that incorporates the derived label and feature smoothness measures. The paper demonstrates the usefulness of these measures and the algorithm against several baselines from different families. The writing is mostly smooth, and the authors seem to provide enough detail about the experiments performed.
SP:ff4879a21fee38d85c20afbb9c7fcac541ee3714
Measuring and Improving the Use of Graph Information in Graph Neural Networks
1 INTRODUCTION . Graphs are powerful data structures that allow us to easily express various relationships ( i.e. , edges ) between objects ( i.e. , nodes ) . In recent years , extensive studies have been conducted on GNNs for tasks such as node classification and link predication . GNNs utilize the relationship information in graph data and significant improvements over traditional methods have been achieved on benchmark datasets ( Kipf & Welling , 2017 ; Hamilton et al. , 2017 ; Velickovic et al. , 2018 ; Xu et al. , 2019 ; Hou et al. , 2019 ) . Such breakthrough results have led to the exploration of using GNNs and their variants in different areas such as computer vision ( Satorras & Estrach , 2018 ; Marino et al. , 2017 ) , natural language processing ( Peng et al. , 2018 ; Yao et al. , 2019 ) , chemistry ( Duvenaud et al. , 2015 ) , biology ( Fout et al. , 2017 ) , and social networks ( Wang et al. , 2018 ) . Thus , understanding why GNNs can outperform traditional methods that are designed for Euclidean data is important . Such understanding can help us analyze the performance of existing GNN models and develop new GNN models for different types of graphs . In this paper , we make two main contributions : ( 1 ) two graph smoothness metrics to help understand the use of graph information in GNNs , and ( 2 ) a new GNN model that improves the use of graph information using the smoothness values . We elaborate the two contributions as follows . One main reason why GNNs outperform existing Euclidean-based methods is because rich information from the neighborhood of an object can be captured . GNNs collect neighborhood information with aggregators ( Zhou et al. , 2018 ) , such as the mean aggregator that takes the mean value of neighbors ’ feature vectors ( Hamilton et al. , 2017 ) , the sum aggregator that applies summation ( Duvenaud et al. , 2015 ) , and the attention aggregator that takes the weighted sum value ( Velickovic et al. , 2018 ) . Then , the aggregated vector and a node ’ s own feature vector are combined into a new feature vector . After some rounds , the feature vectors of nodes can be used for tasks such as node classification . Thus , the performance improvement brought by graph data is highly related to the quantity and quality of the neighborhood information . To this end , we propose two smoothness metrics on node features and labels to measure the quantity and quality of neighborhood information of nodes . The metrics are used to analyze the performance of existing GNNs on different types of graphs . In practice , not all neighbors of a node contain relevant information w.r.t . a specific task . Thus , neighborhood provides both positive information and negative disturbance for a given task . Simply aggregating the feature vectors of neighbors with manually-picked aggregators ( i.e. , users choose a type of aggregator for different graphs and tasks by trial or by experience ) often can not achieve optimal performance . To address this problem , we propose a new model , CS-GNN , which uses the smoothness metrics to selectively aggregate neighborhood information to amplify useful information and reduce negative disturbance . Our experiments validate the effectiveness of our two smoothness metrics and the performance improvements obtained by CS-GNN over existing methods . 2 MEASURING THE USEFULNESS OF NEIGHBORHOOD INFORMATION . 
We first introduce a general GNN framework and three representative GNN models , which show how existing GNNs aggregate neighborhood information . Then we propose two smoothness metrics to measure the quantity and quality of the information that nodes obtain from their neighbors . 2.1 GNN FRAMEWORK AND MODELS . The notations used in this paper , together with their descriptions , are listed in Appendix A . We use G = { V , E } to denote a graph , where V and E represent the set of nodes and edges of G. We use ev , v′ ∈ E to denote the edge that connects nodes v and v′ , and Nv = { v′ : ev , v′ ∈ E } to denote the set of neighbors of a node v ∈ V . Each node v ∈ V has a feature vector xv ∈ X with dimension d. Consider a node classification task , for each node v ∈ V with a class label yv , the goal is to learn a representation vector hv and a mapping function f ( · ) to predict the class label yv of node v , i.e. , ŷv = f ( hv ) where ŷv is the predicted label . GNNs are inspired by the Weisfeiler-Lehman test ( Weisfeiler & Lehman , 1968 ; Shervashidze et al. , 2011 ) , which is an effective method for graph isomorphism . Similarly , GNNs utilize a neighborhood aggregation scheme to learn a representation vector hv for each node v , and then use neural networks to learn a mapping function f ( · ) . Formally , consider the general GNN framework ( Hamilton et al. , 2017 ; Zhou et al. , 2018 ; Xu et al. , 2019 ) in Table 1 with K rounds of neighbor aggregation . In each round , only the features of 1-hop neighbors are aggregated , and the framework consists of two functions , AGGREGATE and COMBINE . We initialize h ( 0 ) v = xv . After K rounds of aggregation , each node v ∈ V obtains its representation vector h ( K ) v . We use h ( K ) v and a mapping function f ( · ) , e.g. , a fully connected layer , to obtain the final results for a specific task such as node classification . Many GNN models have been proposed . We introduce three representative ones : Graph Convolutional Networks ( GCN ) ( Kipf & Welling , 2017 ) , GraphSAGE ( Hamilton et al. , 2017 ) , and Graph Attention Networks ( GAT ) ( Velickovic et al. , 2018 ) . GCN merges the combination and aggregation functions , as shown in Table 1 , where A ( · ) represents the activation function and W is a learnable parameter matrix . Different from GCN , GraphSAGE uses concatenation ‘ || ’ as the combination function , which can better preserve a node ’ s own information . Different aggregators ( e.g. , mean , max pooling ) are provided in GraphSAGE . However , GraphSAGE requires users to choose an aggregator to use for different graphs and tasks , which may lead to sub-optimal performance . GAT addresses this problem by an attention mechanism that learns coefficients of neighbors for aggregation . With the learned coefficients a ( k−1 ) i , j on all the edges ( including self-loops ) , GAT aggregates neighbors with a weighted sum aggregator . The attention mechanism can learn coefficients of neighbors in different graphs and achieves significant improvements over prior GNN models . 2.2 GRAPH SMOOTHNESS METRICS . GNNs usually contain an aggregation step to collect neighboring information and a combination step that merges this information with node features . We consider the context cv of node v as the node ’ s own information , which is initialized as the feature vector xv of v. We use sv to denote the surrounding of v , which represents the aggregated feature vector computed from v ’ s neighbors . 
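To make the AGGREGATE/COMBINE framework concrete, here is a rough NumPy sketch of one round using a mean aggregator and a GraphSAGE-style concatenation as the combine step; the weight matrix, activation, and toy graph are placeholders rather than the exact update rules of GCN, GraphSAGE, or GAT.

```python
import numpy as np

def gnn_round(h, neighbors, W):
    """One round of the general framework: AGGREGATE 1-hop neighbors, then COMBINE."""
    n, d = h.shape
    new_h = np.zeros((n, W.shape[1]))
    for v, nbrs in enumerate(neighbors):
        # AGGREGATE: mean of the neighbors' vectors (coefficients 1 / |N_v|).
        s_v = h[nbrs].mean(axis=0) if nbrs else np.zeros(d)
        # COMBINE: GraphSAGE-style concatenation of own vector and surrounding, then a linear map.
        combined = np.concatenate([h[v], s_v])
        new_h[v] = np.maximum(combined @ W, 0.0)   # ReLU as a placeholder activation
    return new_h

# Toy example: 3 nodes with 2-dimensional features and hypothetical weights.
h0 = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
adj = [[1, 2], [0], [0]]
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # maps the concatenated (2 + 2)-dim vector to 3 dims
print(gnn_round(h0, adj, W).shape)   # (3, 3)
```

Stacking K such rounds yields the representation vectors h_v^(K) that are then fed to the mapping function f for node classification.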
Since the neighborhood aggregation can be seen as a convolution operation on a graph ( Defferrard et al. , 2016 ) , we generalize the aggregator as weight linear combination , which can be used to express most existing aggregators . Then , we can re-formulate the general GNN framework as a context-surrounding framework with two mapping functions f1 ( · ) and f2 ( · ) in round k as : c ( k ) vi = f1 ( c ( k−1 ) vi , s ( k−1 ) vi ) , s ( k−1 ) vi = f2 ( ∑ vj∈Nvi a ( k−1 ) i , j · c ( k−1 ) vj ) . ( 1 ) From equation ( 1 ) , the key difference between GNNs and traditional neural-network-based methods for Euclidean data is that GNNs can integrate extra information from the surrounding of a node into its context . In graph signal processing ( Ortega et al. , 2018 ) , features on nodes are regarded as signals and it is common to assume that observations contain both noises and true signals in a standard signal processing problem ( Rabiner & Gold , 1975 ) . Thus , we can decompose a context vector into two parts as c ( k ) vi = c̆ ( k ) vi + n̆ ( k ) vi , where c̆ ( k ) vi is the true signal and n̆ ( k ) vi is the noise . Theorem 1 . Assume that the noise n̆ ( k ) vi follows the same distribution for all nodes . If the noise power of n̆ ( k ) vi is defined by its variance σ2 , then the noise power of the surrounding input∑ vj∈Nvi a ( k−1 ) i , j · c ( k−1 ) vj is ∑ vj∈Nvi ( a ( k−1 ) i , j ) 2 · σ2 . The proof can be found in Appendix B. Theorem 1 shows that the surrounding input has less noise power than the context when a proper aggregator ( i.e. , coefficient a ( k−1 ) i , j ) is used . Specifically , the mean aggregator has the best denoising performance and the pooling aggregator ( e.g. , max-pooling ) can not reduce the noise power . For the sum aggregator , where all coefficients are equal to 1 , the noise power of the surrounding input is larger than that of the context . 2.2.1 FEATURE SMOOTHNESS . We first analyze the information gain from the surrounding without considering the noise . In the extreme case when the context is the same as the surrounding input , the surrounding input contributes no extra information to the context . To quantify the information obtained from the surrounding , we present the following definition based on information theory . Definition 2 ( Information Gain from Surrounding ) . For normalized feature space Xk = [ 0 , 1 ] dk , if∑ vj∈Nvi a ( k ) i , j = 1 , the feature space of ∑ vj∈Nvi a ( k ) i , j · c̆ ( k ) vj is also in Xk = [ 0 , 1 ] dk . The probability density function ( PDF ) of c̆ ( k ) vj over Xk is defined as C ( k ) , which is the ground truth and can be estimated by nonparametric methods with a set of samples , where each sample point c̆ ( k ) vi is sampled with probability |Nvi |/2|E| . Correspondingly , the PDF of ∑ vj∈Nvi a ( k ) i , j · c̆ ( k ) vj is S ( k ) , which can be estimated with a set of samples { ∑ vj∈Nvi a ( k ) i , j · c̆ ( k ) vj } , where each point is sampled with probability |Nvi |/2|E| . The information gain from the surrounding in round k can be computed by Kullback–Leibler divergence ( Kullback & Leibler , 1951 ) as DKL ( S ( k ) ||C ( k ) ) = ∫ Xk S ( k ) ( x ) · log S ( k ) ( x ) C ( k ) ( x ) dx . The Kullback–Leibler divergence is a measure of information loss when the context distribution is used to approximate the surrounding distribution ( Kurt , 2017 ) . Thus , we can use the divergence to measure the information gain from the surrounding into the context of a node . 
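Because the ground-truth PDFs C and S of Definition 2 are unknown in practice, one informal way to build intuition for the information gain is a histogram estimate of DKL(S||C) from sampled contexts and mean-aggregated surroundings. The sketch below uses synthetic 1-d features and is not the estimator the paper relies on (the paper instead uses the smoothness metric lambda_f introduced next).

```python
import numpy as np

def kl_from_samples(s_samples, c_samples, bins=20):
    """Crude histogram estimate of D_KL(S || C) over the normalized space [0, 1]."""
    eps = 1e-9
    s_hist, edges = np.histogram(s_samples, bins=bins, range=(0.0, 1.0), density=True)
    c_hist, _ = np.histogram(c_samples, bins=edges, density=True)
    width = edges[1] - edges[0]
    s_p, c_p = s_hist * width + eps, c_hist * width + eps   # per-bin probabilities
    return float(np.sum(s_p * np.log(s_p / c_p)))

# Synthetic 1-d features: uniform contexts, surroundings obtained by averaging 5 "neighbors".
rng = np.random.default_rng(0)
contexts = rng.uniform(0.0, 1.0, size=1000)
surroundings = rng.uniform(0.0, 1.0, size=(1000, 5)).mean(axis=1)
print("estimated D_KL(S||C) =", kl_from_samples(surroundings, contexts))
```

Averaging concentrates the surrounding distribution relative to the context distribution, so the estimated divergence is strictly positive, in line with the discussion above.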
When all the context vectors are equal to their surrounding inputs , the distribution of the context is totally the same with that of the surrounding . In this case , the divergence is equal to 0 , which means that there is no extra information that the context can obtain from the surrounding . On the other hand , if the context and the surrounding of a node have different distributions , the divergence value is strictly positive . Note that in practice , the ground-truth distributions of the context and surrounding signals are unknown . In addition , for learnable aggregators , e.g. , the attention aggregator , the coefficients are unknown . Thus , we propose a metric λf to estimate the divergence . Graph smoothness ( Zhou & Schölkopf , 2004 ) is an effective measure of the signal frequency in graph signal processing ( Rabiner & Gold , 1975 ) . Inspired by that , we define the feature smoothness on a graph . Definition 3 ( Feature Smoothness ) . Consider the condition of the first round , where c ( 0 ) v = xv , we define the feature smoothness λf over normalized space X = [ 0 , 1 ] d as λf = ∣∣∣∣∣∣∑v∈V ( ∑v′∈Nv ( xv − xv′ ) ) 2∣∣∣∣∣∣1 |E| · d , where || · ||1 is the Manhattan norm . According to Definition 3 , a larger λf indicates that the feature signal of a graph has higher frequency , meaning that the feature vectors xv and xv′ are more likely dissimilar for two connected nodes v and v′ in the graph . In other words , nodes with dissimilar features tend to be connected . Intuitively , for a graph whose feature sets have high frequency , the context of a node can obtain more information gain from its surrounding . This is because the PDFs ( given in Definition 2 ) of the context and the surrounding have the same probability but fall in different places in space X . Formally , we state the relation between λf and the information gain from the surrounding in the following theorem . For simplicity , we let X = X0 , d = d0 , C = C ( 0 ) and S = S ( 0 ) . Theorem 4 . For a graph G with the set of features X in space [ 0 , 1 ] d and using the mean aggregator , the information gain from the surrounding DKL ( S||C ) is positively correlated to its feature smoothness λf , i.e. , DKL ( S||C ) ∼ λf . In particular , DKL ( S||C ) = 0 when λf = 0 . The proof can be found in Appendix C. According to Theorem 4 , a large λf means that a GNN model can obtain much information from graph data . Note that DKL ( S||C ) here is under the condition when using the mean aggregator . Others aggregators , e.g. , pooling and weight could have different DKL ( S||C ) values , even if the feature smoothness λf is a constant .
The paper proposes two graph smoothness metrics for measuring the usefulness of graph information. The feature smoothness indicates how much information can be gained by aggregating neighboring nodes, while the label smoothness assesses the quality of this information. The authors show that Graph Neural Networks (GNNs) work best for tasks with high feature smoothness and low label smoothness, where information is drawn from surrounding nodes that also tend to have the same label. Based on these two metrics, the authors introduce a framework, called Context-Surrounding Graph Neural Network (CS-GNN), that utilizes important information from neighboring nodes of the same label while reducing the disturbance from neighboring nodes of different classes. The results demonstrate considerable improvement across 5 different tasks.
SP:ff4879a21fee38d85c20afbb9c7fcac541ee3714
Under what circumstances do local codes emerge in feed-forward neural networks
1 INTRODUCTION . With neural networks ( NNs ) being widely deployed in various tasks it is essential to understand how they work and what data is used to make their decisions . NNs used to be viewed as ‘ black boxes ’ , but recent results ( Nguyen et al. , 2016 ) have started to open that box . NNs came from the field of psychology as simple bio-inspired models , it has been debated whether information is represented in the brain in a distributed manner ( from parallel distributed processing , PDP ) or via a localist coding scheme . Although the distributed approach was the most popular , there are some results in neuroscience ( Quiroga et al. , 2005 ) and psychology ( McClelland & Rumelhart , 1981 ) that are commensurate with a localist coding scheme , including a report of LCs in RNNs ( Bowers et al. , 2014 ) . Recently , there has been an explosion of interest in NNs , especially deepNNs , as these algorithms are now commercially relevant , and this increase in their accuracy has been credited to sources of vastly more labelled data and novel training techniques like dropout ( Srivastava et al. , 2014 ) . Many newer researchers in NNs were perhaps unaware of the distributedlocalist coding debate within psychology , and thus looked for localist-like codes in their NNs , and found indicative ( of LC coding scheme ) evidence of detectors for objects ( Zhou et al. , 2018 ; 2015 ) , concepts ( Karpathy et al. , 2016 ; Lakretz et al. , 2019 ) , features ( Nguyen et al. , 2019 ; Erhan et al. , 2009 ) , textures ( Olah et al. , 2017 ) , single directions ( Morcos et al. , 2018 ) etc. , see ( Bowers , 2017 ) for a review . With faster and larger computers , it is possible , even with the increase in input data size , for deepNNs to ‘ memorise ’ the data-set ( an extreme form of overfitting ) : a process where the NN has simply learned a mapping between input and output vectors , as opposed to learning a rule which will allow it to generalise to unseen data that follows the underlying rule . Generalisation performance is often improved if NN training is stopped early , often when a validation set loss ( val loss ) stops improving , as the NN is prevented from further minimising its loss function by memorising the input ( training ) data . Single directions ( Morcos et al. , 2018 ) have been implicated in memorization of the data-set . Localist codes ( coding for a class A ) are defined as units which are activated at a high ( low ) level for all members ( that the NN gets correct ) and low ( high ) level for all members of the other classes ( class ¬A ) , i.e . the set of activations for class A is disjoint from the activations for class ¬A ( see figure 8 in the appendix ) , and these codes are very strict measure of selectivity . As such , LCs are very easy to interpret , and the presence of them in NNs would make it easy to understand how the NN is working . This paper takes no position on whether or not localist codes exist in the brain or in deep-NNs . instead we take the constructionist science approach of asking when would we expect LCs to appear , and what aspects of the system , data-set and training conditions favour or disfavour their emergence . As NNs are considered ( simplified ) models for the brain , we can also take into account biological plausibility . We hypothesized that LCs should emerge when there was an invariant in the data-set . 
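The localist-code definition above can be operationalised as a disjointness test on a unit's activations. The following sketch is an illustration of the stated definition, not the authors' released code, and assumes activations are collected only on items the network classifies correctly.

```python
import numpy as np

def is_local_code(unit_acts, labels, target_class):
    """True if the unit's activations for `target_class` are disjoint from
    (entirely above or entirely below) its activations for all other classes."""
    a_in = unit_acts[labels == target_class]
    a_out = unit_acts[labels != target_class]
    if a_in.size == 0 or a_out.size == 0:
        return False
    return a_in.min() > a_out.max() or a_in.max() < a_out.min()

def count_local_codes(hidden_acts, labels, classes):
    """hidden_acts: (n_samples, n_units) hidden activations on correctly classified items."""
    return sum(
        is_local_code(hidden_acts[:, u], labels, c)
        for u in range(hidden_acts.shape[1])
        for c in classes
    )
```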
As deep-NNs take a long time to train , it is hard to get representative statistics , so we look at very simple networks ( shallow : 3-layer and pseudo-deep : 4-layer ) where it is possible to do hundreds of repeats and thus get resilient trends . The main insight of this work is that LCs do emerge when there is an invariant in the data . To set up a system with such a ‘ short-cut ’ we use a simple binary vectors as inputs , which are built from prototypes , such that there are 1/10 input bits that are always 1 for each class and these are the invariant bits , the 0s of each prototype are then filled in with a random mix of 1 and 0 of a known weight , see figure 1 , thus , a given bit is always on for a given class , and maybe on or of for other classes . Note also , that in this set up , if the proportional weight of the prototype exceeds that of the random vector , then vectors belonging to the same class are ‘ closer ’ 1 to each other than those of separate classes , i.e . there is a larger between-class variance than within-class variance . The prototypes are also perturbed to increase the variance of the ‘ invariant ’ bits . If one views a deep convNN as a feature extraction machine ( lower and convolutional layers ) with a feature classification NN on top ( the higher fully connected layers ) , then it is reasonable to suppose that a given class is likely to share features at the top convolutional layer , which would result in the activation vectors at that layer having a higher between group variance than within group , or possibly even invariant features for a class ( perhaps object detectors ) , and so these experiments could give insight into the representation of data in the ‘ fc ’ layers of deep-NNs . 1.1 FINDINGS . 1 . LCs related to lower within-class variance than between-class variance in input 2 . LCs related to a NN internalising a rule 3 . No . of LCs related to difficulty of the problem and the computing power of the NN , with different behaviour for under- and over-resourced NNs . 4 . Large values of dropout increases LCs 5 . LCs correlate with generalisation performance 6 . Large data-sets , softmax and aggressive early stopping reduce the number of LCs 7 . Monitoring the number of LCs can be useful for figuring out when to stop training 2 METHODOLOGY . Data design Data input to a neural network can be understood as a code , { Cx } , with each trained input data vector designated as a codeword , Cx . The size of the code is related to the number of codewords ( i.e . the size of the training set ) , nx . Lx is the length of the codeword , generally 500bits in this paper . We used a binary alphabet , and the number of 1s in a codeword is the weight2 , wx of that codeword . 1There are two metrics that are relevant to measuring the distance between these vectors , the Hamming distance which is the number of bits that have to be switched to turn one vector into another and the cosine similarity , which is the angle between the vectors , we use Hamming distance here . 2This weight definition is not the same as connection weights in the neural network . To create a set of nP classes with a known structural similarity , the procedure in figure 1 was followed . We start with a set of nP prototypes , { Px , 1 ≤ x ≤ nP } , with blocks of 1s of length LP /nP , called prototype blocks , which code for a class . 
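The data-construction recipe just described (a prototype block of invariant 1s per class, the remaining positions filled from a random vector of known weight, with optional perturbation of the prototype block) can be sketched as follows; the parameter values echo experiment 1, but the function itself is an illustrative reconstruction rather than the authors' code.

```python
import numpy as np

def make_codeword(class_idx, L=500, n_classes=10, w_random=150,
                  perturbation=0.0, rng=np.random.default_rng(0)):
    """One codeword: an invariant prototype block of 1s for its class, the remaining
    positions filled from a random vector of weight w_random (here S_R = 150/450 = 1/3)."""
    block = L // n_classes                      # length of the prototype block (50 bits)
    start = class_idx * block
    code = np.zeros(L, dtype=int)
    code[start:start + block] = 1               # the invariant bits for this class
    if perturbation > 0:                        # optionally flip some prototype bits to 0
        code[start:start + block][rng.random(block) < perturbation] = 0
    other = np.setdiff1d(np.arange(L), np.arange(start, start + block))
    code[rng.choice(other, size=w_random, replace=False)] = 1
    return code

X = np.stack([make_codeword(i % 10) for i in range(500)])   # 500 training codewords
y = np.arange(500) % 10
```

By construction, members of the same class share the invariant block, so their within-class Hamming distances are smaller than the between-class distances whenever the prototype weight dominates the random weight.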
For example , if Lx were 12 and nP were 3 : P1 = [ 111100000000 ] and P2 = [ 000011110000 ] , P3 = [ 000000001111 ] , and this would gives prototypes that are a Hamming distance of 8 apart , and thus we know that our prototypes span inputdata space . To create members of each class , the prototype is used as a mask , with the 0 blocks replaced by blocks from a random vector , Rx . The weight of the random vectors , wR can be tuned to ensure that a set of vectors randomly clustered around the prototype vector are generated , such that members of the same category are closer to each other than those of the other categories ( N.B . the prototypes are not members of the category ) . A more realistic data-set is created by allowing the prototypes to be perturbed so that a percentage of the prototype block is randomly switched to 0s each time a new codeword is created , in accordance with the perturbation rate ( see P ′2 in figure 1 ) . This method creates a code with a known number of invariant bits for codewords in the same category . For example , in figure 1 , codewords C1 and C2 were both derived from P1 , and have a Hamming distance of 6 , where as C1 and C4 are in different classes and have a Hamming distance of 8 . Note , the difference between these numbers is larger in our experiments as Lx = 500 and there are 10 categories . We define ‘ sparseness ’ of a vector , Sx , as the fraction of bits that are ‘ 1 ’ s . LCs were highly unlikely to appear in the input code , and none were observed in random checks . Neural network design We have three-layer feed-forward network with Lx input neurons , nHLN hidden layer neurons ( HLN ) and Lo output neurons , using a sigmoidal activation function and no softmax on the output . For experiment 1 Lx is 500bits , mapped to 10 output classes , so the weight of prototype vectors wp , is 50 , and the wR is 150 ( so Sr=1/3 ) , the output vector is a 50bit-long distributed vector ( wo=25 ) . NNs were trained for 45,000 epochs , and each plotted point is a number of repeats between 10 and 15 . Experiment 2-7 varied thus : 2 : nx = { 250 , 500 } ; 3 : SR = { 1/9 , 2/9 , 1/3 } ; 4 : Lx = { 300 , 700 , 1000 } ; 5 : activation function= { ReLU , sigmoid } ; 6 : output vector is distributed or 1-hot ; 7 : takes nHLN = 500 , 1000 and decays thewP from 50 to 25 ( P from 1 to 0.5 ) . Experiment 8 is ∼250 repeats of experiment 1 , with values of dropout in { 0 , 0.2 , 0.5 , 0.7 , 0.9 } . Experiment 9 is a repeat of 1 , with activation noise added for networks with nHLN = { 100 , 500 , 1000 } . Experiment 10 measures the number of LCs over training time for nHLN = { 250 , 500 , 1000 , 2000 } . To do generalisation tests , a new test set is built with the same parameters as the training set , with ntrain = 10000 and applied to pre-run results ( from experiments 8 and 10 ) . Experiment 11 : for the 4-layer neural networks , nHLN of the first hidden layer is varied , the second is set to 250 , the output vectors are 1-HOT and different training parameters and values are given in table 11 . Experiment 12 : For the MLP experiments , we use MNIST data-set , with added 20 pixel invariants that code for the class which are either non-varying ( ‘ invariant ’ ) or drawn from a Gaussian distribution ( ‘ Gaussian ’ ) , see table 8 in the appendix . The invariant was either not applied ( ‘ standard ’ ) or applied to 50 % of 100 % of all images , or applied to the whole of 2 , 5 or 8 categories , and data is from 10 repeats . 
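A plausible Keras reconstruction of the three-layer network described above (sigmoid activations, no softmax, a 50-bit distributed target vector); the optimizer and loss are assumptions, as they are not specified in this excerpt.

```python
import tensorflow as tf

L_x, n_hln, L_o = 500, 500, 50   # input bits, hidden-layer neurons, output bits

model = tf.keras.Sequential([
    tf.keras.layers.Dense(n_hln, activation="sigmoid", input_shape=(L_x,)),
    # tf.keras.layers.Dropout(0.5),                    # dropout variant of experiment 8
    tf.keras.layers.Dense(L_o, activation="sigmoid"),  # no softmax on the output
])
model.compile(optimizer="adam",              # assumed; the optimizer is not stated in this excerpt
              loss="binary_crossentropy",    # assumed; targets are 50-bit distributed vectors
              metrics=["binary_accuracy"])
# model.fit(X, targets, epochs=45000, verbose=0)       # training length as described above
```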
Experiment 13 : To see if LCs were associated with memorising the dataset , we trained NNs where the codewords were shuffled before being assigned to targets . NNs were run in Keras with a TensorFlow backend . Accuracy and generalisation . The accuracy reported by Keras counts an output vector as correct if each bit is within 50 % of the correct value , e.g .. the output vector [ 0.4 , 0.6 ] would map to the target vector [ 0 , 1 ] ( the outputs are binary ) , and in standard classification neural networks with a 1- hot target vectors these outputs would be very close to the target after the softmax operation . Thus , we label this accuracy ‘ classification accuracy ’ . If one wants to use a NN as a pattern matching machine , then one could put an arbitrary limit on how big an error between the output values and the targets , we chose 10 % , and the target [ 0 , 1 ] would need an output vector of [ ≤ 0.1 , ≥ 0.9 ] to be considered correct . We call this more stringent condition the ‘ pattern matching accuracy. ’ As the codes are built to a rule , it is possible to generate an arbitrarily large code , thus our test sets are at least 10 times larger than the training sets .
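The two accuracy notions above reduce to element-wise tolerance checks; this is a sketch of the stated definitions, with a tolerance of 0.5 for the 'classification accuracy' and 0.1 for the stricter 'pattern matching accuracy'.

```python
import numpy as np

def bitwise_accuracy(outputs, targets, tolerance):
    """Fraction of samples whose every output bit is within `tolerance` of its binary target."""
    return float(np.all(np.abs(outputs - targets) <= tolerance, axis=1).mean())

outputs = np.array([[0.4, 0.6], [0.05, 0.95]])
targets = np.array([[0.0, 1.0], [0.0, 1.0]])
print("classification accuracy  :", bitwise_accuracy(outputs, targets, 0.5))   # 1.0
print("pattern matching accuracy:", bitwise_accuracy(outputs, targets, 0.1))   # 0.5
```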
I have a lot of questions about the data used in the experiments. They are created according to the method explained in “Data design” (p.2). It is also summarized in the last paragraph of the first section as follows: “there are 1/10 input bits that are always 1 for each class and these are the invariant bits, the 0s of each prototype are then filled in with a random mix of 1 and 0 of a known weight”. What is the intention behind this way of creating data? How general are the data created in this way, and the analyses based on them? It seems to me that the data, and thus the analyses, lack the generality needed for understanding the behavior of neural networks on real tasks/data.
SP:975ca0db2d36004b48911303fd7fd8b61e956774
Under what circumstances do local codes emerge in feed-forward neural networks
1 INTRODUCTION . With neural networks ( NNs ) being widely deployed in various tasks it is essential to understand how they work and what data is used to make their decisions . NNs used to be viewed as ‘ black boxes ’ , but recent results ( Nguyen et al. , 2016 ) have started to open that box . NNs came from the field of psychology as simple bio-inspired models , it has been debated whether information is represented in the brain in a distributed manner ( from parallel distributed processing , PDP ) or via a localist coding scheme . Although the distributed approach was the most popular , there are some results in neuroscience ( Quiroga et al. , 2005 ) and psychology ( McClelland & Rumelhart , 1981 ) that are commensurate with a localist coding scheme , including a report of LCs in RNNs ( Bowers et al. , 2014 ) . Recently , there has been an explosion of interest in NNs , especially deepNNs , as these algorithms are now commercially relevant , and this increase in their accuracy has been credited to sources of vastly more labelled data and novel training techniques like dropout ( Srivastava et al. , 2014 ) . Many newer researchers in NNs were perhaps unaware of the distributedlocalist coding debate within psychology , and thus looked for localist-like codes in their NNs , and found indicative ( of LC coding scheme ) evidence of detectors for objects ( Zhou et al. , 2018 ; 2015 ) , concepts ( Karpathy et al. , 2016 ; Lakretz et al. , 2019 ) , features ( Nguyen et al. , 2019 ; Erhan et al. , 2009 ) , textures ( Olah et al. , 2017 ) , single directions ( Morcos et al. , 2018 ) etc. , see ( Bowers , 2017 ) for a review . With faster and larger computers , it is possible , even with the increase in input data size , for deepNNs to ‘ memorise ’ the data-set ( an extreme form of overfitting ) : a process where the NN has simply learned a mapping between input and output vectors , as opposed to learning a rule which will allow it to generalise to unseen data that follows the underlying rule . Generalisation performance is often improved if NN training is stopped early , often when a validation set loss ( val loss ) stops improving , as the NN is prevented from further minimising its loss function by memorising the input ( training ) data . Single directions ( Morcos et al. , 2018 ) have been implicated in memorization of the data-set . Localist codes ( coding for a class A ) are defined as units which are activated at a high ( low ) level for all members ( that the NN gets correct ) and low ( high ) level for all members of the other classes ( class ¬A ) , i.e . the set of activations for class A is disjoint from the activations for class ¬A ( see figure 8 in the appendix ) , and these codes are very strict measure of selectivity . As such , LCs are very easy to interpret , and the presence of them in NNs would make it easy to understand how the NN is working . This paper takes no position on whether or not localist codes exist in the brain or in deep-NNs . instead we take the constructionist science approach of asking when would we expect LCs to appear , and what aspects of the system , data-set and training conditions favour or disfavour their emergence . As NNs are considered ( simplified ) models for the brain , we can also take into account biological plausibility . We hypothesized that LCs should emerge when there was an invariant in the data-set . 
As deep-NNs take a long time to train , it is hard to get representative statistics , so we look at very simple networks ( shallow : 3-layer and pseudo-deep : 4-layer ) where it is possible to do hundreds of repeats and thus get resilient trends . The main insight of this work is that LCs do emerge when there is an invariant in the data . To set up a system with such a ‘ short-cut ’ we use a simple binary vectors as inputs , which are built from prototypes , such that there are 1/10 input bits that are always 1 for each class and these are the invariant bits , the 0s of each prototype are then filled in with a random mix of 1 and 0 of a known weight , see figure 1 , thus , a given bit is always on for a given class , and maybe on or of for other classes . Note also , that in this set up , if the proportional weight of the prototype exceeds that of the random vector , then vectors belonging to the same class are ‘ closer ’ 1 to each other than those of separate classes , i.e . there is a larger between-class variance than within-class variance . The prototypes are also perturbed to increase the variance of the ‘ invariant ’ bits . If one views a deep convNN as a feature extraction machine ( lower and convolutional layers ) with a feature classification NN on top ( the higher fully connected layers ) , then it is reasonable to suppose that a given class is likely to share features at the top convolutional layer , which would result in the activation vectors at that layer having a higher between group variance than within group , or possibly even invariant features for a class ( perhaps object detectors ) , and so these experiments could give insight into the representation of data in the ‘ fc ’ layers of deep-NNs . 1.1 FINDINGS . 1 . LCs related to lower within-class variance than between-class variance in input 2 . LCs related to a NN internalising a rule 3 . No . of LCs related to difficulty of the problem and the computing power of the NN , with different behaviour for under- and over-resourced NNs . 4 . Large values of dropout increases LCs 5 . LCs correlate with generalisation performance 6 . Large data-sets , softmax and aggressive early stopping reduce the number of LCs 7 . Monitoring the number of LCs can be useful for figuring out when to stop training 2 METHODOLOGY . Data design Data input to a neural network can be understood as a code , { Cx } , with each trained input data vector designated as a codeword , Cx . The size of the code is related to the number of codewords ( i.e . the size of the training set ) , nx . Lx is the length of the codeword , generally 500bits in this paper . We used a binary alphabet , and the number of 1s in a codeword is the weight2 , wx of that codeword . 1There are two metrics that are relevant to measuring the distance between these vectors , the Hamming distance which is the number of bits that have to be switched to turn one vector into another and the cosine similarity , which is the angle between the vectors , we use Hamming distance here . 2This weight definition is not the same as connection weights in the neural network . To create a set of nP classes with a known structural similarity , the procedure in figure 1 was followed . We start with a set of nP prototypes , { Px , 1 ≤ x ≤ nP } , with blocks of 1s of length LP /nP , called prototype blocks , which code for a class . 
For example , if Lx were 12 and nP were 3 : P1 = [ 111100000000 ] and P2 = [ 000011110000 ] , P3 = [ 000000001111 ] , and this would gives prototypes that are a Hamming distance of 8 apart , and thus we know that our prototypes span inputdata space . To create members of each class , the prototype is used as a mask , with the 0 blocks replaced by blocks from a random vector , Rx . The weight of the random vectors , wR can be tuned to ensure that a set of vectors randomly clustered around the prototype vector are generated , such that members of the same category are closer to each other than those of the other categories ( N.B . the prototypes are not members of the category ) . A more realistic data-set is created by allowing the prototypes to be perturbed so that a percentage of the prototype block is randomly switched to 0s each time a new codeword is created , in accordance with the perturbation rate ( see P ′2 in figure 1 ) . This method creates a code with a known number of invariant bits for codewords in the same category . For example , in figure 1 , codewords C1 and C2 were both derived from P1 , and have a Hamming distance of 6 , where as C1 and C4 are in different classes and have a Hamming distance of 8 . Note , the difference between these numbers is larger in our experiments as Lx = 500 and there are 10 categories . We define ‘ sparseness ’ of a vector , Sx , as the fraction of bits that are ‘ 1 ’ s . LCs were highly unlikely to appear in the input code , and none were observed in random checks . Neural network design We have three-layer feed-forward network with Lx input neurons , nHLN hidden layer neurons ( HLN ) and Lo output neurons , using a sigmoidal activation function and no softmax on the output . For experiment 1 Lx is 500bits , mapped to 10 output classes , so the weight of prototype vectors wp , is 50 , and the wR is 150 ( so Sr=1/3 ) , the output vector is a 50bit-long distributed vector ( wo=25 ) . NNs were trained for 45,000 epochs , and each plotted point is a number of repeats between 10 and 15 . Experiment 2-7 varied thus : 2 : nx = { 250 , 500 } ; 3 : SR = { 1/9 , 2/9 , 1/3 } ; 4 : Lx = { 300 , 700 , 1000 } ; 5 : activation function= { ReLU , sigmoid } ; 6 : output vector is distributed or 1-hot ; 7 : takes nHLN = 500 , 1000 and decays thewP from 50 to 25 ( P from 1 to 0.5 ) . Experiment 8 is ∼250 repeats of experiment 1 , with values of dropout in { 0 , 0.2 , 0.5 , 0.7 , 0.9 } . Experiment 9 is a repeat of 1 , with activation noise added for networks with nHLN = { 100 , 500 , 1000 } . Experiment 10 measures the number of LCs over training time for nHLN = { 250 , 500 , 1000 , 2000 } . To do generalisation tests , a new test set is built with the same parameters as the training set , with ntrain = 10000 and applied to pre-run results ( from experiments 8 and 10 ) . Experiment 11 : for the 4-layer neural networks , nHLN of the first hidden layer is varied , the second is set to 250 , the output vectors are 1-HOT and different training parameters and values are given in table 11 . Experiment 12 : For the MLP experiments , we use MNIST data-set , with added 20 pixel invariants that code for the class which are either non-varying ( ‘ invariant ’ ) or drawn from a Gaussian distribution ( ‘ Gaussian ’ ) , see table 8 in the appendix . The invariant was either not applied ( ‘ standard ’ ) or applied to 50 % of 100 % of all images , or applied to the whole of 2 , 5 or 8 categories , and data is from 10 repeats . 
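The Hamming-distance bookkeeping in the 12-bit worked example above is easy to verify directly (the prototype vectors below are the ones given in the text):

```python
import numpy as np

def hamming(a, b):
    return int(np.sum(np.array(a) != np.array(b)))

P1 = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
P2 = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
P3 = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
print(hamming(P1, P2), hamming(P1, P3), hamming(P2, P3))   # 8 8 8
```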
Experiment 13 : To see if LCs were associated with memorising the dataset , we trained NNs where the codewords were shuffled before being assigned to targets . NNs were run in Keras with a TensorFlow backend . Accuracy and generalisation . The accuracy reported by Keras counts an output vector as correct if each bit is within 50 % of the correct value , e.g .. the output vector [ 0.4 , 0.6 ] would map to the target vector [ 0 , 1 ] ( the outputs are binary ) , and in standard classification neural networks with a 1- hot target vectors these outputs would be very close to the target after the softmax operation . Thus , we label this accuracy ‘ classification accuracy ’ . If one wants to use a NN as a pattern matching machine , then one could put an arbitrary limit on how big an error between the output values and the targets , we chose 10 % , and the target [ 0 , 1 ] would need an output vector of [ ≤ 0.1 , ≥ 0.9 ] to be considered correct . We call this more stringent condition the ‘ pattern matching accuracy. ’ As the codes are built to a rule , it is possible to generate an arbitrarily large code , thus our test sets are at least 10 times larger than the training sets .
This paper aims to study when hidden units provide local codes by analyzing the hidden units of trained fully connected classification networks under various architectures and regularizers. The main text primarily studies networks trained on a dataset where binary inputs are structured to represent 10 classes, with each input containing a subset of elements indicative of the class label. The work also studies fully connected networks trained on the MNIST dataset (with the addition of some pixels indicating each class label). After enumerating the number of local codes observed under these different settings, the authors conclude the following: (1) "common" properties of deep neural networks and modern datasets seem to decrease the number of local codes; (2) specific architectural, regularization, and dataset choices seem to increase the number of local codes (e.g., increasing dropout, decreasing dataset size, using sigmoidal activations, etc.). The work then states that these insights may suggest how to train networks so that local codes emerge.
SP:975ca0db2d36004b48911303fd7fd8b61e956774
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
1 INTRODUCTION . Recent studies on mode connectivity show that two independently trained deep neural network ( DNN ) models with the same architecture and loss function can be connected on their loss landscape using a high-accuracy/low-loss path characterized by a simple curve ( Garipov et al. , 2018 ; Gotmare et al. , 2018 ; Draxler et al. , 2018 ) . This insight on the loss landscape geometry provides us with easy access to a large number of similar-performing models on the low-loss path between two given models , and Garipov et al . ( 2018 ) use this to devise a new model ensembling method . Another line of recent research reveals interesting geometric properties relating to adversarial robustness of DNNs ( Fawzi et al. , 2017 ; 2018 ; Wang et al. , 2018b ; Yu et al. , 2018 ) . An adversarial data or model is defined to be one that is close to a bonafide data or model in some space , but exhibits unwanted or malicious behavior . Motivated by these geometric perspectives , in this study , we propose to employ mode connectivity to study and improve adversarial robustness of DNNs against different types of threats . A DNN can be possibly tampered by an adversary during different phases in its life cycle . For example , during the training phase , the training data can be corrupted with a designated trigger pattern associated with a target label to implant a backdoor for trojan attack on DNNs ( Gu et al. , 2019 ; Liu et al. , 2018 ) . During the inference phase when a trained model is deployed for task-solving , prediction-evasive attacks are plausible ( Biggio & Roli , 2018 ; Goodfellow et al. , 2015 ; Zhao et al. , 2018 ) , even when the model internal details are unknown to an attacker ( Chen et al. , 2017 ; Ilyas et al. , 2018 ; Zhao et al. , 2019a ) . In this research , we will demonstrate that by using mode connectivity in loss landscapes , we can repair backdoored or error-injected DNNs . We also show that mode 1The code is available at https : //github.com/IBM/model-sanitization connectivity analysis reveals the existence of a robustness loss barrier on the path connecting regular and adversarially-trained models . We motivate the novelty and benefit of using mode connectivity for mitigating training-phase adversarial threats through the following practical scenario : as training DNNs is both time- and resource-consuming , it has become a common trend for users to leverage pre-trained models released in the public domain2 . Users may then perform model fine-tuning or transfer learning with a small set of bonafide data that they have . However , publicly available pre-trained models may carry an unknown but significant risk of tampering by an adversary . It can also be challenging to detect this tampering , as in the case of a backdoor attack3 , since a backdoored model will behave like a regular model in the absence of the embedded trigger . Therefore , it is practically helpful to provide tools to users who wish to utilize pre-trained models while mitigating such adversarial threats . We show that our proposed method using mode connectivity with limited amount of bonafide data can repair backdoored or error-injected DNNs , while greatly countering their adversarial effects . Our main contributions are summarized as follows : • For backdoor and error-injection attacks , we show that the path trained using limited bonafide data connecting two tampered models can be used to repair and redeem the attacked models , thereby resulting in high-accuracy and low-risk models . 
The performance of mode connectivity is significantly better than several baselines including fine-tuning , training from scratch , pruning , and random weight perturbations . We also provide technical explanations for the effectiveness of our path connection method based on model weight space exploration and similarity analysis of input gradients for clean and tampered data . • For evasion attacks , we use mode connectivity to study standard and adversarial-robustness loss landscapes . We find that between a regular and an adversarially-trained model , training a path with standard loss reveals no barrier , whereas the robustness loss on the same path reveals a barrier . This insight provides a geometric interpretation of the “ no free lunch ” hypothesis in adversarial robustness ( Tsipras et al. , 2019 ; Dohmatob , 2018 ; Bubeck et al. , 2019 ) . We also provide technical explanations for the high correlation observed between the robustness loss and the largest eigenvalue of the input Hessian matrix on the path . • Our experimental results on different DNN architectures ( ResNet and VGG ) and datasets ( CIFAR10 and SVHN ) corroborate the effectiveness of using mode connectivity in loss landscapes to understand and improve adversarial robustness . We also show that our path connection is resilient to the considered adaptive attacks that are aware of our defense . To the best of our knowledge , this is the first work that proposes using mode connectivity approaches for adversarial robustness . 2 BACKGROUND AND RELATED WORK . 2.1 MODE CONNECTIVITY IN LOSS LANDSCAPES . Let w1 and w2 be two sets of model weights corresponding to two neural networks independently trained by minimizing any user-specified loss l ( w ) , such as the cross-entropy loss . Moreover , let φθ ( t ) with t ∈ [ 0 , 1 ] be a continuous piece-wise smooth parametric curve , with parameters θ , such that its two ends are φθ ( 0 ) = w1 and φθ ( 1 ) = w2 . To find a high-accuracy path between w1 and w2 , it is proposed to find the parameters θ that minimize the expectation over a uniform distribution on the curve ( Garipov et al. , 2018 ) , L ( θ ) = Et∼qθ ( t ) [ l ( φθ ( t ) ) ] ( 1 ) where qθ ( t ) is the distribution for sampling the models on the path indexed by t. Since qθ ( t ) depends on θ , in order to render the training of high-accuracy path connection more computationally tractable , ( Garipov et al. , 2018 ; Gotmare et al. , 2018 ) proposed to instead use the following loss term , L ( θ ) = Et∼U ( 0,1 ) [ l ( φθ ( t ) ) ] ( 2 ) 2For example , the Model Zoo project : https : //modelzoo.co 3See the recent call for proposals on Trojans in AI announced by IARPA : https : //www.iarpa.gov/ index.php/research-programs/trojai/trojai-baa where U ( 0 , 1 ) is the uniform distribution on [ 0 , 1 ] . The following functions are commonly used for characterizing the parametric curve function φθ ( t ) . Polygonal chain ( Gomes et al. , 2012 ) . The two trained networks w1 and w2 serve as the endpoints of the chain and the bends of the chain are parameterized by θ . For instance , the case of a chain with one bend is φθ ( t ) = { 2 ( tθ + ( 0.5− t ) ω1 ) , 0 ≤ t ≤ 0.5 2 ( ( t− 0.5 ) ω2 + ( 1− t ) θ ) , 0.5 ≤ t ≤ 1 . ( 3 ) Bezier curve ( Farouki , 2012 ) . A Bezier curve provides a convenient parametrization of smoothness on the paths connecting endpoints . For instance , a quadratic Bezier curve with endpoints w1 and w2 is given by φθ ( t ) = ( 1− t ) 2ω1 + 2t ( 1− t ) θ + t2ω2 , 0 ≤ t ≤ 1 . 
( 4 ) It is worth noting that , while current research on mode connectivity mainly focuses on generalization analysis ( Garipov et al. , 2018 ; Gotmare et al. , 2018 ; Draxler et al. , 2018 ; Wang et al. , 2018a ) and has found remarkable applications such as fast model ensembling ( Garipov et al. , 2018 ) , our results show that its implication on adversarial robustness through the lens of loss landscape analysis is a promising , yet largely unexplored , research direction . Yu et al . ( 2018 ) scratched the surface but focused on interpreting decision surface of input space and only considered evasion attacks . 2.2 BACKDOOR , EVASION , AND ERROR-INJECTION ADVERSARIAL ATTACKS . Backdoor attack . Backdoor attack on DNNs is often accomplished by designing a designated trigger pattern with a target label implanted to a subset of training data , which is a specific form of data poisoning ( Biggio et al. , 2012 ; Shafahi et al. , 2018 ; Jagielski et al. , 2018 ) . A backdoored model trained on the corrupted data will output the target label for any data input with the trigger ; and it will behave as a normal model when the trigger is absent . For mitigating backdoor attacks , majority of research focuses on backdoor detection or filtering anomalous data samples from training data for re-training ( Chen et al. , 2018 ; Wang et al. , 2019 ; Tran et al. , 2018 ) , while our aim is to repair backdoored models for models using mode connectivity and limited amount of bonafide data . Evasion attack . Evasion attack is a type of inference-phase adversarial threat that generates adversarial examples by mounting slight modification on a benign data sample to manipulate model prediction ( Biggio & Roli , 2018 ) . For image classification models , evasion attack can be accomplished by adding imperceptible noises to natural images and resulting in misclassification ( Goodfellow et al. , 2015 ; Carlini & Wagner , 2017 ; Xu et al. , 2018 ) . Different from training-phase attacks , evasion attack does not assume access to training data . Moreover , it can be executed even when the model details are unknown to an adversary , via black-box or transfer attacks ( Papernot et al. , 2017 ; Chen et al. , 2017 ; Zhao et al. , 2020 ) . Error-injection attack . Different from attacks modifying data inputs , error-injection attack injects errors to model weights at the inference phase and aims to cause misclassification of certain input samples ( Liu et al. , 2017 ; Zhao et al. , 2019b ) . At the hardware level of a deployed machine learning system , it can be made plausible via laser beam ( Barenghi et al. , 2012 ) and row hammer ( Van Der Veen et al. , 2016 ) to change or flip the logic values of the corresponding bits and thus modifying the model parameters saved in memory . 3 MAIN RESULTS . Here we report the experimental results , provide technical explanations , and elucidate the effectiveness of using mode connectivity for studying and enhancing adversarial robustness in three representative themes : ( i ) backdoor attack ; ( ii ) error-injection attack ; and ( iii ) evasion attack . Our experiments were conducted on different network architectures ( VGG and ResNet ) and datasets ( CIFAR-10 and SVHN ) . The details on experiment setups are given in Appendix A . When connecting models , we use the cross entropy loss and the quadratic Bezier curve as described in ( 4 ) . 
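To make equations (2) and (4) concrete, the sketch below builds the quadratic Bezier interpolation of two flattened weight vectors and samples t ~ U(0, 1) to estimate the path-training objective; the loss function and weight vectors are placeholders for whatever model and task loss one actually uses, and theta would in practice be optimized by gradient descent on this estimate.

```python
import numpy as np

def bezier_point(t, w1, theta, w2):
    """Quadratic Bezier curve of equation (4): phi_theta(t) with endpoints w1 and w2."""
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * theta + t ** 2 * w2

def path_loss_estimate(loss_fn, w1, theta, w2, n_samples=8, rng=np.random.default_rng(0)):
    """Monte-Carlo estimate of E_{t ~ U(0,1)}[ l(phi_theta(t)) ] from equation (2)."""
    ts = rng.uniform(0.0, 1.0, size=n_samples)
    return float(np.mean([loss_fn(bezier_point(t, w1, theta, w2)) for t in ts]))

# Hypothetical flattened weight vectors of two independently trained models.
rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=100), rng.normal(size=100)
theta = 0.5 * (w1 + w2)                      # a typical initialization: midpoint of the endpoints
toy_loss = lambda w: float(np.sum(w ** 2))   # stand-in for the task's cross-entropy loss
print(path_loss_estimate(toy_loss, w1, theta, w2))
```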
In what follows , we begin by illustrating the problem setups bridging mode connectivity and adversarial robustness , summarizing the results of high-accuracy ( low-loss ) pathways between untampered models for reference , and then delving into detailed discussions . Depending on the context , we will use the terms error rate and accuracy on clean/adversarial samples interchangeably . The error rate of adversarial samples is equivalent to their attack failure rate , i.e. , 100 % minus the attack accuracy .
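A sketch, under the assumptions noted in the comments, of how models along a trained connection could be scanned to record clean accuracy together with attack success rate at each sampled t; build_model_from_weights, evaluate_accuracy, clean_loader, and triggered_loader are hypothetical stand-ins rather than functions from the released code.

```python
import numpy as np

def scan_path(t_grid, w1, theta, w2, build_model_from_weights, evaluate_accuracy,
              clean_loader, triggered_loader):
    """Evaluate models phi_theta(t) along a trained Bezier connection:
    clean accuracy and attack success rate at each sampled t."""
    results = []
    for t in t_grid:
        w_t = (1 - t) ** 2 * w1 + 2 * t * (1 - t) * theta + t ** 2 * w2
        model = build_model_from_weights(w_t)                    # hypothetical helper
        clean_acc = evaluate_accuracy(model, clean_loader)       # hypothetical evaluation routine
        attack_sr = evaluate_accuracy(model, triggered_loader)   # fraction hitting the target label
        results.append((t, clean_acc, attack_sr))
    return results

# Models away from the tampered endpoints (i.e., in the middle of the path) are the
# candidates for a repaired, high-accuracy / low-attack-success model described above.
# results = scan_path(np.linspace(0.0, 1.0, 11), w1, theta, w2, ...)
```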
This paper studies leveraging mode connectivity to defend against different types of attacks, including backdoor attacks, adversarial examples, and error-injection attacks. The authors perform a comprehensive evaluation showing the benign test accuracy and attack success rate of the models on the connected path between pairs of models with the same or different properties, e.g., both attacked, both benign, or one attacked and one benign, where the connected path is learned using existing algorithms for finding a high-accuracy path between two different models. Their evaluation suggests that in certain attack scenarios, exploring mode connectivity can help find a model that has high benign accuracy and a significantly lower attack success rate than the models at the end points.
SP:a77494ee26aff245e217b630d3212aeee3d4496c
Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
1 INTRODUCTION . Recent studies on mode connectivity show that two independently trained deep neural network ( DNN ) models with the same architecture and loss function can be connected on their loss landscape using a high-accuracy/low-loss path characterized by a simple curve ( Garipov et al. , 2018 ; Gotmare et al. , 2018 ; Draxler et al. , 2018 ) . This insight on the loss landscape geometry provides us with easy access to a large number of similar-performing models on the low-loss path between two given models , and Garipov et al . ( 2018 ) use this to devise a new model ensembling method . Another line of recent research reveals interesting geometric properties relating to adversarial robustness of DNNs ( Fawzi et al. , 2017 ; 2018 ; Wang et al. , 2018b ; Yu et al. , 2018 ) . An adversarial data or model is defined to be one that is close to a bonafide data or model in some space , but exhibits unwanted or malicious behavior . Motivated by these geometric perspectives , in this study , we propose to employ mode connectivity to study and improve adversarial robustness of DNNs against different types of threats . A DNN can be possibly tampered by an adversary during different phases in its life cycle . For example , during the training phase , the training data can be corrupted with a designated trigger pattern associated with a target label to implant a backdoor for trojan attack on DNNs ( Gu et al. , 2019 ; Liu et al. , 2018 ) . During the inference phase when a trained model is deployed for task-solving , prediction-evasive attacks are plausible ( Biggio & Roli , 2018 ; Goodfellow et al. , 2015 ; Zhao et al. , 2018 ) , even when the model internal details are unknown to an attacker ( Chen et al. , 2017 ; Ilyas et al. , 2018 ; Zhao et al. , 2019a ) . In this research , we will demonstrate that by using mode connectivity in loss landscapes , we can repair backdoored or error-injected DNNs . We also show that mode 1The code is available at https : //github.com/IBM/model-sanitization connectivity analysis reveals the existence of a robustness loss barrier on the path connecting regular and adversarially-trained models . We motivate the novelty and benefit of using mode connectivity for mitigating training-phase adversarial threats through the following practical scenario : as training DNNs is both time- and resource-consuming , it has become a common trend for users to leverage pre-trained models released in the public domain2 . Users may then perform model fine-tuning or transfer learning with a small set of bonafide data that they have . However , publicly available pre-trained models may carry an unknown but significant risk of tampering by an adversary . It can also be challenging to detect this tampering , as in the case of a backdoor attack3 , since a backdoored model will behave like a regular model in the absence of the embedded trigger . Therefore , it is practically helpful to provide tools to users who wish to utilize pre-trained models while mitigating such adversarial threats . We show that our proposed method using mode connectivity with limited amount of bonafide data can repair backdoored or error-injected DNNs , while greatly countering their adversarial effects . Our main contributions are summarized as follows : • For backdoor and error-injection attacks , we show that the path trained using limited bonafide data connecting two tampered models can be used to repair and redeem the attacked models , thereby resulting in high-accuracy and low-risk models . 
The performance of mode connectivity is significantly better than several baselines including fine-tuning , training from scratch , pruning , and random weight perturbations . We also provide technical explanations for the effectiveness of our path connection method based on model weight space exploration and similarity analysis of input gradients for clean and tampered data . • For evasion attacks , we use mode connectivity to study standard and adversarial-robustness loss landscapes . We find that between a regular and an adversarially-trained model , training a path with standard loss reveals no barrier , whereas the robustness loss on the same path reveals a barrier . This insight provides a geometric interpretation of the “ no free lunch ” hypothesis in adversarial robustness ( Tsipras et al. , 2019 ; Dohmatob , 2018 ; Bubeck et al. , 2019 ) . We also provide technical explanations for the high correlation observed between the robustness loss and the largest eigenvalue of the input Hessian matrix on the path . • Our experimental results on different DNN architectures ( ResNet and VGG ) and datasets ( CIFAR10 and SVHN ) corroborate the effectiveness of using mode connectivity in loss landscapes to understand and improve adversarial robustness . We also show that our path connection is resilient to the considered adaptive attacks that are aware of our defense . To the best of our knowledge , this is the first work that proposes using mode connectivity approaches for adversarial robustness . 2 BACKGROUND AND RELATED WORK . 2.1 MODE CONNECTIVITY IN LOSS LANDSCAPES . Let w1 and w2 be two sets of model weights corresponding to two neural networks independently trained by minimizing any user-specified loss l ( w ) , such as the cross-entropy loss . Moreover , let φθ ( t ) with t ∈ [ 0 , 1 ] be a continuous piece-wise smooth parametric curve , with parameters θ , such that its two ends are φθ ( 0 ) = w1 and φθ ( 1 ) = w2 . To find a high-accuracy path between w1 and w2 , it is proposed to find the parameters θ that minimize the expectation over a uniform distribution on the curve ( Garipov et al. , 2018 ) , L ( θ ) = Et∼qθ ( t ) [ l ( φθ ( t ) ) ] ( 1 ) where qθ ( t ) is the distribution for sampling the models on the path indexed by t. Since qθ ( t ) depends on θ , in order to render the training of high-accuracy path connection more computationally tractable , ( Garipov et al. , 2018 ; Gotmare et al. , 2018 ) proposed to instead use the following loss term , L ( θ ) = Et∼U ( 0,1 ) [ l ( φθ ( t ) ) ] ( 2 ) 2For example , the Model Zoo project : https : //modelzoo.co 3See the recent call for proposals on Trojans in AI announced by IARPA : https : //www.iarpa.gov/ index.php/research-programs/trojai/trojai-baa where U ( 0 , 1 ) is the uniform distribution on [ 0 , 1 ] . The following functions are commonly used for characterizing the parametric curve function φθ ( t ) . Polygonal chain ( Gomes et al. , 2012 ) . The two trained networks w1 and w2 serve as the endpoints of the chain and the bends of the chain are parameterized by θ . For instance , the case of a chain with one bend is φθ ( t ) = { 2 ( tθ + ( 0.5− t ) ω1 ) , 0 ≤ t ≤ 0.5 2 ( ( t− 0.5 ) ω2 + ( 1− t ) θ ) , 0.5 ≤ t ≤ 1 . ( 3 ) Bezier curve ( Farouki , 2012 ) . A Bezier curve provides a convenient parametrization of smoothness on the paths connecting endpoints . For instance , a quadratic Bezier curve with endpoints w1 and w2 is given by φθ ( t ) = ( 1− t ) 2ω1 + 2t ( 1− t ) θ + t2ω2 , 0 ≤ t ≤ 1 . 
( 4 ) It is worth noting that , while current research on mode connectivity mainly focuses on generalization analysis ( Garipov et al. , 2018 ; Gotmare et al. , 2018 ; Draxler et al. , 2018 ; Wang et al. , 2018a ) and has found remarkable applications such as fast model ensembling ( Garipov et al. , 2018 ) , our results show that its implication on adversarial robustness through the lens of loss landscape analysis is a promising , yet largely unexplored , research direction . Yu et al . ( 2018 ) scratched the surface but focused on interpreting decision surface of input space and only considered evasion attacks . 2.2 BACKDOOR , EVASION , AND ERROR-INJECTION ADVERSARIAL ATTACKS . Backdoor attack . Backdoor attack on DNNs is often accomplished by designing a designated trigger pattern with a target label implanted to a subset of training data , which is a specific form of data poisoning ( Biggio et al. , 2012 ; Shafahi et al. , 2018 ; Jagielski et al. , 2018 ) . A backdoored model trained on the corrupted data will output the target label for any data input with the trigger ; and it will behave as a normal model when the trigger is absent . For mitigating backdoor attacks , majority of research focuses on backdoor detection or filtering anomalous data samples from training data for re-training ( Chen et al. , 2018 ; Wang et al. , 2019 ; Tran et al. , 2018 ) , while our aim is to repair backdoored models for models using mode connectivity and limited amount of bonafide data . Evasion attack . Evasion attack is a type of inference-phase adversarial threat that generates adversarial examples by mounting slight modification on a benign data sample to manipulate model prediction ( Biggio & Roli , 2018 ) . For image classification models , evasion attack can be accomplished by adding imperceptible noises to natural images and resulting in misclassification ( Goodfellow et al. , 2015 ; Carlini & Wagner , 2017 ; Xu et al. , 2018 ) . Different from training-phase attacks , evasion attack does not assume access to training data . Moreover , it can be executed even when the model details are unknown to an adversary , via black-box or transfer attacks ( Papernot et al. , 2017 ; Chen et al. , 2017 ; Zhao et al. , 2020 ) . Error-injection attack . Different from attacks modifying data inputs , error-injection attack injects errors to model weights at the inference phase and aims to cause misclassification of certain input samples ( Liu et al. , 2017 ; Zhao et al. , 2019b ) . At the hardware level of a deployed machine learning system , it can be made plausible via laser beam ( Barenghi et al. , 2012 ) and row hammer ( Van Der Veen et al. , 2016 ) to change or flip the logic values of the corresponding bits and thus modifying the model parameters saved in memory . 3 MAIN RESULTS . Here we report the experimental results , provide technical explanations , and elucidate the effectiveness of using mode connectivity for studying and enhancing adversarial robustness in three representative themes : ( i ) backdoor attack ; ( ii ) error-injection attack ; and ( iii ) evasion attack . Our experiments were conducted on different network architectures ( VGG and ResNet ) and datasets ( CIFAR-10 and SVHN ) . The details on experiment setups are given in Appendix A . When connecting models , we use the cross entropy loss and the quadratic Bezier curve as described in ( 4 ) . 
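For concreteness, the curve parametrizations in Eq. (3) and Eq. (4) and the Monte-Carlo estimate of the path loss in Eq. (2) can be sketched as follows. This is an illustrative sketch rather than the authors' released code: the helper names, the toy loss, and the midpoint initialization of θ are assumptions chosen for exposition.

```python
import numpy as np

def bezier_point(t, w1, w2, theta):
    """Quadratic Bezier curve of Eq. (4): phi_theta(t) with endpoints w1, w2
    and trainable bend theta, evaluated at t in [0, 1]."""
    return (1 - t) ** 2 * w1 + 2 * t * (1 - t) * theta + t ** 2 * w2

def chain_point(t, w1, w2, theta):
    """One-bend polygonal chain of Eq. (3); an alternative parametrization."""
    if t <= 0.5:
        return 2 * (t * theta + (0.5 - t) * w1)
    return 2 * ((t - 0.5) * w2 + (1 - t) * theta)

def path_loss(loss_fn, w1, w2, theta, num_samples=8, rng=None):
    """Monte-Carlo estimate of L(theta) = E_{t~U(0,1)}[l(phi_theta(t))], Eq. (2).
    loss_fn maps a flattened weight vector to a scalar loss on bonafide data."""
    rng = rng or np.random.default_rng(0)
    ts = rng.uniform(0.0, 1.0, size=num_samples)
    return float(np.mean([loss_fn(bezier_point(t, w1, w2, theta)) for t in ts]))

# Toy usage: two "models" are points in R^d, the loss is distance to a target.
d = 10
w1, w2, target = np.ones(d), -np.ones(d), np.zeros(d)
theta = 0.5 * (w1 + w2)             # common initialization: midpoint of the endpoints
loss = lambda w: float(np.sum((w - target) ** 2))
print(path_loss(loss, w1, w2, theta))
```

In practice θ is trained by repeatedly sampling t, evaluating the loss of the model with weights φθ(t) on the available bonafide data, and backpropagating through the curve parametrization to θ only, keeping the endpoints w1 and w2 fixed.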
In what follows, we begin by illustrating the problem setups bridging mode connectivity and adversarial robustness, summarizing the results of high-accuracy (low-loss) paths between untampered models for reference, and then delving into detailed discussions. Depending on the context, we use the terms error rate and accuracy on clean/adversarial samples interchangeably. The error rate of adversarial samples is equivalent to their attack failure rate, i.e., 100% minus the attack accuracy.
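To make the backdoor threat model of Section 2.2 concrete, a minimal sketch of the standard trigger-based data-poisoning construction is given below. The square corner trigger, poisoning fraction, and function names are illustrative assumptions and not necessarily the exact setup used in the experiments.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.1,
                   trigger_size=3, trigger_value=1.0, rng=None):
    """Implant a square trigger in the corner of a fraction of the training
    images and relabel them to the attacker's target class (BadNets-style).
    images: float array of shape (N, H, W, C) in [0, 1]; labels: int array (N,)."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n = images.shape[0]
    idx = rng.choice(n, size=int(poison_frac * n), replace=False)
    # Stamp the trigger into the bottom-right corner of the selected images.
    images[idx, -trigger_size:, -trigger_size:, :] = trigger_value
    labels[idx] = target_label
    return images, labels, idx

# Toy usage on random "CIFAR-like" data.
x = np.random.default_rng(1).uniform(size=(100, 32, 32, 3)).astype(np.float32)
y = np.random.default_rng(2).integers(0, 10, size=100)
x_p, y_p, poisoned_idx = poison_dataset(x, y, target_label=0)
print(len(poisoned_idx), "images poisoned")
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target label whenever the trigger is stamped on a test image, which is exactly the behavior the path-connection repair aims to remove.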
This paper proposes an adversarial defense method based on mode connectivity. The goal of the method is to repair tampered networks using a limited number of clean data examples. The authors consider two types of adversarial attacks: backdoor attacks and error-injection attacks. The proposed method takes two potentially tampered networks, constructs a low-loss path connecting their weight vectors in the space of network parameters (the path is constructed using the small set of clean data examples), and finally uses an intermediate point on the path as the weight vector of the “repaired” model. The authors analyze the properties of the paths and show that intermediate points on the mode-connecting paths deliver both high clean-data accuracy and low attack success rate. In the experiments, the proposed method shows better results than baseline defense techniques including fine-tuning, training from scratch, and pruning followed by fine-tuning. The paper also analyzes evasion adversarial attacks from the perspective of mode connectivity and observes the existence of barriers in the landscape of robustness loss on the paths connecting regular and adversarially-trained models.
SP:a77494ee26aff245e217b630d3212aeee3d4496c
Learning Structured Communication for Multi-agent Reinforcement Learning
1 INTRODUCTION . Reinforcement learning ( RL ) has achieved remarkable success in solving single-agent sequential decision problems under interactive and complicated environments , such as games ( Mnih et al. , 2015 ; Silver et al. , 2016 ) and robotics ( Lillicrap et al. , 2016 ) . In many real world applications such as intelligent transportation systems ( Adler & Blue , 2002 ) and unmanned systems ( Semsar-Kazerooni & Khorasani , 2009 ) , not only one , but usually a large number of agents are involved in the learning tasks . Such a setting naturally leads to the popular multi-agent reinforcement learning ( MARL ) problems , where the key research challenges include how to design scalable and efficient learning schemes under an unstationary environment ( caused by partial observation and/or the dynamics of other agents ’ policies ) , with large and/or dynamic problem dimension , and complicated and uncertain relationship between agents . Learning to communicate among agents has been regarded as an effective manner to strengthen the inter-agent collaboration and ultimately improve the quality of policies learned by MARL . Various communication-based MARL algorithms have been devised recently , e.g. , DIAL ( Foerster et al. , 2016 ) , CommNet ( Sukhbaatar et al. , 2016 ) , ATOC ( Jiang & Lu , 2018 ) , IC3Net ( Singh et al. , 2019 ) and TarMAC ( Das et al. , 2019 ) . These schemes aim to improve the inter-agent collaboration by learning communication strategy to exchange information between agents . However , there are still two bottlenecks unresolved , especially when faced a large number of agents . One bottleneck lies in that achieving effective communication and global collaboration is difficult with limited resources , such as narrow communication bandwidth and energy . In particular , DIAL and TarMAC require each agent to communicate with all the other agents , i.e. , a fully-connected communication network ( Figure 1 ( a ) ) , which is not feasible for large scale scenarios with geographically apart agents . CommNet and IC3 assume a star network ( Figure 1 ( b ) ) with a central node coordinating the global collaboration of agents , which again does not allow large scale scenarios with long range communications . ATOC introduces an interesting attention scheme to build a tree communication network ( Figure 1 ( c ) ) . While the tree network can be scaled , global collaboration has to be realized through inefficient multi-hop and sequential communications . In a word , improper communication topologies will limit the cooperation ability in large scale scenarios . Another bottleneck is the difficulty of extracting essential information to exchange between agents for achieving high-performance MARL , especially when the number of agents grows . Most of the existing works simply concatenate , take the mean or use the LSTM to extract information to be exchanged . First two lack in considering the inter-relationship between agents , and LSTM assumes that there is a fixed sequence of message passing between agents , that is , the relationship between agents is predefined . Recently , TarMAC utilized an attention scheme to aggregate messages by considering the relationship from each agent to all others . However , the improper communication topology still hinders the information extraction . The communication structure needs to be jointly designed with the information extraction scheme to achieve further improved learning performance . 
To address the above two issues , we propose a novel structured communication-based algorithm , called learning structured communication ( LSC ) . Our LSC combines a structured communication network module and a communication-based policy module , which aims to establish a scalable hierarchically structured network and information exchange scheme for large scale MARL . In particular , a hierarchically structured communication network ( Figure 1 ( d ) ) is dynamically learned based on local partial observations of agents . In the hierarchically structured network , all agents are grouped into clusters , where global collaboration can be achieved via intra-group and inter-group communications . In contrast to the other three types in Figure 1 , the proposed hierarchical communication network is more flexible and scalable , with fewer resources needed to achieve long-range and global collaboration . The procedure to establish such a hierarchically structured communication network is shown in Figure 2 . To better utilize the relationship between agents given the hierarchically structured communication network and obtain more effective information extraction , graph neural network ( GNN ) ( Scarselli et al. , 2008 ) is employed . In GNN , each communication step involves information embedding and aggregation . Benefiting from the unordered aggregation power and the dynamic graph adaptability of GNN , the proposed LSC algorithm can extract valuable information effectively . The GNN-based information extraction procedure is depicted in Figure 3 . This paper is devoted to the learning of communication structure among agents . To our knowledge , this is the first work of hierarchical structured learning to communication for MARL . It allows to learn communication structure adaptively instead of using predefined forms . Specifically : i ) To improve scalability for a large number of agents , a hierarchical structure is devised that divides the agents into higher-level central agents and sub-level normal ones . As such , the communication network is sparsified . While it still allows for more effective global cooperation via message passing among the central agents , compared with the star/tree structures . ii ) For effective communication and global cooperation , the message representation learning is deeply integrated into the information aggregating and permeating through the network , via graph neural network ( GNN ) , which is a natural combination with the hierarchical communication structure . iii ) Extensive experiments on both MAgent and StarCraft2 show our approach achieves state-of-theart scalability and effectiveness on large-scale MARL problems . 2 RELATED WORK AND PRELIMINARIES . Many multi-agent reinforcement learning algorithms without communication in the inference procedure have experienced fast development . Recent works like MADDPG ( Lowe et al. , 2017 ) , QMIX ( Rashid et al. , 2018 ) , COMA ( Foerster et al. , 2018 ) and MAAC ( Iqbal & Sha , 2019 ) adopt a centralized training and decentralized implementing framework . All agents ’ local observations and actions are considered to improve the learning stability . These algorithms are generally not suitable for large-scale case due to explosive growing number of agents . Communication-based MARL algorithms have been showed effective for large-scale agent cooperation . Earlier works assume that all agents need to communicate with each other . DIAL ( Foerster et al. 
, 2016 ) learns to communication through back-propagating all other agents ’ gradients to the message generator network . Similarly , CommNet ( Sukhbaatar et al. , 2016 ) sends all agents ’ hidden states to the shared communication channel and further learns the message based on the average of all other hidden states . MFRL ( Yang et al. , 2018 ) approximates the influence of other agents by averaging the actions of surrounding neighbor agents , which could mitigate the dimensional disaster for large-scale cases . However , this can be considered as a predefined communication pattern , which is unable to adapt to complex large-scale scenarios . Communication between all agents will lead to high communication complexity and difficulty of useful information extraction . DGN ( Jiang et al. , 2018 ) employs graph convolution network ( GCN ) to extract relationships between agents which could result in better collaboration . However , it considers all agents equivalently and assumes the communication of each agent has to involve all neighbor agents which limits to adapt to more practical bandwidth-limited environments . IC3Net ( Singh et al. , 2019 ) uses a communication gate to decide whether to communicate with the center , but adopt the same star structure like CommNet which requires high bandwidth and can hard to extract valuable information with only one center . ATOC ( Jiang & Lu , 2018 ) and TarMAC ( Das et al. , 2019 ) introduce the attention mechanism to determine when to communicate and whom to communicate with , respectively . TarMAC focuses more on message aggregation rather than the communication structure . SchedNet ( Kim et al. , 2019 ) aims to learn a weight-based scheduler to determine the communication sequence and priority . From the perspective of employing GNN into MARL , MAGNet ( Malysheva et al. , 2018 ) that utilizes a relevance graph representation of the environment and a message passing mechanism to help agents learning . However , it requires heuristic rules to establish the graph which is hard to achieve in complex environments . RFM ( Tacchetti et al. , 2019 ) use graph to represent the relationship between different entities , aiming to provide interpretable representations . Before the main method , we introduce some preliminaries to facilitate the presentation . Partial Observable Stochastic Games . In stochastic games , agents learn policies by maximizing their cumulative rewards through interacting with the environment and other agents . The partial observable stochastic games ( POSG ) can be characterized as a tuple 〈 I , S , b0 , A , O , P , Pe , R 〉 where I denotes the set of agents indexed from 1 to n ; S denotes the finite set of states ; b0 represents the initial state distribution and A denotes the set of joint actions . Ai is the action space of agent i , a = 〈a1 , · , an〉 denotes a joint action ; O denotes the joint observations and Oi is the observation space for agent i , o = 〈o1 , · , on〉 denotes a joint observation ; P denotes the Markovian transition distribution with P ( s̃ , o ∣∣s , a ) as the probability of state s transit to s̃ and result o after taking action a. Pe ( o|s ) is the Markovian observation emission probability function . R : S × A → Rn means the reward function for agents . The overall task of the MARL problem can be solved by properly objective function modeling , which also indicates the relationship among agents , e.g. , cooperation , competition or mixed . Graph Neural Network . Graph neural network ( GNN ) ( Scarselli et al. 
, 2008 ) is a deep embedding framework to handle graph-based data on a graph G = ( V , E ) . vi denotes the node feature vector for node vi ∈ V ( for Nv nodes ) , ek denotes the edge feature vector for edge ek ∈ E ( for Ne edges ) with rk , sk be the receiver and sender of edge ek respectively . The vector u denotes the global feature . The graph network framework in ( Battaglia et al. , 2018 ) is employed , which divides computation on graph data to several blocks to gain flexible processing ability . Each block introduces the aggregation and embedding functions to handle graph data . There are many variants of GNN , like messagepassing neural network ( Gilmer et al. , 2017 ) and non local neural networks ( Wang et al. , 2018 ) . By treating every agent as a node and each communication message exchanging as the edge in a graph , the observations and messages as the attributes of nodes and edges , respectively . The whole communication process can be formulated to a graph neural network . The relationships among agents can be effectively extracted to enable efficient communication message learning . Independent Deep Q-Learning . Deep Q-Network ( DQN ) ( Mnih et al. , 2015 ) is popular in deep reinforcement learning , which is one of the few RL algorithms applicable for large-scale MARL . In each step , each agent observes state s and takes an action a based on policy π . It receives reward r and next state s̃ from environment . To maximize the cumulative reward R = ∑ t rt , DQN learns the action-value function Qπ ( s , a ) = Es∼P , a∼π ( s ) [ Rt|st = s , at = a ] by minimizing L ( θ ) = Es , a , r , s̃ [ ỹ −Q ( s , a ; θ ) ] , where ỹ = r + γmaxãQ ( s̃ , ã ; θ ) . The agent follows -greedy policy , that is , selects the action that maximizes the Q-value with probability 1- or randomly . The Independent Deep Q-Learning ( IDQN ) ( Tampuu et al. , 2017 ) is an extension of DQN by ignoring the influence of other agents for multi-agent case . Every agent learns a Q-function Qa ( ua|s ; θa ) based on its own observation and received reward . Our algorithm employs DQN as the basic RL algorithm based on the following two considerations : 1 ) our algorithm is dedicated to discuss the learning communication mechanism in largescaleMARL scenarios , as a result we can choose a concise and effective basic RL algorithm like thewell-known DQN ; 2 ) data collection in large-scale MARL environments is extremely inefficiently , while DQN has excellent data efficiency as an offline RL algorithm .
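As a rough illustration of the embedding-and-aggregation view above (treating agents as nodes and messages as edges), the following sketch performs one communication round over an agent graph. The linear embeddings, tanh nonlinearities, and mean aggregation are illustrative choices rather than the exact LSC architecture.

```python
import numpy as np

def communication_step(node_feats, edges, w_msg, w_upd):
    """One GNN-style communication round over the agent graph.
    node_feats: (N, d) per-agent observation embeddings.
    edges: list of (sender, receiver) pairs defining who talks to whom.
    w_msg: (d, d) edge (message) embedding; w_upd: (2d, d) node update."""
    n, d = node_feats.shape
    agg = np.zeros((n, d))
    count = np.zeros(n)
    for s, r in edges:
        agg[r] += np.tanh(node_feats[s] @ w_msg)   # embed the sender's message
        count[r] += 1
    agg /= np.maximum(count, 1)[:, None]            # order-invariant mean aggregation
    # Update each node from its own features and the aggregated messages.
    return np.tanh(np.concatenate([node_feats, agg], axis=1) @ w_upd)

# Toy usage: 4 agents, a star graph with agent 0 as the central node.
rng = np.random.default_rng(0)
d = 8
feats = rng.normal(size=(4, d))
edges = [(1, 0), (2, 0), (3, 0), (0, 1), (0, 2), (0, 3)]
w_msg, w_upd = rng.normal(size=(d, d)), rng.normal(size=(2 * d, d))
print(communication_step(feats, edges, w_msg, w_upd).shape)  # (4, 8)
```

In the hierarchical setting, the edge list would contain intra-group edges between normal agents and their central agent plus inter-group edges among central agents, so global information propagates in a small number of such rounds.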
The authors learn structured communication patterns between multiple RL agents. Their framework uses a Structured Communication Network Module and a Communication-based Policy Module. These use a hierarchical decomposition of the multi-agent system and a graph neural network that operates over the resulting agent groups. The authors evaluate on two environments, where this approach outperforms other approaches to learning communication protocols.
SP:ac1536424c9e62fa3ea6c40507a90a720679b23d
Learning Structured Communication for Multi-agent Reinforcement Learning
1 INTRODUCTION . Reinforcement learning ( RL ) has achieved remarkable success in solving single-agent sequential decision problems under interactive and complicated environments , such as games ( Mnih et al. , 2015 ; Silver et al. , 2016 ) and robotics ( Lillicrap et al. , 2016 ) . In many real world applications such as intelligent transportation systems ( Adler & Blue , 2002 ) and unmanned systems ( Semsar-Kazerooni & Khorasani , 2009 ) , not only one , but usually a large number of agents are involved in the learning tasks . Such a setting naturally leads to the popular multi-agent reinforcement learning ( MARL ) problems , where the key research challenges include how to design scalable and efficient learning schemes under an unstationary environment ( caused by partial observation and/or the dynamics of other agents ’ policies ) , with large and/or dynamic problem dimension , and complicated and uncertain relationship between agents . Learning to communicate among agents has been regarded as an effective manner to strengthen the inter-agent collaboration and ultimately improve the quality of policies learned by MARL . Various communication-based MARL algorithms have been devised recently , e.g. , DIAL ( Foerster et al. , 2016 ) , CommNet ( Sukhbaatar et al. , 2016 ) , ATOC ( Jiang & Lu , 2018 ) , IC3Net ( Singh et al. , 2019 ) and TarMAC ( Das et al. , 2019 ) . These schemes aim to improve the inter-agent collaboration by learning communication strategy to exchange information between agents . However , there are still two bottlenecks unresolved , especially when faced a large number of agents . One bottleneck lies in that achieving effective communication and global collaboration is difficult with limited resources , such as narrow communication bandwidth and energy . In particular , DIAL and TarMAC require each agent to communicate with all the other agents , i.e. , a fully-connected communication network ( Figure 1 ( a ) ) , which is not feasible for large scale scenarios with geographically apart agents . CommNet and IC3 assume a star network ( Figure 1 ( b ) ) with a central node coordinating the global collaboration of agents , which again does not allow large scale scenarios with long range communications . ATOC introduces an interesting attention scheme to build a tree communication network ( Figure 1 ( c ) ) . While the tree network can be scaled , global collaboration has to be realized through inefficient multi-hop and sequential communications . In a word , improper communication topologies will limit the cooperation ability in large scale scenarios . Another bottleneck is the difficulty of extracting essential information to exchange between agents for achieving high-performance MARL , especially when the number of agents grows . Most of the existing works simply concatenate , take the mean or use the LSTM to extract information to be exchanged . First two lack in considering the inter-relationship between agents , and LSTM assumes that there is a fixed sequence of message passing between agents , that is , the relationship between agents is predefined . Recently , TarMAC utilized an attention scheme to aggregate messages by considering the relationship from each agent to all others . However , the improper communication topology still hinders the information extraction . The communication structure needs to be jointly designed with the information extraction scheme to achieve further improved learning performance . 
To address the above two issues , we propose a novel structured communication-based algorithm , called learning structured communication ( LSC ) . Our LSC combines a structured communication network module and a communication-based policy module , which aims to establish a scalable hierarchically structured network and information exchange scheme for large scale MARL . In particular , a hierarchically structured communication network ( Figure 1 ( d ) ) is dynamically learned based on local partial observations of agents . In the hierarchically structured network , all agents are grouped into clusters , where global collaboration can be achieved via intra-group and inter-group communications . In contrast to the other three types in Figure 1 , the proposed hierarchical communication network is more flexible and scalable , with fewer resources needed to achieve long-range and global collaboration . The procedure to establish such a hierarchically structured communication network is shown in Figure 2 . To better utilize the relationship between agents given the hierarchically structured communication network and obtain more effective information extraction , graph neural network ( GNN ) ( Scarselli et al. , 2008 ) is employed . In GNN , each communication step involves information embedding and aggregation . Benefiting from the unordered aggregation power and the dynamic graph adaptability of GNN , the proposed LSC algorithm can extract valuable information effectively . The GNN-based information extraction procedure is depicted in Figure 3 . This paper is devoted to the learning of communication structure among agents . To our knowledge , this is the first work of hierarchical structured learning to communication for MARL . It allows to learn communication structure adaptively instead of using predefined forms . Specifically : i ) To improve scalability for a large number of agents , a hierarchical structure is devised that divides the agents into higher-level central agents and sub-level normal ones . As such , the communication network is sparsified . While it still allows for more effective global cooperation via message passing among the central agents , compared with the star/tree structures . ii ) For effective communication and global cooperation , the message representation learning is deeply integrated into the information aggregating and permeating through the network , via graph neural network ( GNN ) , which is a natural combination with the hierarchical communication structure . iii ) Extensive experiments on both MAgent and StarCraft2 show our approach achieves state-of-theart scalability and effectiveness on large-scale MARL problems . 2 RELATED WORK AND PRELIMINARIES . Many multi-agent reinforcement learning algorithms without communication in the inference procedure have experienced fast development . Recent works like MADDPG ( Lowe et al. , 2017 ) , QMIX ( Rashid et al. , 2018 ) , COMA ( Foerster et al. , 2018 ) and MAAC ( Iqbal & Sha , 2019 ) adopt a centralized training and decentralized implementing framework . All agents ’ local observations and actions are considered to improve the learning stability . These algorithms are generally not suitable for large-scale case due to explosive growing number of agents . Communication-based MARL algorithms have been showed effective for large-scale agent cooperation . Earlier works assume that all agents need to communicate with each other . DIAL ( Foerster et al. 
, 2016 ) learns to communication through back-propagating all other agents ’ gradients to the message generator network . Similarly , CommNet ( Sukhbaatar et al. , 2016 ) sends all agents ’ hidden states to the shared communication channel and further learns the message based on the average of all other hidden states . MFRL ( Yang et al. , 2018 ) approximates the influence of other agents by averaging the actions of surrounding neighbor agents , which could mitigate the dimensional disaster for large-scale cases . However , this can be considered as a predefined communication pattern , which is unable to adapt to complex large-scale scenarios . Communication between all agents will lead to high communication complexity and difficulty of useful information extraction . DGN ( Jiang et al. , 2018 ) employs graph convolution network ( GCN ) to extract relationships between agents which could result in better collaboration . However , it considers all agents equivalently and assumes the communication of each agent has to involve all neighbor agents which limits to adapt to more practical bandwidth-limited environments . IC3Net ( Singh et al. , 2019 ) uses a communication gate to decide whether to communicate with the center , but adopt the same star structure like CommNet which requires high bandwidth and can hard to extract valuable information with only one center . ATOC ( Jiang & Lu , 2018 ) and TarMAC ( Das et al. , 2019 ) introduce the attention mechanism to determine when to communicate and whom to communicate with , respectively . TarMAC focuses more on message aggregation rather than the communication structure . SchedNet ( Kim et al. , 2019 ) aims to learn a weight-based scheduler to determine the communication sequence and priority . From the perspective of employing GNN into MARL , MAGNet ( Malysheva et al. , 2018 ) that utilizes a relevance graph representation of the environment and a message passing mechanism to help agents learning . However , it requires heuristic rules to establish the graph which is hard to achieve in complex environments . RFM ( Tacchetti et al. , 2019 ) use graph to represent the relationship between different entities , aiming to provide interpretable representations . Before the main method , we introduce some preliminaries to facilitate the presentation . Partial Observable Stochastic Games . In stochastic games , agents learn policies by maximizing their cumulative rewards through interacting with the environment and other agents . The partial observable stochastic games ( POSG ) can be characterized as a tuple 〈 I , S , b0 , A , O , P , Pe , R 〉 where I denotes the set of agents indexed from 1 to n ; S denotes the finite set of states ; b0 represents the initial state distribution and A denotes the set of joint actions . Ai is the action space of agent i , a = 〈a1 , · , an〉 denotes a joint action ; O denotes the joint observations and Oi is the observation space for agent i , o = 〈o1 , · , on〉 denotes a joint observation ; P denotes the Markovian transition distribution with P ( s̃ , o ∣∣s , a ) as the probability of state s transit to s̃ and result o after taking action a. Pe ( o|s ) is the Markovian observation emission probability function . R : S × A → Rn means the reward function for agents . The overall task of the MARL problem can be solved by properly objective function modeling , which also indicates the relationship among agents , e.g. , cooperation , competition or mixed . Graph Neural Network . Graph neural network ( GNN ) ( Scarselli et al. 
, 2008 ) is a deep embedding framework to handle graph-based data on a graph G = ( V , E ) . vi denotes the node feature vector for node vi ∈ V ( for Nv nodes ) , ek denotes the edge feature vector for edge ek ∈ E ( for Ne edges ) with rk , sk be the receiver and sender of edge ek respectively . The vector u denotes the global feature . The graph network framework in ( Battaglia et al. , 2018 ) is employed , which divides computation on graph data to several blocks to gain flexible processing ability . Each block introduces the aggregation and embedding functions to handle graph data . There are many variants of GNN , like messagepassing neural network ( Gilmer et al. , 2017 ) and non local neural networks ( Wang et al. , 2018 ) . By treating every agent as a node and each communication message exchanging as the edge in a graph , the observations and messages as the attributes of nodes and edges , respectively . The whole communication process can be formulated to a graph neural network . The relationships among agents can be effectively extracted to enable efficient communication message learning . Independent Deep Q-Learning . Deep Q-Network ( DQN ) ( Mnih et al. , 2015 ) is popular in deep reinforcement learning , which is one of the few RL algorithms applicable for large-scale MARL . In each step , each agent observes state s and takes an action a based on policy π . It receives reward r and next state s̃ from environment . To maximize the cumulative reward R = ∑ t rt , DQN learns the action-value function Qπ ( s , a ) = Es∼P , a∼π ( s ) [ Rt|st = s , at = a ] by minimizing L ( θ ) = Es , a , r , s̃ [ ỹ −Q ( s , a ; θ ) ] , where ỹ = r + γmaxãQ ( s̃ , ã ; θ ) . The agent follows -greedy policy , that is , selects the action that maximizes the Q-value with probability 1- or randomly . The Independent Deep Q-Learning ( IDQN ) ( Tampuu et al. , 2017 ) is an extension of DQN by ignoring the influence of other agents for multi-agent case . Every agent learns a Q-function Qa ( ua|s ; θa ) based on its own observation and received reward . Our algorithm employs DQN as the basic RL algorithm based on the following two considerations : 1 ) our algorithm is dedicated to discuss the learning communication mechanism in largescaleMARL scenarios , as a result we can choose a concise and effective basic RL algorithm like thewell-known DQN ; 2 ) data collection in large-scale MARL environments is extremely inefficiently , while DQN has excellent data efficiency as an offline RL algorithm .
This paper proposes a method of learning a hierarchical communication graph for improving collaborative multi-agent reinforcement learning, particularly with large numbers of agents. The method is compared to a suitable range of baseline approaches across two complex environments. The initial results presented seem promising, but further work is needed to ensure the results are reproducible and repeatable.
SP:ac1536424c9e62fa3ea6c40507a90a720679b23d
Siamese Attention Networks
1 INTRODUCTION . Deep learning networks with attention operators have demonstrated great capabilities of solving challenging problems in various tasks such as computer vision ( Xu et al. , 2015 ; Lu et al. , 2016 ) , natural language processing ( Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) , and network embedding ( Veličković et al. , 2017 ) . Attention operators are capable of capturing long-range relationships and brings significant performance boosts ( Li et al. , 2018 ; Malinowski et al. , 2018 ) . The application scenarios of attention operators range from 1-D data like texts to high-order and high-dimensional data such as images and videos . However , attention operators suffer from the excessive usage of computational resources when applied on high-order or high-dimensional data . The memory and computational cost increases dramatically with the increase of input orders and dimensions . This prevents attention operators from being applied in broader scenarios . To address this limitation , some studies focus on reducing spatial sizes of inputs such as down-sampling input data ( Wang et al. , 2018 ) or attending selected part of data ( Huang et al. , 2018 ) . However , such kind of methods inevitably results in information and performance loss . In this work , we propose a novel and efficient attention operator known as Siamese attention operator ( SAO ) to dramatically reduce the usage of computational resources . We observe that the excessive computational resource usage is mainly caused by the similarity function and coefficients normalization function used in attention operators . To address this limitation , we propose the Siamese similarity function that employs a feed-forward network to compute similarity scores . By applying the same network to both input vectors , Siamese similarity function processes the symmetry property . By using Siamese similarity function to compute similarity scores , we propose the Siamese attention operator , which results in a significant saving on computational resources . Based on the Siamese attention operator , we design a family of efficient modules , which leads to our compact deep models known as Siamese attention networks ( SANets ) . Our SANets significantly outperform other state-of-the-art compact models on image classification tasks . Experiments on image restoration tasks demonstrate that our methods are efficient and effective in general application scenarios . 2 BACKGROUND AND RELATED WORK . In this section , we describe the attention operator that has been widely applied in various tasks and on various types of data including texts , images , and videos . 2.1 ATTENTION OPERATOR . The inputs of an attention operator include three matrices ; those are a query matrix Q = [ q1 , q2 , · · · , qm ] ∈ Rd×m with each qi ∈ Rd , a key matrix K = [ k1 , k2 , · · · , kn ] ∈ Rd×n with each ki ∈ Rd , and a value matrix V = [ v1 , v2 , · · · , vn ] ∈ Rp×n with each vi ∈ Rp . To compute the response of each query vector qi , the attention operator calculates similarity scores between qi and each key vector kj using a similarity function . Frequently used similarity functions include dot product ( Luong et al. , 2015 ) , concatenation ( Bahdanau et al. , 2015 ) , Gaussian function , and embedded Gaussian function . It has been shown that dot product is the most effective one ( Wang et al. , 2018 ) , which computes sim ( qi , kj ) = kTj qi . 
After the normalization with a softmax operator , the response is computed by taking a weighted sum over value vectors ∑N j=1 vjsoftmax ( k T j qi ) . For all query vectors , the attention operator computes O = V softmax ( KTQ ) , ( 1 ) where softmax ( · ) is the column-wise softmax operator . The matrix multiplication between KT and Q computes a intermediate output matrix E that stores similarity scores between each query vector qi and each key vector kj . The column-wise softmax operator normalizes E and makes every column sum to 1 . Multiplication between V and the normalized E gives the output O ∈ Rp×m . Self-attention operator ( Vaswani et al. , 2017 ; Devlin et al. , 2018 ) is a special case of the attention operator with Q = K = V . In practice , we usually firstly perform linear transformations on input matrices . For notation simplicity , we use original input matrices in following discussions . The computational cost of the operations in Eq . ( 1 ) is O ( m× n× ( d+ p ) ) , and the memory required to store the intermediate output E is O ( m× n ) . If m = n and d = p , the time and space complexities of Eq . ( 1 ) are O ( n2 × d ) and O ( n2 ) , respectively . The matrix multiplication order in Eq . ( 1 ) is determined by the softmax operator , which acts as the normalization function . Wang et al . ( 2018 ) proposed to use scaling by 1/N as the normalization function on similarity scores . By this , the response of qi is calculated 1N ∑N j=1 vjk T j qi . The attention operator using scaling by 1/N computes all responses as : O = 1 N ( V KT ) Q . ( 2 ) By computing V KT first , the time and space complexities of Eq . ( 2 ) are O ( Nd2 ) and O ( d2 ) , respectively . When N > d , this saves computational resources compared to the attention operator in Eq . ( 1 ) . In practice , we usually have N > d in some parts of a neural network , especially on high-order data . 2.2 ATTENTION OPERATORS ON HIGH-ORDER DATA . Non-local operators ( Wang et al. , 2018 ) are essentially self-attention operators on high-order data like images and videos . Take 2-D data as an example , the input to a non-local operator is an image , which can be represented as a third-order tensor X ∈ Rh×w×c . Here , h , w , and c denote the height , width , and number of channels , respectively . The non-local operator converts the tensor into a matrix X ( 3 ) ∈ Rc×hw by unfolding along mode-3 ( Kolda & Bader , 2009 ) . Then the matrix is fed into an attention operator by setting Q = K = V = X ( 3 ) . The output of the attention operator is converted back to a third-order tensor that is used as the final output . A challenging problem of non-local operators is the excessive usage of computational resources . If h = w , the time and space complexity of the non-local operator is O ( h4 × d ) and O ( h4 ) , respectively . The computational cost becomes even bigger on higher-order data like videos . The excessive usage of computational resources limits the application of attention operators in broader scenarios . 3 SIAMESE ATTENTION NETWORKS . In this work , we propose a learnable similarity function known as the Siamese similarity function . This function uses a single-layer feed-forward network to compute similarity scores . Based on Siamese similarity function , we propose the Siamese attention operator , which dramatically reduces computational cost . We also describe how to build Siamese attention networks using this operator . 3.1 SIAMESE SIMILARITY FUNCTION . 
We analyze the problem of learning a similarity function given two vectors . To learn a similarity function , we employ a single-layer feed-forward neural network . Given two vectors a ∈ Rd and b ∈ Rd , the similarity score is computed using a trainable vector w ∈ R2d as : simw ( a , b ) = [ aT , bT ] w = d∑ i=1 ai × wi + bi × wd+i = aTwa + bTwb , ( 3 ) where w = [ wTa , w T b ] T with wa ∈ Rd and wb ∈ Rd . Here , we ignore the bias term for notation simplicity . We consider the similarity function defined in Eq . ( 3 ) as two feed-forward networks that process two vectors separately . The similarity score is the sum of outputs from two different networks . Unlike distance metrics , the non-negativity or triangle inequality do not need to hold from similarity functions . But we usually expect similarity measures to be symmetric , which means it outputs the same similarity score when two input arguments are swapped . Apparently , the similarity function defined by Eq . ( 3 ) does not have this property . To retain the symmetry property , we employ the same network while using both vectors to compute the similarity score . This leads to our proposed Siamese similarity function ( Sia-sim ) , which follows the principle of Siamese networks ( Bromley et al. , 1994 ; Bertinetto et al. , 2016 ) . The Siamese similarity function computes the similarity score between a and b as : Sia-simw ( a , b ) = d∑ i=1 ( ai + bi ) × wi = ( a+ b ) Tw , = Sia-simw ( b , a ) ( 4 ) where w ∈ Rd is a trainable parameter vector . Although the time complexity of computing Sia-sim is the same as that of dot product , we show that Sia-sim leads to a very efficient attention operator in Section 3.2 . Figure 1 provides an illustration of the similarity functions defined in Eq . ( 3 ) and Eq . ( 4 ) . 3.2 SIAMESE ATTENTION OPERATOR . We describe the Siamese attention operator in the context of 1-D data , but it can be easily applied on high-order data by unfolding them into matrices . In this case , the inputs to an attention operator are Q ∈ Rd×N , K ∈ Rd×N , and V ∈ Rd×N . We replace the similarity function in the attention operator by our Siamese similarity function , leading to the Siamese attention operator ( SAO ) . Given a query vector qi in Q , SAO computes the response oi as : oi = 1 N N∑ j=1 vj ( qi + kj ) Tw = 1 N N∑ j=1 ( vjq T i w + vjk T j w ) = 1 N N∑ j=1 vj qTi w + 1N N∑ j=1 vjk T j w =vwTqi + 1 N V KTw , ( 5 ) where v = 1N ∑N j=1 V : j ∈ Rd . SAO computes responses of all query vectors as : O = vwTQ+ 1 N V KTw1TN , ( 6 ) where 1N is a vector of ones of size N . Note that we use 1N here to make it mathematically precise . In practice , the term 1NV K Tw is the same to all query vectors . This means we only need to compute it once and share it for the computation of all responses . By computing KTw first , the time complexity of this term is O ( N×d ) . Similarly , the time complexity for computing the first term in Eq . ( 6 ) is O ( N × d ) . Thus , the overall time complexity of SAO is O ( N × d ) . Notably , when Q = K , we can save the computational cost by computing either wTQ or KTw . Table 1 provides the comparison of SAO and other attention operators . It can be seen from the comparison results that our SAO can significantly save computational resources compared to other attention operators . In Eq . ( 5 ) , the first response term vwTqi changes as the query vector qi , which we call a local response term . 
The second term (1/N) V K^T w is the same for all query vectors, which is a global response term. The local response term provides customized information to query vectors, while the global response term may include global information for SAO. In the experimental study part, we demonstrate the importance of the global response term to SAO.
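A minimal sketch contrasting the softmax attention of Eq. (1), the 1/N-normalized attention of Eq. (2), and the Siamese attention operator of Eq. (6) is given below. Matrix shapes follow the convention that columns are the query/key/value vectors; the helper names are illustrative and not taken from the authors' code.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Eq. (1): O = V softmax(K^T Q); the N x N score matrix costs O(N^2 d)."""
    E = K.T @ Q                                    # (n, m) similarity scores
    E = np.exp(E - E.max(axis=0, keepdims=True))
    E /= E.sum(axis=0, keepdims=True)              # column-wise softmax
    return V @ E

def scaled_attention(Q, K, V):
    """Eq. (2): O = (1/N) (V K^T) Q; computing V K^T first costs O(N d^2)."""
    n = K.shape[1]
    return (V @ K.T) @ Q / n

def siamese_attention(Q, K, V, w):
    """Eq. (6): O = v w^T Q + (1/N) V K^T w 1_N^T, overall O(N d) time."""
    n = K.shape[1]
    v = V.mean(axis=1, keepdims=True)              # (d, 1) mean of value vectors
    local = v @ (w[None, :] @ Q)                   # (d, m) local response term
    glob = (V @ (K.T @ w)) / n                     # (d,)   shared global response term
    return local + glob[:, None]

d, n = 16, 64
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(d, n))                # self-attention: Q = K = V
w = rng.normal(size=d)
print(softmax_attention(Q, K, V).shape, siamese_attention(Q, K, V, w).shape)
```

Because the global response term is computed once and shared across all queries, the SAO path never forms an N x N or d x d intermediate matrix, which is where the stated O(N x d) cost comes from.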
In this paper, the authors propose a new mechanism for the attention operator. The similarity between a key and a query is computed as the dot product between a trainable weight vector and the sum of the key and query. The proposed Siamese attention operator is much more efficient than prior attention methods in terms of speed. The evaluation on a few computer vision tasks shows the presented method performs as well as typical attention methods, but runs much faster.
SP:b44bdec4ffc5f79048deedf805b2835067bca899
Siamese Attention Networks
1 INTRODUCTION . Deep learning networks with attention operators have demonstrated great capabilities of solving challenging problems in various tasks such as computer vision ( Xu et al. , 2015 ; Lu et al. , 2016 ) , natural language processing ( Bahdanau et al. , 2015 ; Vaswani et al. , 2017 ) , and network embedding ( Veličković et al. , 2017 ) . Attention operators are capable of capturing long-range relationships and brings significant performance boosts ( Li et al. , 2018 ; Malinowski et al. , 2018 ) . The application scenarios of attention operators range from 1-D data like texts to high-order and high-dimensional data such as images and videos . However , attention operators suffer from the excessive usage of computational resources when applied on high-order or high-dimensional data . The memory and computational cost increases dramatically with the increase of input orders and dimensions . This prevents attention operators from being applied in broader scenarios . To address this limitation , some studies focus on reducing spatial sizes of inputs such as down-sampling input data ( Wang et al. , 2018 ) or attending selected part of data ( Huang et al. , 2018 ) . However , such kind of methods inevitably results in information and performance loss . In this work , we propose a novel and efficient attention operator known as Siamese attention operator ( SAO ) to dramatically reduce the usage of computational resources . We observe that the excessive computational resource usage is mainly caused by the similarity function and coefficients normalization function used in attention operators . To address this limitation , we propose the Siamese similarity function that employs a feed-forward network to compute similarity scores . By applying the same network to both input vectors , Siamese similarity function processes the symmetry property . By using Siamese similarity function to compute similarity scores , we propose the Siamese attention operator , which results in a significant saving on computational resources . Based on the Siamese attention operator , we design a family of efficient modules , which leads to our compact deep models known as Siamese attention networks ( SANets ) . Our SANets significantly outperform other state-of-the-art compact models on image classification tasks . Experiments on image restoration tasks demonstrate that our methods are efficient and effective in general application scenarios . 2 BACKGROUND AND RELATED WORK . In this section , we describe the attention operator that has been widely applied in various tasks and on various types of data including texts , images , and videos . 2.1 ATTENTION OPERATOR . The inputs of an attention operator include three matrices ; those are a query matrix Q = [ q1 , q2 , · · · , qm ] ∈ Rd×m with each qi ∈ Rd , a key matrix K = [ k1 , k2 , · · · , kn ] ∈ Rd×n with each ki ∈ Rd , and a value matrix V = [ v1 , v2 , · · · , vn ] ∈ Rp×n with each vi ∈ Rp . To compute the response of each query vector qi , the attention operator calculates similarity scores between qi and each key vector kj using a similarity function . Frequently used similarity functions include dot product ( Luong et al. , 2015 ) , concatenation ( Bahdanau et al. , 2015 ) , Gaussian function , and embedded Gaussian function . It has been shown that dot product is the most effective one ( Wang et al. , 2018 ) , which computes sim ( qi , kj ) = kTj qi . 
After the normalization with a softmax operator , the response is computed by taking a weighted sum over value vectors ∑N j=1 vjsoftmax ( k T j qi ) . For all query vectors , the attention operator computes O = V softmax ( KTQ ) , ( 1 ) where softmax ( · ) is the column-wise softmax operator . The matrix multiplication between KT and Q computes a intermediate output matrix E that stores similarity scores between each query vector qi and each key vector kj . The column-wise softmax operator normalizes E and makes every column sum to 1 . Multiplication between V and the normalized E gives the output O ∈ Rp×m . Self-attention operator ( Vaswani et al. , 2017 ; Devlin et al. , 2018 ) is a special case of the attention operator with Q = K = V . In practice , we usually firstly perform linear transformations on input matrices . For notation simplicity , we use original input matrices in following discussions . The computational cost of the operations in Eq . ( 1 ) is O ( m× n× ( d+ p ) ) , and the memory required to store the intermediate output E is O ( m× n ) . If m = n and d = p , the time and space complexities of Eq . ( 1 ) are O ( n2 × d ) and O ( n2 ) , respectively . The matrix multiplication order in Eq . ( 1 ) is determined by the softmax operator , which acts as the normalization function . Wang et al . ( 2018 ) proposed to use scaling by 1/N as the normalization function on similarity scores . By this , the response of qi is calculated 1N ∑N j=1 vjk T j qi . The attention operator using scaling by 1/N computes all responses as : O = 1 N ( V KT ) Q . ( 2 ) By computing V KT first , the time and space complexities of Eq . ( 2 ) are O ( Nd2 ) and O ( d2 ) , respectively . When N > d , this saves computational resources compared to the attention operator in Eq . ( 1 ) . In practice , we usually have N > d in some parts of a neural network , especially on high-order data . 2.2 ATTENTION OPERATORS ON HIGH-ORDER DATA . Non-local operators ( Wang et al. , 2018 ) are essentially self-attention operators on high-order data like images and videos . Take 2-D data as an example , the input to a non-local operator is an image , which can be represented as a third-order tensor X ∈ Rh×w×c . Here , h , w , and c denote the height , width , and number of channels , respectively . The non-local operator converts the tensor into a matrix X ( 3 ) ∈ Rc×hw by unfolding along mode-3 ( Kolda & Bader , 2009 ) . Then the matrix is fed into an attention operator by setting Q = K = V = X ( 3 ) . The output of the attention operator is converted back to a third-order tensor that is used as the final output . A challenging problem of non-local operators is the excessive usage of computational resources . If h = w , the time and space complexity of the non-local operator is O ( h4 × d ) and O ( h4 ) , respectively . The computational cost becomes even bigger on higher-order data like videos . The excessive usage of computational resources limits the application of attention operators in broader scenarios . 3 SIAMESE ATTENTION NETWORKS . In this work , we propose a learnable similarity function known as the Siamese similarity function . This function uses a single-layer feed-forward network to compute similarity scores . Based on Siamese similarity function , we propose the Siamese attention operator , which dramatically reduces computational cost . We also describe how to build Siamese attention networks using this operator . 3.1 SIAMESE SIMILARITY FUNCTION . 
We analyze the problem of learning a similarity function given two vectors . To learn a similarity function , we employ a single-layer feed-forward neural network . Given two vectors a ∈ Rd and b ∈ Rd , the similarity score is computed using a trainable vector w ∈ R2d as : simw ( a , b ) = [ aT , bT ] w = d∑ i=1 ai × wi + bi × wd+i = aTwa + bTwb , ( 3 ) where w = [ wTa , w T b ] T with wa ∈ Rd and wb ∈ Rd . Here , we ignore the bias term for notation simplicity . We consider the similarity function defined in Eq . ( 3 ) as two feed-forward networks that process two vectors separately . The similarity score is the sum of outputs from two different networks . Unlike distance metrics , the non-negativity or triangle inequality do not need to hold from similarity functions . But we usually expect similarity measures to be symmetric , which means it outputs the same similarity score when two input arguments are swapped . Apparently , the similarity function defined by Eq . ( 3 ) does not have this property . To retain the symmetry property , we employ the same network while using both vectors to compute the similarity score . This leads to our proposed Siamese similarity function ( Sia-sim ) , which follows the principle of Siamese networks ( Bromley et al. , 1994 ; Bertinetto et al. , 2016 ) . The Siamese similarity function computes the similarity score between a and b as : Sia-simw ( a , b ) = d∑ i=1 ( ai + bi ) × wi = ( a+ b ) Tw , = Sia-simw ( b , a ) ( 4 ) where w ∈ Rd is a trainable parameter vector . Although the time complexity of computing Sia-sim is the same as that of dot product , we show that Sia-sim leads to a very efficient attention operator in Section 3.2 . Figure 1 provides an illustration of the similarity functions defined in Eq . ( 3 ) and Eq . ( 4 ) . 3.2 SIAMESE ATTENTION OPERATOR . We describe the Siamese attention operator in the context of 1-D data , but it can be easily applied on high-order data by unfolding them into matrices . In this case , the inputs to an attention operator are Q ∈ Rd×N , K ∈ Rd×N , and V ∈ Rd×N . We replace the similarity function in the attention operator by our Siamese similarity function , leading to the Siamese attention operator ( SAO ) . Given a query vector qi in Q , SAO computes the response oi as : oi = 1 N N∑ j=1 vj ( qi + kj ) Tw = 1 N N∑ j=1 ( vjq T i w + vjk T j w ) = 1 N N∑ j=1 vj qTi w + 1N N∑ j=1 vjk T j w =vwTqi + 1 N V KTw , ( 5 ) where v = 1N ∑N j=1 V : j ∈ Rd . SAO computes responses of all query vectors as : O = vwTQ+ 1 N V KTw1TN , ( 6 ) where 1N is a vector of ones of size N . Note that we use 1N here to make it mathematically precise . In practice , the term 1NV K Tw is the same to all query vectors . This means we only need to compute it once and share it for the computation of all responses . By computing KTw first , the time complexity of this term is O ( N×d ) . Similarly , the time complexity for computing the first term in Eq . ( 6 ) is O ( N × d ) . Thus , the overall time complexity of SAO is O ( N × d ) . Notably , when Q = K , we can save the computational cost by computing either wTQ or KTw . Table 1 provides the comparison of SAO and other attention operators . It can be seen from the comparison results that our SAO can significantly save computational resources compared to other attention operators . In Eq . ( 5 ) , the first response term vwTqi changes as the query vector qi , which we call a local response term . 
The second term (1/N) V K^T w is the same for all query vectors, which is a global response term. The local response term provides customized information to query vectors, while the global response term may include global information for SAO. In the experimental study part, we demonstrate the importance of the global response term to SAO.
The authors introduce a novel self-attention operator for neural networks. Their self-attention operator computes similarity between elements a and b as (a+b)^Tw where w is a learned parameter and does not use the softmax operator. This leads to improvements in space and time complexity compared to regular self-attention which uses the dot product (a^Tb).
SP:b44bdec4ffc5f79048deedf805b2835067bca899
Smart Ternary Quantization
1 INTRODUCTION . Deep Neural Networks ( DNN ) models have achieved tremendous attraction because of their success on a wide variety of tasks including computer vision , automatic speech recognition , natural language processing , and reinforcement learning ( Goodfellow et al. , 2016 ) . More specifically , in computer vision DNN have led to a series of breakthrough for image classification ( Krizhevsky et al. , 2017 ) , ( Simonyan & Zisserman , 2014 ) , ( Szegedy et al. , 2015 ) , and object detection ( Redmon et al. , 2015 ) , ( Liu et al. , 2015 ) , ( Ren et al. , 2015 ) . DNN models are computationally intensive and require large memory to store the model parameters . Computation and storage resource requirement becomes an impediment to deploy such models in many edge devices due to lack of memory , computation power , battery , etc . This motivated the researchers to develop compression techniques to reduce the cost for such models . Recently , several techniques have been introduced in the literature to solve the storage and computational limitations of edge devices . Among them , quantization methods focus on representing the weights of a neural network in lower precision than the usual 32-bits float representation , saving on the memory footprint of the model . Binary quantization ( Courbariaux et al. , 2015 ) , ( Hubara et al. , 2016 ) , ( Rastegari et al. , 2016 ) , ( Zhou et al. , 2016 ) , ( Lin et al. , 2017 ) represent weights with 1 bit precision and ternary quantization ( Lin et al. , 2015 ) , ( Li & Liu , 2016 ) , ( Zhu et al. , 2016 ) with 2 bits precision . While the latter frameworks lead to significant memory reduction compared to their full precision counterpart , they are constrained to quantize the model with 1 bit or 2 bits , on demand . We relax this constraint , and present Smart Ternary Quantization ( STQ ) that allows mixing 1 bit and 2 bits layers while training the network . Consequently , this approach automatically quantizes weights into binary or ternary depending upon a trainable control parameter . We show that this approach leads to mixed bit precision models that beats ternary networks both in terms of accuracy and memory consumption . Here we only focus on quantizing layers because it is easier to implement layer-wise quantization at inference time after training . However , this method can be adapted for mixed precision training of sub-network , block , filter , or weight easily . To the best of our knowledge this is the first attempt to design a single training algorithm for low-bit mixed precision training . 2 RELATED WORK . There are two main components in DNN ’ s , namely , weight and activation . These two components are usually computed in full precision , i.e . floating point 32-bits . This work focuses on quantizing the weights of the network , i.e . generalizing BinaryConnect ( BC ) of Courbariaux et al . ( 2015 ) and Ternary Weight Network ( TWN ) of Li & Liu ( 2016 ) towards automatic 1 or 2 bits mixed-precision using a single training algorithm . In BC the real value weights w are binarized to wb ∈ { −1 , +1 } during the forward pass . To map a full precision weight to a binary weight , the deterministic sign function is used , wb = sign ( w ) = { +1 w ≥ 0 , −1 w < 0 . ( 1 ) The derivative of the sign function is zero on R \ { 0 } . During back propagation , this cancels out the gradient of the loss with respect to the weights after the sign function . Therefore , those weights can not get updated . 
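The vanishing gradient of the sign function in Eq. (1) can be illustrated with a short autograd example. This is an illustrative sketch, not the authors' code; note that torch.sign maps 0 to 0 whereas Eq. (1) maps it to +1, which does not affect the point being made.

```python
import torch

w = torch.tensor([-0.7, 0.2, 1.3], requires_grad=True)
w_b = torch.sign(w)              # Eq. (1): binarize to {-1, +1} in the forward pass
loss = (w_b * torch.tensor([1.0, -2.0, 0.5])).sum()
loss.backward()
print(w.grad)                    # tensor([0., 0., 0.]) -- sign() blocks the gradient
```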
To bypass this problem , Courbariaux et al . ( 2015 ) use a clipped straight-through estimator ∂L ∂w = ∂L ∂wb 1|w|≤1 ( w ) ( 2 ) where L is the loss function and 1A ( . ) is the indicator function on the set A . In other words ( 2 ) approximates the sign function by the linear function f ( x ) = x within [ −1 , +1 ] and by a constant elsewhere . During back propagation , the weights are updated only within [ −1 , +1 ] . The binarized weights are updated with their corresponding full precision gradients . Rastegari et al . ( 2016 ) add a scaling factor to reduce the gap between binary and full-precision model ’ s accuracy , defining Binary Weight Network ( BWN ) . The real value weights W in each layer are quantized as µ × { −1 , +1 } where µ = E [ |W| ] ∈ R. Zhou et al . ( 2016 ) , generalize the latter work and approximates the full precision weights with more than one bit while Lin et al . ( 2017 ) approximate weights with a linear combination of multiple binary weight bases . 2.1 TERNARY WEIGHT NETWORKS . Ternary Weight Network ( TWN ) ( Li & Liu , 2016 ) is a neural network with weights constrained to { −1 , 0 , +1 } . The weight resolution is reduced from 32 bits to 2 bits , replacing full precision weights with ternary weights . TWN aims to fill the gap between full precision and binary precision weight . Compared to binary weight networks , ternary weight networks have stronger expressive capability . As pointed out in ( Li & Liu , 2016 ) , for a 3×3 weight filter in a convolutional neural network , there is 23×3 = 512 possible variation with binary precision and 33×3 = 19683 with ternary precision . Li & Liu ( 2016 ) find the closest ternary weights matrix Wt to its corresponding real value weight matrix W using { µ̂ , Ŵt = arg min α , Wt ‖W− µWt‖22 , s.t . µ ≥ 0 , wtij ∈ { −1 , 0 , 1 } , i , j = 1 , 2 , ... , n. ( 3 ) The ternary weight Wt is achieved by applying a symmetric threshold ∆ Wt = +1 wij > ∆ , 0 |wij | ≤ ∆ , −1 wij < −∆ . ( 4 ) Li & Liu ( 2016 ) define a weight dependant threshold ∆ and a scaling factor µ that approximately solves ( 3 ) . TWN is trained using stochastic gradient descent . Similar to BC and BWN schemes ; ternary-value weights are only used for the forward pass and back propagation , but not for the parameter updates . At inference , the scaling factor can be folded with the input X X W ≈ X ( µWt ) = ( µX ) Wt , ( 5 ) where denotes the convolution . Zhu et al . ( 2016 ) proposed a more general ternary method which reduces the precision of weights in neural network to ternary values . However , they quantize the weights to asymmetric values { −µ1 , 0 , +µ2 } using two full-precision scaling coefficients µ1 and µ2 for each layer of neural network . While the method achieve better accuracy as opposed to TWN , its hardware implementation becomes a challenge , because there are two unequal full precision scaling factors to deal with . Our method provides a compromise between BC and TWN and trains weights with a single trainable scaling factor µ . Weights jumps between ternary { −µ , 0 , +µ } and binary { −µ , +µ } . This provides a single algorithm for 1 or 2 bits mixed precision . 2.2 REGULARIZATION . Regularization term is the key to prevent over-fitting problem and to obtain robust generalization for unseen data . Standard regularization functions , such as L2 or L1 encourage weights to be concentrated about the origin . 
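As a concrete illustration of the quantization schemes reviewed above, the following minimal NumPy sketch implements the deterministic binarization of Eq. (1), the clipped straight-through gradient of Eq. (2), and the threshold-based ternary projection of Eq. (4). The specific choices of the threshold ∆ and the scaling factor µ below (∆ proportional to the mean weight magnitude, µ the mean magnitude of the surviving weights) are assumptions of this sketch, loosely following the approximate solution of (3) discussed by Li & Liu (2016), not exact prescriptions.

```python
import numpy as np

def binarize(w):
    """BinaryConnect forward quantization, Eq. (1): sign(w) in {-1, +1}."""
    return np.where(w >= 0.0, 1.0, -1.0)

def ste_grad(grad_wb, w):
    """Clipped straight-through estimator, Eq. (2): the gradient w.r.t. the
    binarized weight is passed through unchanged, but only where |w| <= 1."""
    return grad_wb * (np.abs(w) <= 1.0)

def ternarize(w, delta=None):
    """Threshold-based ternary projection, Eq. (4), with one scaling factor.

    The default delta and the choice of mu are heuristics assumed for this
    sketch: delta proportional to the mean magnitude of w, and mu the mean
    magnitude of the weights that survive the threshold."""
    if delta is None:
        delta = 0.7 * np.mean(np.abs(w))
    wt = np.zeros_like(w)
    wt[w > delta] = 1.0
    wt[w < -delta] = -1.0
    kept = np.abs(w) > delta
    mu = float(np.abs(w[kept]).mean()) if kept.any() else 0.0
    return mu, wt

# A 3x3 filter quantized both ways; the forward pass uses mu * wt, as in Eq. (5).
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 3))
print(binarize(w))
print(ternarize(w))
```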
However , in case of binary network with binary valued weights , it is more appropriate to have a regularization function to encourage the weights about µ × { −1 , +1 } , with a scaling factor µ > 0 such as R1 ( w , µ ) = ∣∣|w| − µ∣∣ , ( 6 ) proposed in Belbahri et al . ( 2019 ) . A straightforward generalization for ternary quantization is R2 ( w , µ ) = ∣∣∣∣∣∣|w| − µ2 ∣∣− µ2 ∣∣∣∣ . ( 7 ) Regularizer ( 6 ) encourages weights about { −µ , +µ } , and ( 7 ) about { −µ , 0 , +µ } . The two functions are depicted in Figure 1 . These regularization functions are only useful when the quantization depth is set before training start . We propose a more flexible version to smoothly move between these two functions using a shape parameter . 3 SMART TERNARY QUANTIZATION . Here we propose an adaptive regularization function that switches between binary regularization of ( 6 ) and ternary regularization of ( 7 ) R ( w , µ , β ) = min ( ∣∣|w| − µ∣∣ , tan ( β ) |w| ) , ( 8 ) in which µ is a trainable scaling factor , and β ∈ ( π4 , π 2 ) controls the transition between ( 6 ) and ( 7 ) . As a special case β → π2 converges to the binary regularizer ( 6 ) and β → π 4 coincide with the ternary regularizer ( 7 ) , depicted in Figure 2 . A large value of tan ( β ) repels estimated weights away from zero thus yielding binary quantization , and small tan ( β ) values encourage zero weights . The shape parameter β controls the quantization depth . Quantization is done per layer , therefore we let β very per layer . We recommend to regularize β about π2 i.e . preferring binary quantization apriori R ( w , µ , β ) = min ( ∣∣|w| − µ∣∣ , tan ( β ) |w| ) + γ| cot ( β ) | , ( 9 ) in which γ controls the proportion of binary to ternary layers . For a single filter W the regularization function is a sum over its elements R ( W , µ , β ) = I∑ i=1 J∑ j=1 min ( ∣∣|wij | − µ∣∣ , tan ( β ) |wij | ) + γ| cot ( β ) | . ( 10 ) Large values of γ encourage binary layers . In each layer , weights are pushed to binary or ternary values , depending on the trained value of the corresponding β . A generalization of ( 9 ) towards Lp norms of Belbahri et al . ( 2019 ) is also possible . However , here we only focus on regularizer constructed using the L1 norm as the accuracy did not change significantly by using Lp norm with different values of p. The introduced regularization function is added to the empirical loss function L ( . ) . The objective function defined on weights W , scaling factors µ , and quantization depths β is optimized using back propagation L ( W , µ , β ) = L ( W ) + L∑ l=1 λl Kl∑ k=1 R ( Wkl , µkl , βl ) , ( 11 ) where k indexes the channel , and l indexes the layer . One may use a different regularization constant λl for each layer to keep the impact of the regularization term balanced across layers , indeed different layers may involve different number of parameters . We set λl = λ # Wl where λ is a constant , and # Wl is the number of weights in layer l. We propose to use the same threshold-based function of Li & Liu ( 2016 ) ( 3 ) , but with a fixed threshold ∆l per layer l. Note that Li & Liu ( 2016 ) propose a weight-dependant threshold . We let the possibility for the weights to only accumulate about { −µ , +µ } and not about 0 , depending on β . One may set ∆l to have the same balanced weights in { −µ , 0 , +µ } at initialization for all layers and let the weights evolve during training . Formally , if σl is the standard deviation of the initial Gaussian weights in layer l , we propose ∆l = 0.2× σl . 
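The following sketch, assuming NumPy and illustrative variable names, evaluates the adaptive regularizer of Eqs. (8)-(10) for a single filter. It is only meant to make the two regimes visible: with β near π/2 the tan(β)|w| branch is almost never the minimum and the penalty reduces to the binary regularizer (6), while with β near π/4 small weights are pulled toward zero as in the ternary regularizer (7).

```python
import numpy as np

def stq_regularizer(W, mu, beta, gamma=0.1):
    """Eq. (10): per-weight minimum of the binary term ||w| - mu| and the
    zero-encouraging term tan(beta)*|w|, plus gamma*|cot(beta)|, which favours
    binary layers (beta near pi/2) a priori.  mu > 0 is the trainable scale,
    beta in (pi/4, pi/2) the per-layer quantization-depth parameter."""
    absw = np.abs(W)
    per_weight = np.minimum(np.abs(absw - mu), np.tan(beta) * absw)
    return float(np.sum(per_weight) + gamma * np.abs(1.0 / np.tan(beta)))

W = np.random.default_rng(1).normal(scale=0.05, size=(64, 3, 3))
# Near pi/2: behaves like the binary regularizer (6).
print(stq_regularizer(W, mu=0.05, beta=np.pi / 2 - 0.05))
# Near pi/4: behaves like the ternary regularizer (7).
print(stq_regularizer(W, mu=0.05, beta=np.pi / 4 + 0.05))
```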
With ∆l = 0.2 × σl, the probability that a single weight lies in the range [−∆l, ∆l] is ≈ 0.16, and all weights falling in this range are quantized to zero after applying the threshold function. Weights are naturally pushed to binary or ternary values depending on βl during training. Eventually, a threshold δ close to π/2 ≈ 1.57 defines the final quantization depth of layer l: Binary if βl ≥ δ, Ternary if βl < δ.
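A minimal sketch of the remaining two ingredients, under the conventions above and with an illustrative value for the cut-off δ: the fixed per-layer threshold ∆l = 0.2 × σl used for the forward-pass ternary projection, and the post-training rule that declares a layer binary when its trained βl ends up close to π/2.

```python
import numpy as np

def project_ternary(W, mu, sigma_init):
    """Forward-pass projection with the fixed per-layer threshold
    delta_l = 0.2 * sigma_l; about 16% of the initial Gaussian weights
    fall into the zero bin."""
    delta = 0.2 * sigma_init
    Wq = np.zeros_like(W)
    Wq[W > delta] = mu
    Wq[W < -delta] = -mu
    return Wq

def final_depth(beta, delta_cut=1.55):
    """Post-training decision for one layer; delta_cut is an illustrative
    threshold close to pi/2 ~ 1.57, not a value prescribed above."""
    return "binary" if beta >= delta_cut else "ternary"

sigma = 0.05
W = np.random.default_rng(2).normal(scale=sigma, size=(16, 16))
print(project_ternary(W, mu=0.04, sigma_init=sigma))
print(final_depth(1.56), final_depth(1.10))
```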
This paper studies mixed-precision quantization in deep networks where each layer can be either binarized or ternarized. The authors propose an adaptive regularization function that can be pushed toward either the binary or the ternary regime through different parameterization, in order to automatically determine the precision of each layer. Experiments are performed on the small-scale image classification data sets MNIST and CIFAR-10.
SP:faa869ec6fa32409248e46b957223595524e88df
Smart Ternary Quantization
The paper discusses a generalization of low-bit quantization that combines binary and ternary quantization methods. Past methods such as BinaryConnect and Binary Weight Networks have shown that a network can be trained efficiently with 1-bit quantization, and methods such as Ternary Weight Networks demonstrate 2-bit quantization with weights taking one of {-1, 0, 1} * mu, where mu is a scale computed per weight tensor. The authors generalize these two methods so that the choice of binary vs. ternary weights can be made per layer automatically during training. The primary contribution that makes this work is a generic regularizer with additional hyper-parameters to trade off between the binary weight regularizer and the ternary weight regularizer. In addition, the regularization includes a prior that makes layers prefer binary weights by default, implemented by adding a cost that penalizes the choice of ternary weights for each layer.
SP:faa869ec6fa32409248e46b957223595524e88df
Which Tasks Should Be Learned Together in Multi-task Learning?
1 INTRODUCTION . Many applications , especially robotics and autonomous vehicles , are chiefly interested in using multi-task learning to reduce the inference time and computational complexity required to estimate many characteristics of visual input . For example , an autonomous vehicle may need to detect the location of pedestrians , determine a per-pixel depth , and predict objects ’ trajectories , all within tens of milliseconds . In multi-task learning , multiple tasks are solved at the same time , typically with a single neural network . In addition to reduced inference time , solving a set of tasks jointly rather than independently can , in theory , have other benefits such as improved prediction accuracy , increased data efficiency , and reduced training time . Unfortunately , the quality of predictions are often observed to suffer when a network is tasked with making multiple predictions . This is because learning objectives can have complex and unknown dynamics and may compete . In fact , multi-task performance can suffer so much that smaller independent networks are often superior ( as we will see in the experiments section ) . We refer to any situation in which the competing priorities of the network cause poor task performance as crosstalk . On the other hand , when task objectives do not interfere much with each other , performance on both tasks can be maintained or even improved when jointly trained . Intuitively , this loss or gain of quality seems to depend on the relationship between the jointly trained tasks . Prior work has studied the relationship between tasks for transfer learning ( Zamir et al . ( 2018 ) ) . However , we find that transfer relationships are not highly predictive of multi-task relationships . In addition to studying multi-task relationships , we attempt to determine how to produce good prediction accuracy under a limited inference time budget by assigning competing tasks to separate networks and cooperating tasks to the same network . More concretely , this leads to the following problem : Given a set of tasks , T , and a computational budget b ( e.g. , maximum allowable inference time ) , what is the optimal way to assign tasks to networks with combined cost≤ b such that a combined measure of task performances is maximized ? To this end , we develop a computational framework for choosing the best tasks to group together in order to have a small number of separate deep neural networks that completely cover the task set and that maximize task performance under a given computational budget . We make the intriguing Encoder Shared Representation Decoder Decoder Semantic Segmentation Depth Estimation Encoder Shared Representation DecoderDecoder Edge Detection Keypoint Detection Half Sized Encoder Decoder Surface Normal Prediction A B C Input Image Discarded Decoder Discarded Surface Normal Prediction Discarded Decoder Discarded Surface Normal Prediction Figure 1 : Given five tasks to solve , there are many ways that they can be split into task groups for multitask learning . How do we find the best one ? 
We propose a computational framework that , for instance , suggests the following grouping to achieve the lowest total loss , using a computational budget of 2.5 units : train network A to solve Semantic Segmentation , Depth Estimation , and Surface Normal Prediction ; train network B to solve Keypoint Detection , Edge Detection , and Surface Normal Prediction ; train network C with a less computationally expensive encoder to solve Surface Normal Prediction alone ; including Surface Normals as an output in the first two networks were found advantageous for improving the other outputs , while the best Normals were predicted by the third network . This task grouping outperforms all other feasible ones , including learning all five tasks in one large network or using five dedicated smaller networks . observation that the inclusion of an additional task in a network can potentially improve the accuracy of the other tasks , even though the performance of the added task might be poor . This can be viewed as regularizing or guiding the loss of one task by adding an additional loss , as often employed in curriculum learning or network regularization Bengio et al . ( 2009 ) . Achieving this , of course , depends on picking the proper regularizing task – our system can take advantage of this phenomenon , as schematically shown in Figure 1 . This paper has two main contributions . In Section 3 , we outline a framework for systematically assigning tasks to networks in order to achieve the best total prediction accuracy with a limited inference-time budget . We then analyze the resulting accuracy and show that selecting the best assignment of tasks to groups is critical for good performance . Secondly , in Section 6 , we analyze situations in which multi-task learning helps and when it doesn ’ t , quantify the compatibilities of various task combinations for multi-task learning , compare them to the transfer learning task affinities , and discuss the implications . Moreover , we analyze the factors that influence multi-task affinities . 2 PRIOR WORK . Multi-Task Learning : See Ruder ( 2017 ) for a good overview of multi-task learning . The authors identify two clusters of contemporary techniques that we believe cover the space well , hard parameter sharing and soft parameter sharing . In brief , the primary difference between the majority of the existing works and our study is that we wish to understand the relationships between tasks and find compatible groupings of tasks for any given set of tasks , rather than designing a neural network architecture to solve a particular fixed set of tasks well . A known contemporary example of hard parameter sharing in computer vision is UberNet ( Kokkinos ( 2017 ) ) . The authors tackle 7 computer vision problems using hard parameter sharing . The authors focus on reducing the computational cost of training for hard parameter sharing , but experience a rapid degradation in performance as more tasks are added to the network . Hard parameter sharing is also used in many other works such as ( Thrun ( 1996 ) ; Caruana ( 1997 ) ; Nekrasov et al . ( 2018 ) ; Dvornik et al . ( 2017 ) ; Kendall et al . ( 2018 ) ; Bilen & Vedaldi ( 2016 ) ; Pentina & Lampert ( 2017 ) ; Doersch & Zisserman ( 2017 ) ; Zamir et al . ( 2016 ) ; Long et al . ( 2017 ) ; Mercier et al . ( 2018 ) ; d. Miranda et al . ( 2012 ) ; Zhou et al . ( 2018 ) ; Rudd et al . ( 2016 ) ) . Other works , such as ( Sener & Koltun ( 2018 ) ) and ( Chen et al . 
( 2018b ) ) , aim to dynamically reweight each task ’ s loss during training . The former work finds weights that provably lead to a Pareto-optimal solution , while the latter attempts to find weights that balance the influence of each task on network weights . Finally , ( Bingel & Søgaard ( 2017 ) ) studies task interaction for NLP . In soft or partial parameter sharing , either there is a separate set of parameters per task , or a significant fraction of the parameters are unshared . The models are tied together either by information sharing or by requiring parameters to be similar . Examples include ( Dai et al . ( 2016 ) ; Duong et al . ( 2015 ) ; Misra et al . ( 2016 ) ; Tessler et al . ( 2017 ) ; Yang & Hospedales ( 2017 ) ; Lu et al . ( 2017 ) ) . The canonical example of soft parameter sharing can be seen in ( Duong et al . ( 2015 ) ) . The authors are interested in designing a deep dependency parser for languages such as Irish that do not have much treebank data available . They tie the weights of two networks together by adding an L2 distance penalty between corresponding weights and show substantial improvement . Another example of soft parameter sharing is Cross-stitch Networks ( Misra et al . ( 2016 ) ) . Starting with separate networks for two tasks , the authors add ‘ cross-stitch units ’ between them , which allow each network to peek at the other network ’ s hidden layers . This approach reduces but does not eliminate task interfearence , and the overall performance is less sensitive to the relative loss weights . Unlike our method , none of the aforementioned works attempt to discover good groups of tasks to train together . Also , soft parameter sharing does not reduce inference time , a major goal of ours . Transfer Learning : Transfer learning ( Pratt ( 1993 ) ; Helleputte & Dupont ( 2009 ) ; Silver & Bennett ( 2008 ) ; Finn et al . ( 2016 ) ; Mihalkova et al . ( 2007 ) ; Niculescu-Mizil & Caruana ( 2007 ) ; Luo et al . ( 2017 ) ; Razavian et al . ( 2014 ) ; Pan & Yang ( 2010 ) ; Mallya & Lazebnik ( 2018 ) ; Fernando et al . ( 2017 ) ; Rusu et al . ( 2016 ) ) is similar to multi-task learning in that solutions are learned for multiple tasks . Unlike multi-task learning , however , transfer learning methods often assume that a model for a source task is given and then adapt that model to a target task . Transfer learning methods generally neither seek any benefit for source tasks nor a reduction in inference time as their main objective . Neural Architecture Search ( NAS ) : Many recent works search the space of deep learning architectures to find ones that perform well ( Zoph & Le , 2017 ; Liu et al. , 2018 ; Pham et al. , 2018 ; Xie et al. , 2019 ; Elsken et al. , 2019 ; Zhou et al. , 2019 ; Baker et al. , 2017 ; Real et al. , 2018 ) . This is related to our work as we search the space of task groupings . Just as with NAS , the found task groupings often perform better than human-engineered ones . Task Relationships : Our work is most related to Taskonomy ( Zamir et al . ( 2018 ) ) , where the authors studied the relationships between visual tasks for transfer learning and introduced a dataset with over 4 million images and corresponding labels for 26 tasks . This was followed by a number of recent works , which further analyzed task relationships ( Pal & Balasubramanian ( 2019 ) ; Dwivedi & Roig . ( 2019 ) ; Achille et al . ( 2019 ) ; Wang et al . ( 2019 ) ) for transfer learning . 
While they extract relationships between these tasks for transfer learning, we are interested in the multi-task learning setting. Interestingly, we find notable differences between transfer task affinity and multi-task affinity. Their method also differs in that they are interested in labeled-data efficiency and not inference-time efficiency. Finally, the transfer quantification approach taken by Taskonomy (readout functions) is only capable of finding relationships between the high-level bottleneck representations developed for each task, whereas structural similarities between tasks at all levels are potentially relevant for multi-task learning. 3 TASK GROUPING FRAMEWORK. Our goal is to find an assignment of tasks to networks that results in the best overall loss. Our strategy is to select from a large set of candidate networks to include in our final solution. We define the problem as follows: we want to minimize the overall loss on a set of tasks T = {t_1, t_2, ..., t_k} given a limited inference-time budget b, which is the total amount of time we have to complete all tasks. Each neural network that solves some subset of T and that could potentially be a part of the final solution is denoted by n. It has an associated inference-time cost c_n and a loss for each task, L(n, t_i) (which is ∞ for each task the network does not attempt to solve). A solution S is a set of networks that together solve all tasks. The computational cost of a solution is cost(S) = Σ_{n∈S} c_n. The loss of a solution on a task, L(S, t_i), is the lowest loss on that task among the solution's networks¹, L(S, t_i) = min_{n∈S} L(n, t_i). The overall performance of a solution is L(S) = Σ_{t_i∈T} L(S, t_i). We want to find the solution with the lowest overall loss and a cost that is under our budget, S_b = argmin_{S : cost(S) ≤ b} L(S). ¹In principle, it may be possible to create an even better-performing ensemble when multiple networks solve the same task, though we do not explore this.
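To make the selection problem concrete, here is a minimal brute-force sketch in Python with made-up candidate networks, costs, and per-task losses: it enumerates subsets of candidates within the budget and returns the one with the lowest overall loss. This is only a literal rendering of the definitions above, not the search strategy actually used in the paper.

```python
from itertools import combinations

def best_solution(candidates, tasks, budget):
    """Brute-force S_b = argmin_{cost(S) <= b} L(S).

    candidates: list of (cost, {task: loss}) pairs, one per trained network;
    a network's loss counts as infinity for tasks it does not solve."""
    best, best_loss = None, float("inf")
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            if sum(cost for cost, _ in subset) > budget:
                continue
            # L(S, t) = min over networks in S; L(S) = sum over tasks.
            total = sum(
                min(losses.get(t, float("inf")) for _, losses in subset)
                for t in tasks
            )
            if total < best_loss:
                best, best_loss = subset, total
    return best, best_loss

# Illustrative candidates (inference cost, per-task losses), loosely following Figure 1.
candidates = [
    (1.0, {"segmentation": 0.30, "depth": 0.40, "normals": 0.50}),
    (1.0, {"keypoints": 0.25, "edges": 0.20, "normals": 0.45}),
    (0.5, {"normals": 0.35}),
]
tasks = ["segmentation", "depth", "keypoints", "edges", "normals"]
print(best_solution(candidates, tasks, budget=2.5))
```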
This paper focuses on how to partition a set of tasks into several groups and then uses multi-task learning within each group to improve performance. The paper observes that multi-task relationships are not strongly correlated with transfer relationships and proposes a computational framework to optimize the assignment of tasks to networks under a given computational budget constraint. It experiments on different combinations of tasks and uses two heuristics to reduce the training overhead: an early-stopping approximation and a higher-order approximation.
SP:61e4186bf0f3ce2e595196285f5f19e45d67a0d8
Which Tasks Should Be Learned Together in Multi-task Learning?
This paper works on the problem of training a set of networks to solve a set of tasks. The authors try to discover an optimal split of tasks across the networks so that test performance is maximized given a fixed inference-time budget. By default, this requires searching over the entire space of task combinations and is too slow, so the authors propose two strategies for quickly approximating the exhaustive search. Experiments show that their searched combinations give better performance in the fixed-budget testing setting than several alternatives.
SP:61e4186bf0f3ce2e595196285f5f19e45d67a0d8
Finite Depth and Width Corrections to the Neural Tangent Kernel
1 INTRODUCTION . Modern neural networks are typically overparameterized : they have many more parameters than the size of the datasets on which they are trained . That some setting of parameters in such networks can interpolate the data is therefore not surprising . But it is a priori unexpected that not only can such interpolating parameter values can be found by stochastic gradient descent ( SGD ) on the highly non-convex empirical risk but also that the resulting network function generalizes to unseen data . In an overparameterized neural network N ( x ) the individual parameters can be difficult to interpret , and one way to understand training is to rewrite the SGD updates ∆θp = − λ ∂L ∂θp , p = 1 , . . . , P of trainable parameters θ = { θp } Pp=1 with a loss L and learning rate λ as kernel gradient descent updates for the values N ( x ) of the function computed by the network : ∆N ( x ) = − λ 〈KN ( x , · ) , ∇L ( · ) 〉 = − λ |B| |B|∑ j=1 KN ( x , xj ) ∂L ∂N ( xj , yj ) . ( 1 ) Here B = { ( x1 , y1 ) , . . . , ( x|B| , y|B| ) } is the current batch , the inner product is the empirical ` 2 inner product over B , and KN is the neural tangent kernel ( NTK ) : KN ( x , x ′ ) = P∑ p=1 ∂N ∂θp ( x ) ∂N ∂θp ( x′ ) . Relation ( 1 ) is valid to first order in λ . It translates between two ways of thinking about the difficulty of neural network optimization : ( i ) The parameter space view where the loss L , a complicated function of θ ∈ R # parameters , is minimized using gradient descent with respect to a simple ( Euclidean ) metric ; ( ii ) The function space view where the loss L , which is a simple function of the network mapping x 7→ N ( x ) , is minimized over the manifold MN of all functions representable by the architecture of N using gradient descent with respect to a potentially complicated Riemannian metric KN on MN . A remarkable observation of Jacot et al . ( 2018 ) is thatKN simplifies dramatically when the network depth d is fixed and its width n tends to infinity . In this setting , by the universal approximation theorem ( Cybenko , 1989 ; Hornik et al. , 1989 ) , the manifold MN fills out any ( reasonable ) ambient linear space of functions . The results in Jacot et al . ( 2018 ) then show that the kernelKN in this limit is frozen throughout training to the infinite width limit of its average E [ KN ] at initialization , which depends on the depth and non-linearity of N but not on the dataset . This mapping between parameter space SGD and kernel gradient descent for a fixed kernel can be viewed as two separate statements . First , at initialization , the distribution of KN converges in the infinite width limit to the delta function on the infinite width limit of its mean E [ KN ] . Second , the infinite width limit of SGD dynamics in function space is kernel gradient descent for this limiting mean kernel for any fixed number of SGD iterations . As long as the loss L is well-behaved with respect to the network outputs N ( x ) and E [ KN ] is non-degenerate in the subspace of function space given by values on inputs from the dataset , SGD for infinitely wide networks will converge with probability 1 to a minimum of the loss . Further , kernel method-based theorems show that even in this infinitely overparameterized regime neural networks will have non-vacuous guarantees on generalization ( Wei et al. , 2018 ) . However , as ( Wei et al. 
, 2018 ) shows , the regularized neural networks at finite width can have better sample complexity the corresponding infinite width kernel method . But replacing neural network training by gradient descent for a fixed kernel in function space is also not completely satisfactory for several reasons . First , it suggests that no feature learning occurs during training for infinitely wide networks in the sense that the kernel E [ KN ] ( and hence its associated feature map ) is data-independent . In fact , empirically , networks with finite but large width trained with initially large learning rates often outperform NTK predictions at infinite width ( Arora et al. , 2019 ) . One interpretation is that , at finite width , KN evolves through training , learning datadependent features not captured by the infinite width limit of its mean at initialization . In part for such reasons , it is important to study both empirically and theoretically finite width corrections to KN . Another interpretation is that the specific NTK scaling of weights at initialization ( Chizat & Bach , 2018b ; a ; Mei et al. , 2019 ; 2018 ; Rotskoff & Vanden-Eijnden , 2018a ; b ) and the implicit small learning rate limit ( Li et al. , 2019 ) obscure important aspects of SGD dynamics . Second , even in the infinite width limit , although KN is deterministic , it has no simple analytical formula for deep networks , since it is defined via a layer by layer recursion . In particular , the exact dependence , even in the infinite width limit , of KN on network depth is not well understood . Moreover , the joint statistical effects of depth and width on KN in finite size networks remain unclear , and the purpose of this article is to shed light on the simultaneous effects of depth and width on KN for finite but large widths n and any depth d. Our results apply to fully connected ReLU networks at initialization for which our main contributions are : 1 . In contrast to the regime in which the depth d is fixed but the width n is large , KN is not approximately deterministic at initialization so long as d/n is bounded away from 0 . Specifically , for a fixed input x the normalized on-diagonal second moment ofKN satisfies E [ KN ( x , x ) 2 ] E [ KN ( x , x ) ] 2 ' exp ( 5d/n ) ( 1 +O ( d/n2 ) ) . Thus , when d/n is bounded away from 0 , even when both n , d are large , the standard deviation of KN ( x , x ) is at least as large as its mean , showing that its distribution at initialization is not close to a delta function . See Theorem 1 . 2 . Moreover , when L is the square loss , the average of the SGD update ∆KN ( x , x ) to KN ( x , x ) from a batch of size one containing x satisfies E [ ∆KN ( x , x ) ] E [ KN ( x , x ) ] ' d 2 nn0 exp ( 5d/n ) ( 1 +O ( d/n2 ) ) , where n0 is the input dimension . Therefore , if d2/nn0 > 0 , the NTK will have the potential to evolve in a data-dependent way . Moreover , if n0 is comparable to n and d/n > 0 then it is possible that this evolution will have a well-defined expansion in d/n . See Theorem 2 . In both statements above , ' means is bounded above and below by universal constants . We emphasize that our results hold at finite d , n and the implicit constants in both ' and in the error terms O ( d/n2 ) are independent of d , n.Moreover , our precise results , stated in §2 below , hold for networks with variable layer widths . We have denoted network width by n only for the sake of exposition . 
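The on-diagonal NTK and its fluctuations over random initializations can be estimated directly with automatic differentiation. The sketch below, using PyTorch, a He-style forward scaling, and no bias terms (so it matches the spirit but not every detail of the parameterization defined next), estimates the normalized second moment E[K_N(x,x)²] / E[K_N(x,x)]² by Monte Carlo; the deeper-and-narrower configuration should give a noticeably larger value than the shallower-and-wider one, consistent with the exp(5d/n) behaviour above, although the estimate is noisy when d/n is large.

```python
import torch

def ntk_diag(widths, x):
    """K_N(x, x) = sum_p (dN/dtheta_p(x))^2 for a fully connected ReLU net
    with He-style forward scaling and a linear output layer (biases omitted
    in this sketch)."""
    params, h, n_prev = [], x, x.numel()
    for i, n in enumerate(widths):
        W = torch.randn(n, n_prev, requires_grad=True)
        params.append(W)
        scale = (1.0 / n_prev) ** 0.5 if i == len(widths) - 1 else (2.0 / n_prev) ** 0.5
        h = scale * (W @ h)
        if i < len(widths) - 1:
            h = torch.relu(h)
        n_prev = n
    grads = torch.autograd.grad(h.squeeze(), params)
    return sum((g ** 2).sum().item() for g in grads)

def normalized_second_moment(widths, x, trials=300):
    ks = torch.tensor([ntk_diag(widths, x) for _ in range(trials)])
    return ((ks ** 2).mean() / ks.mean() ** 2).item()

x = torch.randn(16)
print(normalized_second_moment([64] * 4 + [1], x))  # shallower and wider: smaller ratio
print(normalized_second_moment([8] * 8 + [1], x))   # deeper and narrower: larger ratio
```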
The appropriate generalization of d/n to networks with varying layer widths is the parameter β : = d∑ i=1 1 nj , which in light of the estimates in ( 1 ) and ( 2 ) plays the role of an inverse temperature . 1.1 PRIOR WORK . A number of articles ( Bietti & Mairal , 2019 ; Dyer & Gur-Ari , 2019 ; Lee et al. , 2019 ; Yang , 2019 ) have followed up on the original NTK work Jacot et al . ( 2018 ) . Related in spirit to our results is the work Dyer & Gur-Ari ( 2019 ) , which uses Feynman diagrams to study finite width corrections to general correlations functions ( and in particular the NTK ) . The most complete results obtained by Dyer & Gur-Ari ( 2019 ) are for deep linear networks but a number of estimates hold general non-linear networks as well . The results there , like in essentially all previous work , fix the depth d and let the layer widths n tend to infinity . In contrast , our results ( as well as those of Hanin ( 2018 ) ; Hanin & Nica ( 2018 ) ; Hanin & Rolnick ( 2018 ) ) , do not treat d as a constant , suggesting that the 1/n expansions ( e.g . in Dyer & Gur-Ari ( 2019 ) ) can be promoted to d/n expansions . Also , the sum-over-path approach to studying correlation functions in randomly initialized ReLU nets was previously taken up for the forward pass by Hanin & Rolnick ( 2018 ) and for the backward pass by Hanin ( 2018 ) and Hanin & Nica ( 2018 ) . We also point the reader to Theorems 3.1 and 3.2 in Arora et al . ( 2019 ) , which provide quantitative rates of convergence for both the neural tangent kernel and the resulting full optimization trajectory of neural networks at large but finite width ( and fixed depth ) . 2 FORMAL STATEMENT OF RESULTS . Consider a ReLU network N with input dimension n0 , hidden layer widths n1 , . . . , nd−1 , and output dimension nd = 1 . We will assume that the output layer of N is linear and initialize the biases in N to zero . Therefore , for any input x ∈ Rn0 , the network N computes N ( x ) = x ( d ) given by x ( 0 ) = x , y ( i ) : = Ŵ ( i ) x ( i−1 ) , x ( i ) : = ReLU ( y ( i ) ) , i = 1 , . . . , d , ( 2 ) where for i = 1 , . . . , d− 1 Ŵ ( d ) : = ( 1/ni−1 ) −1/2W ( i ) , Ŵ ( i ) : = ( 2/ni−1 ) −1/2W ( i ) , W ( i ) α , β ∼ µ i.i.d. , ( 3 ) and µ is a fixed probability measure on R that we assume has a density with respect to Lebesgue measure and satisfies : µ is symmetric around 0 , Var [ µ ] = 1 , ∫ ∞ −∞ x4dµ ( x ) = µ4 < ∞ . ( 4 ) The three assumptions in ( 4 ) hold for virtually all standard network initialization schemes with the exception of orthogonal weight initialization . But believe our results extend hold also for this case but not do take up this issue . The on-diagonal NTK is KN ( x , x ) : = d∑ j=1 nj−1∑ α=1 nj∑ β=1 ( ∂N ∂W ( j ) α , β ( x ) ) 2 + d∑ j=1 nj∑ β=1 ( ∂N ∂b ( j ) β ( x ) ) 2 , ( 5 ) and we emphasize that although we have initialized the biases to zero , they are not removed from the list of trainable parameters . Our first result is the following : Theorem 1 ( Mean and Variance of NKT on Diagonal at Init ) . We have E [ KN ( x , x ) ] = d ( 1 2 + ‖x‖22 n0 ) . Moreover , we have that E [ KN ( x , x ) 2 ] is bounded above and below by universal constants times exp ( 5β ) d2 ‖x‖42n20 + d ‖x‖ 2 2 n0 d∑ j=1 e −5 ∑j i=1 1 ni + d∑ i , j=1 i≤j e −5 ∑j i=1 1 ni , β = d∑ i=1 1 ni times a multiplicative error ( 1 +O ( ∑d i=1 1 n2i ) ) . In particular , if all the hidden layer widths are equal ( i.e . ni = n , for i = 1 , . . . 
, d− 1 ) , we have E [ KN ( x , x ) 2 ] E [ KN ( x , x ) ] 2 ' exp ( 5β ) ( 1 +O ( β/n ) ) , β = d/n , where f ' g means f is bounded above and below by universal constants times g. This result shows that in the deep and wide double scaling limit ni , d→∞ , 0 < lim ni , d→∞ d∑ i=1 1 ni < ∞ , the NTK does not converge to a constant in probability . This is contrast to the wide and shallow regime where ni →∞ and d < ∞ is fixed . Our next result shows that when L is the square loss KN ( x , x ) is not frozen during training . To state it , fix an input x ∈ Rn0 to N and define ∆KN ( x , x ) to be the update from one step of SGD with a batch of size 1 containing x ( and learning rate λ ) . Theorem 2 ( Mean of Time Derivative of NTK on Diagonal at Init ) . We have that E [ λ−1∆KN ( x , x ) ] is bounded above and below by universal constants times‖x‖42n20 d∑ i1 , i2=1 ii < i2 i2−1∑ ` =i1 e −5/n ` −6 ∑ ` i=i1 1 ni n ` + ‖x‖22 n0 d∑ ii , i2=1 i1 < i2 e −5 ∑i1 i=1 1 ni i2−1∑ ` =i1 e −6 ∑ ` −1 i=i1+1 1 ni n ` exp ( 5β ) times a multiplicative error of size ( 1 +O ( ∑d i=1 1 n2i ) ) , where β = ∑d i=1 1/ni , as in Theorem 1 . In particular , if all the hidden layer widths are equal ( i.e . ni = n , for i = 1 , . . . , d− 1 ) , we find E [ ∆KN ( x , x ) ] E [ KN ( x , x ) ] ' dβ n0 exp ( 5β ) ( 1 +O ( β/n ) ) , β = d/n . Observe that when d is fixed and ni = n → ∞ , the pre-factor in front of exp ( 5β ) scales like 1/n . This is in keeping with the results from Dyer & Gur-Ari ( 2019 ) and Jacot et al . ( 2018 ) . Moreover , it shows that if d , n , n0 grow in any way so that dβ/n0 = d2/nn0 → 0 , the update ∆KN ( x , x ) to KN ( x , x ) from the batch { x } at initialization will have mean 0 . It is unclear whether this will be true also for larger batches and when the arguments of KN are not equal . In contrast , if ni ' n and β = d/n is bounded away from 0 , ∞ , and the n0 is proportional to d , the average update E [ ∆KN ( x , x ) ] has the same order of magnitude as E [ KN ( x ) ] .
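As a complementary illustration of the second point, the following sketch, assuming standard PyTorch modules with He-normal weights and zero biases (again only the spirit, not the exact layer-wise scaling, of the setup above), measures the relative change of K_N(x,x) after a single SGD step on the squared loss at one input. It is meant to show qualitatively that the empirical kernel moves more at larger depth-to-width ratios, not to reproduce the constants of Theorem 2.

```python
import torch
import torch.nn as nn

def ntk_diag(model, x):
    """Empirical on-diagonal NTK: sum of squared parameter gradients of the
    scalar network output at x."""
    out = model(x).squeeze()
    grads = torch.autograd.grad(out, list(model.parameters()))
    return sum((g ** 2).sum().item() for g in grads)

def relative_ntk_change(depth, width, n0=16, lr=0.1, seeds=20):
    changes = []
    for s in range(seeds):
        torch.manual_seed(s)
        layers, n_in = [], n0
        for _ in range(depth - 1):
            layers += [nn.Linear(n_in, width), nn.ReLU()]
            n_in = width
        layers.append(nn.Linear(n_in, 1))
        model = nn.Sequential(*layers)
        for m in model.modules():          # He-normal weights, zero biases
            if isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight)
                nn.init.zeros_(m.bias)
        x, y = torch.randn(n0), torch.tensor(0.0)
        k0 = ntk_diag(model, x)
        loss = 0.5 * (model(x).squeeze() - y) ** 2   # one SGD step on one point
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in model.parameters():
                p -= lr * p.grad
        changes.append(abs(ntk_diag(model, x) - k0) / k0)
    return sum(changes) / len(changes)

print(relative_ntk_change(depth=4, width=64))   # small d/n: kernel nearly frozen
print(relative_ntk_change(depth=16, width=16))  # larger d/n: kernel moves more
```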
This paper studies the finite depth and width corrections to the neural tangent kernel (NTK) in fully-connected ReLU networks. It gives sharp upper and lower bounds on the variance of NTK(x, x), which reveal an exponential dependence on the quantity β = d/n, where d is the depth and n is the hidden width. This implies that when β is bounded away from 0, NTK(x, x) is not deterministic at initialization. The paper further analyzes the change of NTK(x, x) after one step of SGD on a single datapoint x, and shows that the change also depends exponentially on β.
SP:fd4bc8557b3fd87ae1682252a55de0940854a2e8
Finite Depth and Width Corrections to the Neural Tangent Kernel
1 INTRODUCTION . Modern neural networks are typically overparameterized: they have many more parameters than the size of the datasets on which they are trained. That some setting of parameters in such networks can interpolate the data is therefore not surprising. But it is a priori unexpected that not only can such interpolating parameter values be found by stochastic gradient descent (SGD) on the highly non-convex empirical risk, but also that the resulting network function generalizes to unseen data. In an overparameterized neural network N(x) the individual parameters can be difficult to interpret, and one way to understand training is to rewrite the SGD updates
$$\Delta \theta_p = -\lambda \frac{\partial L}{\partial \theta_p}, \qquad p = 1, \ldots, P,$$
of the trainable parameters $\theta = \{\theta_p\}_{p=1}^{P}$ with a loss L and learning rate $\lambda$ as kernel gradient descent updates for the values N(x) of the function computed by the network:
$$\Delta N(x) = -\lambda \, \langle K_N(x, \cdot), \nabla L(\cdot) \rangle = -\frac{\lambda}{|B|} \sum_{j=1}^{|B|} K_N(x, x_j) \, \frac{\partial L}{\partial N}(x_j, y_j). \quad (1)$$
Here $B = \{(x_1, y_1), \ldots, (x_{|B|}, y_{|B|})\}$ is the current batch, the inner product is the empirical $\ell^2$ inner product over B, and $K_N$ is the neural tangent kernel (NTK):
$$K_N(x, x') = \sum_{p=1}^{P} \frac{\partial N}{\partial \theta_p}(x) \, \frac{\partial N}{\partial \theta_p}(x').$$
Relation (1) is valid to first order in $\lambda$. It translates between two ways of thinking about the difficulty of neural network optimization: (i) the parameter space view, where the loss L, a complicated function of $\theta \in \mathbb{R}^{\#\text{parameters}}$, is minimized using gradient descent with respect to a simple (Euclidean) metric; (ii) the function space view, where the loss L, which is a simple function of the network mapping $x \mapsto N(x)$, is minimized over the manifold $\mathcal{M}_N$ of all functions representable by the architecture of N, using gradient descent with respect to a potentially complicated Riemannian metric $K_N$ on $\mathcal{M}_N$. A remarkable observation of Jacot et al. (2018) is that $K_N$ simplifies dramatically when the network depth d is fixed and its width n tends to infinity. In this setting, by the universal approximation theorem (Cybenko, 1989; Hornik et al., 1989), the manifold $\mathcal{M}_N$ fills out any (reasonable) ambient linear space of functions. The results in Jacot et al. (2018) then show that the kernel $K_N$ in this limit is frozen throughout training to the infinite width limit of its average $\mathbb{E}[K_N]$ at initialization, which depends on the depth and non-linearity of N but not on the dataset. This mapping between parameter space SGD and kernel gradient descent for a fixed kernel can be viewed as two separate statements. First, at initialization, the distribution of $K_N$ converges in the infinite width limit to the delta function on the infinite width limit of its mean $\mathbb{E}[K_N]$. Second, the infinite width limit of SGD dynamics in function space is kernel gradient descent for this limiting mean kernel, for any fixed number of SGD iterations. As long as the loss L is well-behaved with respect to the network outputs N(x) and $\mathbb{E}[K_N]$ is non-degenerate in the subspace of function space given by values on inputs from the dataset, SGD for infinitely wide networks will converge with probability 1 to a minimum of the loss. Further, kernel method-based theorems show that even in this infinitely overparameterized regime neural networks will have non-vacuous guarantees on generalization (Wei et al., 2018). However, as Wei et al. (2018) show, regularized neural networks at finite width can have better sample complexity than the corresponding infinite width kernel method.
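To make the function-space picture in (1) concrete, here is a minimal numerical sketch (not from the paper; the network sizes, batch, and learning rate below are illustrative assumptions). It computes the NTK values $K_N(x, x_j)$ with autograd and checks that one parameter-space SGD step changes N(x) by approximately the kernel-weighted sum on the right-hand side of (1), with agreement holding only to first order in $\lambda$.

```python
# Minimal sketch (not from the paper): empirically checking the first-order
# relation (1) between a parameter-space SGD step and the NTK-weighted
# function-space update, for a small fully-connected ReLU net in PyTorch.
import torch

torch.manual_seed(0)
d0, width, depth = 4, 64, 3    # illustrative sizes (assumptions)
lam = 1e-3                     # small learning rate so O(lam^2) terms are negligible

layers = []
for i in range(depth):
    layers += [torch.nn.Linear(d0 if i == 0 else width, width), torch.nn.ReLU()]
layers += [torch.nn.Linear(width, 1)]
net = torch.nn.Sequential(*layers)
params = [p for p in net.parameters() if p.requires_grad]

def grad_vec(x):
    """Flattened gradient of the scalar output N(x) w.r.t. all parameters."""
    out = net(x).squeeze()
    gs = torch.autograd.grad(out, params)
    return torch.cat([g.reshape(-1) for g in gs])

# A batch B and a probe point x at which we track N(x).
X = torch.randn(8, d0)
y = torch.randn(8)
x_probe = torch.randn(1, d0)

# NTK values K_N(x_probe, x_j) = <grad N(x_probe), grad N(x_j)>.
g_probe = grad_vec(x_probe)
K = torch.stack([g_probe @ grad_vec(X[j:j + 1]) for j in range(len(X))])

# dL/dN for the per-sample square loss 0.5 * (N(x_j) - y_j)^2.
with torch.no_grad():
    resid = net(X).squeeze() - y

# Predicted change of N(x_probe) from relation (1).
pred_delta = -(lam / len(X)) * (K @ resid)

# Actual change after one SGD step on the batch.
opt = torch.optim.SGD(net.parameters(), lr=lam)
loss = 0.5 * ((net(X).squeeze() - y) ** 2).mean()
opt.zero_grad()
loss.backward()
before = net(x_probe).item()
opt.step()
after = net(x_probe).item()

print("predicted delta N(x):", pred_delta.item(), " actual delta N(x):", after - before)
```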
But replacing neural network training by gradient descent for a fixed kernel in function space is also not completely satisfactory, for several reasons. First, it suggests that no feature learning occurs during training for infinitely wide networks, in the sense that the kernel $\mathbb{E}[K_N]$ (and hence its associated feature map) is data-independent. In fact, empirically, networks with finite but large width trained with initially large learning rates often outperform NTK predictions at infinite width (Arora et al., 2019). One interpretation is that, at finite width, $K_N$ evolves through training, learning data-dependent features not captured by the infinite width limit of its mean at initialization. In part for such reasons, it is important to study, both empirically and theoretically, finite width corrections to $K_N$. Another interpretation is that the specific NTK scaling of weights at initialization (Chizat & Bach, 2018b;a; Mei et al., 2019; 2018; Rotskoff & Vanden-Eijnden, 2018a;b) and the implicit small learning rate limit (Li et al., 2019) obscure important aspects of SGD dynamics. Second, even in the infinite width limit, although $K_N$ is deterministic, it has no simple analytical formula for deep networks, since it is defined via a layer-by-layer recursion. In particular, the exact dependence, even in the infinite width limit, of $K_N$ on network depth is not well understood. Moreover, the joint statistical effects of depth and width on $K_N$ in finite size networks remain unclear, and the purpose of this article is to shed light on the simultaneous effects of depth and width on $K_N$ for finite but large widths n and any depth d. Our results apply to fully connected ReLU networks at initialization, for which our main contributions are: 1. In contrast to the regime in which the depth d is fixed but the width n is large, $K_N$ is not approximately deterministic at initialization so long as d/n is bounded away from 0. Specifically, for a fixed input x the normalized on-diagonal second moment of $K_N$ satisfies
$$\frac{\mathbb{E}[K_N(x, x)^2]}{\mathbb{E}[K_N(x, x)]^2} \asymp \exp(5d/n)\,\big(1 + O(d/n^2)\big).$$
Thus, when d/n is bounded away from 0, even when both n, d are large, the standard deviation of $K_N(x, x)$ is at least as large as its mean, showing that its distribution at initialization is not close to a delta function. See Theorem 1. 2. Moreover, when L is the square loss, the average of the SGD update $\Delta K_N(x, x)$ to $K_N(x, x)$ from a batch of size one containing x satisfies
$$\frac{\mathbb{E}[\Delta K_N(x, x)]}{\mathbb{E}[K_N(x, x)]} \asymp \frac{d^2}{n n_0} \exp(5d/n)\,\big(1 + O(d/n^2)\big),$$
where $n_0$ is the input dimension. Therefore, if $d^2/(n n_0) > 0$, the NTK will have the potential to evolve in a data-dependent way. Moreover, if $n_0$ is comparable to n and $d/n > 0$, then it is possible that this evolution will have a well-defined expansion in d/n. See Theorem 2. In both statements above, $\asymp$ means bounded above and below by universal constants. We emphasize that our results hold at finite d, n, and the implicit constants in both $\asymp$ and in the error terms $O(d/n^2)$ are independent of d, n. Moreover, our precise results, stated in §2 below, hold for networks with variable layer widths. We have denoted network width by n only for the sake of exposition.
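To give a feel for the size of this effect, here is a brief worked illustration (the depth-to-width ratios are hypothetical, chosen only to show orders of magnitude): since the normalized second moment is roughly $\exp(5d/n)$, the relative fluctuation of $K_N(x,x)$ is, to leading order, about $\sqrt{\exp(5d/n) - 1}$ times its mean.

```latex
% Illustrative numbers (not from the paper). For equal hidden widths n,
%   E[K_N(x,x)^2] / E[K_N(x,x)]^2  ~  exp(5 d / n),
% so  std[K_N(x,x)] / E[K_N(x,x)]  ~  sqrt(exp(5 d / n) - 1)  to leading order.
\[
\begin{array}{c|c|c}
d/n & e^{5d/n} & \sqrt{e^{5d/n} - 1} \\ \hline
0.01 & \approx 1.05 & \approx 0.23 \\
0.1  & \approx 1.65 & \approx 0.81 \\
1    & \approx 148  & \approx 12.1 \\
\end{array}
\]
```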
The paper investigates a novel infinite width limit in which the depth is taken to infinity at the same time. This goes beyond conventional theoretical studies of infinite width networks, where the depth is kept finite while the width is taken to infinity. The main object the paper studies is the neural tangent kernel, which is of great interest to the theoretical deep learning community as it describes gradient descent dynamics in a tractable way.
SP:fd4bc8557b3fd87ae1682252a55de0940854a2e8
Composition-based Multi-Relational Graph Convolutional Networks
1 INTRODUCTION . Graphs are one of the most expressive data structures and have been used to model a variety of problems. Traditional neural network architectures like Convolutional Neural Networks (Krizhevsky et al., 2012) and Recurrent Neural Networks (Hochreiter & Schmidhuber, 1997) are constrained to handle only Euclidean data. Recently, Graph Convolutional Networks (GCNs) (Bruna et al., 2013; Defferrard et al., 2016) have been proposed to address this shortcoming, and have been successfully applied to several domains such as social networks (Hamilton et al., 2017), knowledge graphs (Schlichtkrull et al., 2017; Shang et al., 2019), natural language processing (Marcheggiani & Titov, 2017; Vashishth et al., 2018a;b; 2019), drug discovery (Ramsundar et al., 2019), crystal property prediction (Sanyal et al., 2018), and natural sciences (Fout et al., 2017). However, most of the existing research on GCNs (Kipf & Welling, 2016; Hamilton et al., 2017; Veličković et al., 2018) has focused on learning representations of nodes in simple undirected graphs. A more general and pervasive class of graphs are multi-relational graphs (in this paper, multi-relational graphs refer to graphs with edges that have labels and directions). A notable example of such graphs is the knowledge graph. Most of the existing GCN-based approaches for handling relational graphs (Marcheggiani & Titov, 2017; Schlichtkrull et al., 2017) suffer from over-parameterization and are limited to learning only node representations. Hence, such methods are not directly applicable to tasks such as link prediction, which require relation embedding vectors. Initial attempts at learning representations for relations in graphs (Monti et al., 2018; Beck et al., 2018) have shown some performance gains on tasks like node classification and neural machine translation. There has been extensive research on embedding Knowledge Graphs (KG) (Nickel et al., 2016; Wang et al., 2017) where representations of both nodes and relations are jointly learned. These methods are restricted to learning embeddings using the link prediction objective. Even though GCNs can learn from task-specific objectives such as classification, their application has been largely restricted to the non-relational graph setting. Thus, there is a need for a framework which can utilize KG embedding techniques for learning task-specific node and relation embeddings. In this paper, we propose COMPGCN, a novel GCN framework for multi-relational graphs which systematically leverages entity-relation composition operations from knowledge graph embedding techniques. COMPGCN addresses the shortcomings of previously proposed GCN models by jointly learning vector representations for both nodes and relations in the graph. An overview of COMPGCN is presented in Figure 1. The contributions of our work can be summarized as follows: 1. We propose COMPGCN, a novel framework for incorporating multi-relational information in Graph Convolutional Networks which leverages a variety of composition operations from knowledge graph embedding techniques to jointly embed both nodes and relations in a graph. 2. We demonstrate that the COMPGCN framework generalizes several existing multi-relational GCN methods (Proposition 4.1) and also scales with the increase in the number of relations in the graph (Section 6.3). 3.
Through extensive experiments on tasks such as node classification, link prediction, and graph classification, we demonstrate the effectiveness of our proposed method. The source code of COMPGCN and the datasets used in the paper have been made available at http://github.com/malllabiisc/CompGCN. 2 RELATED WORK . Graph Convolutional Networks: GCNs generalize Convolutional Neural Networks (CNNs) to non-Euclidean data. GCNs were first introduced by Bruna et al. (2013) and later made scalable through efficient localized filters in the spectral domain (Defferrard et al., 2016). A first-order approximation of GCNs using Chebyshev polynomials has been proposed by Kipf & Welling (2016). Recently, several of its extensions have also been formulated (Hamilton et al., 2017; Veličković et al., 2018; Xu et al., 2019; Vashishth et al., 2019; Yadav et al., 2019). Most of the existing GCN methods follow the Message Passing Neural Networks (MPNN) framework (Gilmer et al., 2017) for node aggregation. Our proposed method can be seen as an instantiation of the MPNN framework. However, it is specialized for relational graphs. GCNs for Multi-Relational Graphs: An extension of GCNs for relational graphs is proposed by Marcheggiani & Titov (2017). However, they only consider direction-specific filters and ignore relations due to over-parameterization. Schlichtkrull et al. (2017) address this shortcoming by proposing basis and block-diagonal decompositions of relation-specific filters. Weighted Graph Convolutional Network (Shang et al., 2019) utilizes learnable relation-specific scalar weights during GCN aggregation. While these methods show performance gains on node classification and link prediction, they are limited to embedding only the nodes of the graph. Contemporary to our work, Ye et al. (2019) have also proposed an extension of GCNs for embedding both nodes and relations in multi-relational graphs. However, our proposed method is a more generic framework which can leverage any KG composition operator. We compare against their method in Section 6.1. Knowledge Graph Embedding: Knowledge graph (KG) embedding is a widely studied field (Nickel et al., 2016; Wang et al., 2017) with applications in tasks like link prediction and question answering (Bordes et al., 2014). Most KG embedding approaches define a score function and train node and relation embeddings such that valid triples are assigned a higher score than invalid ones. Based on the type of score function, KG embedding methods are classified as translational (Bordes et al., 2013; Wang et al., 2014b), semantic matching based (Yang et al., 2014; Nickel et al., 2016), and neural network based (Socher et al., 2013; Dettmers et al., 2018; Vashishth et al., 2019). In our work, we evaluate the performance of COMPGCN on link prediction with methods of all three types. 3 BACKGROUND . In this section, we give a brief overview of Graph Convolutional Networks (GCNs) for undirected graphs and their extension to directed relational graphs. GCN on Undirected Graphs: Given a graph $G = (\mathcal{V}, \mathcal{E}, \mathcal{X})$, where $\mathcal{V}$ denotes the set of vertices, $\mathcal{E}$ is the set of edges, and $\mathcal{X} \in \mathbb{R}^{|\mathcal{V}| \times d_0}$ represents the $d_0$-dimensional input features of each node, the node representation obtained from a single GCN layer is defined as $H = f(\hat{A} X W)$. Here, $\hat{A} = \tilde{D}^{-\frac{1}{2}} (A + I) \tilde{D}^{-\frac{1}{2}}$ is the normalized adjacency matrix with added self-connections, and $\tilde{D}$ is defined as $\tilde{D}_{ii} = \sum_j (A + I)_{ij}$.
The model parameter is denoted by $W \in \mathbb{R}^{d_0 \times d_1}$, and $f$ is some activation function. The GCN representation $H$ encodes the immediate neighborhood of each node in the graph. For capturing multi-hop dependencies in the graph, several GCN layers can be stacked, one on top of another, as follows: $H^{k+1} = f(\hat{A} H^k W^k)$, where $k$ denotes the layer number, $W^k \in \mathbb{R}^{d_k \times d_{k+1}}$ is a layer-specific parameter, and $H^0 = \mathcal{X}$. GCN on Multi-Relational Graphs: Consider a multi-relational graph $G = (\mathcal{V}, \mathcal{R}, \mathcal{E}, \mathcal{X})$, where $\mathcal{R}$ denotes the set of relations and each edge $(u, v, r)$ represents that the relation $r \in \mathcal{R}$ exists from node $u$ to node $v$. The GCN formulation as devised by Marcheggiani & Titov (2017) is based on the assumption that information in a directed edge flows along both directions. Hence, for each edge $(u, v, r) \in \mathcal{E}$, an inverse edge $(v, u, r^{-1})$ is included in $G$. The representations obtained after $k$ layers of directed GCN are given by
$$H^{k+1} = f(\hat{A} H^k W_r^k). \quad (1)$$
Here, $W_r^k$ denotes the relation-specific parameters of the model. However, the above formulation leads to over-parameterization with an increase in the number of relations; hence, Marcheggiani & Titov (2017) use direction-specific weight matrices. Schlichtkrull et al. (2017) address over-parameterization by proposing basis and block-diagonal decompositions of $W_r^k$. 4 COMPGCN DETAILS . In this section, we provide a detailed description of our proposed method, COMPGCN. The overall architecture is shown in Figure 1. We represent a multi-relational graph by $G = (\mathcal{V}, \mathcal{R}, \mathcal{E}, \mathcal{X}, \mathcal{Z})$ as defined in Section 3, where $\mathcal{Z} \in \mathbb{R}^{|\mathcal{R}| \times d_0}$ denotes the initial relation features. Our model is motivated by the first-order approximation of GCNs using Chebyshev polynomials (Kipf & Welling, 2016). Following Marcheggiani & Titov (2017), we also allow the information in a directed edge to flow along both directions. Hence, we extend $\mathcal{E}$ and $\mathcal{R}$ with the corresponding inverse edges and relations, i.e., $\mathcal{E}' = \mathcal{E} \cup \{(v, u, r^{-1}) \mid (u, v, r) \in \mathcal{E}\} \cup \{(u, u, \top) \mid u \in \mathcal{V}\}$, and $\mathcal{R}' = \mathcal{R} \cup \mathcal{R}_{inv} \cup \{\top\}$, where $\mathcal{R}_{inv} = \{r^{-1} \mid r \in \mathcal{R}\}$ denotes the inverse relations and $\top$ indicates the self loop. 4.1 RELATION-BASED COMPOSITION . Unlike most of the existing methods, which embed only the nodes in the graph, COMPGCN learns a $d$-dimensional representation $h_r \in \mathbb{R}^d, \forall r \in \mathcal{R}$, along with node embeddings $h_v \in \mathbb{R}^d, \forall v \in \mathcal{V}$. Representing relations as vectors alleviates the problem of over-parameterization while applying GCNs on relational graphs. Further, it allows COMPGCN to exploit any available relation features ($\mathcal{Z}$) as initial representations. To incorporate relation embeddings into the GCN formulation, we leverage the entity-relation composition operations used in Knowledge Graph embedding approaches (Bordes et al., 2013; Nickel et al., 2016), which are of the form $e_o = \phi(e_s, e_r)$. Here, $\phi : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ is a composition operator; $s$, $r$, and $o$ denote the subject, relation, and object in the knowledge graph; and $e_{(\cdot)} \in \mathbb{R}^d$ denotes their corresponding embeddings. In this paper, we restrict ourselves to non-parameterized operations like subtraction (Bordes et al., 2013), multiplication (Yang et al., 2014), and circular-correlation (Nickel et al., 2016). However, COMPGCN can be extended to parameterized operations like Neural Tensor Networks (NTN) (Socher et al., 2013) and ConvE (Dettmers et al., 2018). We defer their analysis to future work.
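To make the composition step concrete, here is a small sketch (not the paper's exact layer: the shared weight matrix, mean aggregation, and toy graph below are illustrative assumptions, and COMPGCN itself uses direction-specific weights and additional relation transformations) of the three non-parameterized operators $\phi$ and of how a composed message might be aggregated in a relational GCN layer:

```python
# Minimal sketch (assumptions noted in the text above): the three non-parameterized
# composition operators phi(e_s, e_r), plus a toy message-passing step that composes
# neighbor and relation embeddings before mixing them with a weight matrix.
import torch

def comp_sub(e_s, e_r):          # subtraction, as in TransE
    return e_s - e_r

def comp_mult(e_s, e_r):         # element-wise multiplication, as in DistMult
    return e_s * e_r

def comp_corr(e_s, e_r):         # circular correlation, as in HolE, computed via FFT
    return torch.fft.irfft(torch.conj(torch.fft.rfft(e_s)) * torch.fft.rfft(e_r),
                           n=e_s.shape[-1])

def toy_relational_layer(h_v, h_r, edges, W, phi=comp_sub):
    """One illustrative message-passing step on a multi-relational graph.

    h_v: (num_nodes, d) node embeddings, h_r: (num_rels, d) relation embeddings,
    edges: list of (u, v, r) triples, W: (d, d) shared weight (an assumption --
    COMPGCN uses separate weights per edge direction).
    """
    out = torch.zeros_like(h_v)
    deg = torch.zeros(h_v.shape[0])
    for u, v, r in edges:
        out[v] += phi(h_v[u], h_r[r]) @ W    # compose neighbor with relation, then transform
        deg[v] += 1
    deg = deg.clamp(min=1).unsqueeze(-1)
    return torch.relu(out / deg)             # mean-aggregate + nonlinearity

# Tiny usage example with random embeddings (all shapes are illustrative).
d_emb = 8
h_v = torch.randn(4, d_emb)      # 4 nodes
h_r = torch.randn(2, d_emb)      # 2 relations
edges = [(0, 1, 0), (2, 1, 1), (3, 0, 0)]
W = torch.randn(d_emb, d_emb) / d_emb ** 0.5
h_v_next = toy_relational_layer(h_v, h_r, edges, W, phi=comp_corr)
print(h_v_next.shape)            # torch.Size([4, 8])
```

Swapping `phi=comp_corr` for `comp_sub` or `comp_mult` changes only the composition step, which is the design knob discussed next.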
As we show in Section 6, the choice of composition operation is important in deciding the quality of the learned embeddings. Hence, superior composition operations for Knowledge Graphs developed in the future can be adopted to improve COMPGCN's performance further.
In this paper, the authors developed a GCN for multi-relational graphs and proposed CompGCN. In comparison with existing multi-relational GCNs, CompGCN leverages insights from knowledge graph embedding and learns representations of both nodes and relations, with the aim of alleviating the problem of over-parameterization. Moreover, to improve the scalability w.r.t. the number of relations, the initial relation representations are expressed as a linear combination of a fixed number of basis vectors. In contrast to existing works, the basis vectors are only defined for initialization and not for every GCN layer. The authors also compared the proposed CompGCN with other existing GCN variants and summarized the relationships between CompGCN and other models. In the experiments, three tasks, including link prediction, node classification, and graph classification, were performed to evaluate the performance of the proposed method. By comparing with existing methods, the effectiveness of the proposed method was demonstrated.
SP:82afe0f6d661432c3124eb14e9a83699e251143d
Composition-based Multi-Relational Graph Convolutional Networks
This paper proposes a graph convolutional network based model for joint embedding of nodes and relations in a multi-relational graph. The framework comprises node/relation embeddings, a non-parametric compositional operation as in knowledge graph embedding, and finally a convolution operation with direction-specific weight matrices. The performance is evaluated on link prediction and node/graph classification tasks.
SP:82afe0f6d661432c3124eb14e9a83699e251143d