forum_id | sections
---|---
Sk-oDY9ge | [{"section_index": "0", "section_name": "DIET NETWORKS: THIN PARAMETERS FOR FAT GENOMICS", "section_text": "Adriana Romero*, Pierre Luc Carrier, Akram Erraqabi, Tristan Sylvain, Alex Auvolat, Etienne Dejoie
firstName.lastName@umontreal.ca, except adriana.romero.soriano@umontreal.ca and pierre-luc.carrier@umontreal.ca
marc-andre.legault.1@umontreal.ca, marie-pierre.dube@umontreal.ca"}, {"section_index": "1", "section_name": "Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford, UK", "section_text": ""}, {"section_index": "3", "section_name": "Yoshua Bengio", "section_text": "Montreal Institute for Learning Algorithms, Montreal, Quebec, Canada
yoshua.umontreal@gmail.com"}, {"section_index": "4", "section_name": "ABSTRACT", "section_text": "Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity, not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally, on a population stratification task of interest to medical studies, that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Medical datasets often involve a dire imbalance between the number of training examples and the number of input features, especially when genomic information is used as input to the trained predictor. This is problematic in the context where we want to apply deep learning (which typically involves large models) to precision medicine, i.e., making patient-specific predictions using a potentially large set of input features to better characterize the patient.
This paper proposes a novel approach, called Diet Networks, to reparametrize neural networks so as to considerably reduce their number of free parameters when the input is very high-dimensional and orders of magnitude larger than the number of training examples.

Genomics is the study of the genetic code encapsulated as DNA in all living organisms' cells. Genomes contain the instructions to produce and regulate all the functional components needed to guide the development and adaptation of living organisms. In the last decades, advances in genomic technologies resulted in an explosion of available data, making it more interesting to apply advanced machine learning techniques such as deep learning. Learning tasks involving genomic data that have already been tackled by deep learning include: using Convolutional Neural Networks (CNNs) to learn the functional activity of DNA sequences (Basset package, Kelley et al. (2016)), predicting effects of noncoding DNA (DeepSEA, Zhou & Troyanskaya (2015)), investigating the regulatory role of RNA-binding proteins in alternative splicing (Alipanahi et al., 2015), inferring gene expression patterns (Chen et al., 2016; Singh et al., 2016) and population genetic parameters (Sheehan & Song, 2016), among others (see Leung et al. (2016) for a detailed example). Noticeably, most of these techniques are based on sequence data, where convolutional or recurrent networks are appropriate. When the full DNA sequence is unavailable, such as when data is acquired through genotyping, other methods need to be used. All this work shows that deep learning can be used to tackle genomics-related tasks, paving the road towards a better understanding of the biological impact of DNA variation.

Applying deep learning to human genetic variation holds the promise of identifying individuals at risk for medical conditions. Modern genotyping technologies usually target millions of simple variants across the genome, called single nucleotide polymorphisms (SNPs). These genetic mutations result from substitutions of one nucleotide for another (e.g. A to C), where both versions exist within a population. In modern studies, as many as 5 million SNPs can be acquired for every participant. These datasets differ from other types of genomic data because they focus on the genetic differences between individuals, which represent a space of high dimensionality where sequence-context information is unavailable. In medical genetics, these variants are tested for their association with a trait of interest, an approach termed genome-wide association study (GWAS). This methodology aims at finding genetic variants implicated in disease susceptibility, etiology and treatment.

An important confounding factor in GWAS is population stratification, which arises because both disease prevalence and genetic profiles vary from one population to the other. Although most GWAS have been restricted to homogeneous populations, dimensionality reduction techniques are generally used to account for population-level genetic differences (Price et al., 2006). Our experiments compare such dimensionality reduction techniques (based on principal component analysis, PCA) to the proposed Diet Network parametrization, as well as to standard deep networks.

Recently, several machine learning methods have been successfully applied to detect population stratification, based on the presence of systematic differences in genetic variation between populations.
For instance, Support Vector Machine (SVM) models have been used multiple times to infer recent genetic ancestry of sub-continental populations (Haasl et al. (2013)) and local ancestry in admixed populations (SupportMix, Omberg et al. (2012); 23andMe, Inc.). However, SVM methods are very sensitive to the kernel choice and its parameters. They also tend to overfit the model selection criterion, which usually limits their predictive power.

In this work, we are interested in predicting the genetic ancestry of an individual from their SNP data using a novel deep learning approach, Diet Networks, which allows us to considerably reduce the number of free parameters. We propose to tackle this problem by introducing a multi-task architecture in which the problem of predicting the appropriate parameters for each input feature is considered a task in itself, and the same parameter prediction network is used for all of the hundreds of thousands of input features. This parameter prediction network learns to predict these feature-specific parameters as a function of a distributed representation of the feature identity, or feature embedding. The feature embedding can be learned as part of end-to-end training, using other datasets, or from a priori knowledge about the features. What is important is that two features which are similar in some appropriate sense (in terms of their interactions with other features or other variables observed in any dataset) end up having similar embeddings, and thus a similar parameter vector as output of the parameter prediction network. A practical advantage of this approach is that the parameter prediction network can generalize to new features for which there is no labeled training data (without the target to be predicted by the classifier), so long as it is possible to derive an embedding for that feature (for example, using just the unlabeled observations of co-occurrences of that feature with other features in human genomes).

An interesting consideration is that, from the point of view of the parameter prediction network, each feature is an example: more features now allow us to better train the parameter prediction network. It is as if we considered not the data matrix itself but its transpose. This is actually how the Diet Network implementation processes the data, by using the transpose of the matrix of input values as the input part of the learning task for the parameter prediction network.

The idea of having two networks interacting with each other, with one producing the parameters of the other, is well rooted in the machine learning literature (Bengio et al., 1991; Schmidhuber, 1992; Gomez & Schmidhuber, 2005; Stanley et al., 2009; Denil et al., 2013; Andrychowicz et al., 2016). Recent efforts in the same direction include works such as (Bertinetto et al., 2016; Brabandere et al., 2016; Ha et al., 2016) that use a network to predict the parameters of a Convolutional Neural Network (CNN). Brabandere et al. (2016) introduce a dynamic filter module that generates network filters conditioned on an input. Bertinetto et al. (2016) propose to learn the parameters of a deep model in one shot, by training a second network to predict the parameters of the first from a single exemplar. Hypernetworks (Ha et al., 2016) explore the idea of using a small network to predict the parameters of another network, training them in an end-to-end fashion.
The small network takes as input the feature embedding from the previous layer and learns the parameters of the current layer.

To the best of our knowledge, deep learning has never been used so far to tackle the problem of ancestry prediction based on SNP data. Compared to other approaches that attempt to learn model parameters using a parameter prediction network, our main goal is to reduce the large number of parameters required by the model, by considering the input features themselves as sub-tasks in a multi-task view of the learning problem, as opposed to constructing a model with even higher capacity, as seen, e.g., in (Ha et al., 2016). Our approach is thus based on building an embedding of these tasks (the features) in order to further reduce the number of parameters.

We evaluate our method on a publicly available dataset for ancestry prediction, the 1000 Genomes dataset,¹ which best represents the human population diversity. Because population-specific differences in disease and drug response are widespread, identifying an individual's ancestral heritage based on SNP data is a very important task to help detect biological causation and achieve good predictive performance in precision medicine. Most importantly, ancestry-aware approaches in precision genomics will reduce the hidden risks of genetic testing, by preventing spurious diagnosis and ineffective treatment.

¹http://www.internationalgenome.org

"}, {"section_index": "5", "section_name": "2 METHOD", "section_text": "In this section, we describe the Diet Networks as well as the feature embeddings used by the model."}, {"section_index": "6", "section_name": "2.1 MODEL", "section_text": "Our model aims at reducing the number of free parameters that a network trained on fat data would typically have.

Let $X \in \mathbb{R}^{N \times N_d}$ be a matrix of data, with $N$ samples and $N_d$ features, where $N \ll N_d$ (e.g., $N$ being approximately 100 times smaller than $N_d$). We build a multi-layer perceptron (MLP), which takes $X$ as input, computes a hidden representation and outputs a prediction $\hat{Y}$. Optionally, the MLP may generate a reconstruction $\hat{X}$ of the input data from the hidden representation. Figure 1(a) illustrates this basic network architecture. Let $x_i$ be one data sample, i.e., a row of $X$. The standard formulations to compute its hidden representation $h_i$, output prediction $\hat{y}_i$ and reconstruction $\hat{x}_i$ are given by

$h_i = f(x_i), \quad \hat{y}_i = g(h_i), \quad \hat{x}_i = r(h_i),$

where $f$, $g$ and $r$ are non-linear functions.

Figure 1: Our model is composed of 3 networks, one basic and two auxiliary networks: (a) a basic discriminative network with an optional reconstruction path (dashed arrow), (b) a network that predicts the input fat layer parameters, and (c) a network that predicts the reconstruction fat layer parameters (if any). The first layer in the "prediction networks" (b, c) represents the embedding (Emb.). Each MLP block may contain any number of hidden layers. $W_e$ and $W_d$ represent the parameters of the fat hidden layer and the fat reconstruction layer of the basic network (a), respectively. These parameters are predicted by the auxiliary networks (b) and (c) - also called parameter prediction networks - to reduce the number of free parameters of (a).

The number of parameters of the first hidden layer of the architecture grows linearly with the dimensionality of the input data:

$h_i^{(1)} = f_1(x_i W_e + b_e),$

where $W_e$ and $b_e$ are the layer's parameters. Using fat data such as the one described in Section 1 leads to a parameter explosion in this layer, hereafter referred to as the fat hidden layer. To give the reader an intuition, consider an input with $N_d$ = 300K and a hidden layer with $N_h$ = 100; the number of parameters of such a layer would be 30M. The same happens to the number of parameters of the optional reconstruction layer, hereafter referred to as the fat reconstruction layer.

In order to mitigate this effect, we introduce an auxiliary network to predict the fat layers' parameters. The auxiliary network takes as input the transposed data matrix $X^T$, extracts a feature embedding and learns a function of this embedding, to be used as the parameters of a fat layer:

$(W_e)_{j\cdot} = \phi(e_j),$

where $e_j$ represents the embedding of a feature in $X^T$, $\phi$ is a non-linear function and $(W_e)_{j\cdot}$ is the $j$-th row of $W_e$. This means that each feature is associated with the vector of values it takes in the dataset (e.g. across the patients). Other representations could be used, e.g., derived from other datasets in which those features interact.
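To make this parametrization concrete, here is a minimal sketch (our own illustration, not the authors' released Theano/Lasagne code; the names `phi` and `E` and the toy sizes are ours): a single-layer auxiliary network maps each feature embedding $e_j$ to that feature's row of $W_e$, so the fat layer's weights are predicted rather than stored.

```python
import torch
import torch.nn as nn

# Our illustration of the Diet Networks parametrization (toy dimensions;
# the paper's setting is N_d ~ 300K SNPs).
N_d, N_f, N_h = 30_000, 500, 100

E = torch.randn(N_d, N_f)          # precomputed feature embeddings, one row per SNP
phi = nn.Linear(N_f, N_h)          # auxiliary (parameter prediction) network

def fat_hidden(x):
    """x: (batch, N_d) genotypes -> (batch, N_h) hidden representation h^(1)."""
    W_e = phi(E)                   # (N_d, N_h): row j is phi(e_j)
    return torch.tanh(x @ W_e)     # free parameters: N_f x N_h instead of N_d x N_h

h = fat_hidden(torch.randn(8, N_d))
```

Only `phi`'s $N_f \times N_h + N_h$ parameters are trained for this layer; $W_e$ itself is recomputed from the embeddings on every forward pass.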
Figure 1(b) shows a prediction network, an auxiliary network that predicts the parameters of the fat hidden layer of our basic network. In the same spirit, Figure 1(c) highlights the interaction between a second prediction network, which predicts the fat reconstruction layer parameters, and the basic network. The architectures of both auxiliary networks may share the initial feature embedding.

The feature embeddings used in the auxiliary networks allow us to substantially reduce the number of free parameters of the fat layers of the basic architecture. The auxiliary network should predict a matrix of weights of size $N_d \times N_h$ from a feature embedding. Consider a feature embedding that would transform each $N$-dimensional feature into an $N_f$-dimensional vector, where $N_f \ll N_d$. The auxiliary network would learn a function $\phi: \mathbb{R}^{N_f} \rightarrow \mathbb{R}^{N_h}$. Thus, the fat hidden layer of our basic architecture would have $N_f \times N_h$ free parameters (assuming a single-layer MLP in the auxiliary network), instead of $N_d \times N_h$. Following our previous example, where $N_d$ = 300K and $N_h$ = 100, using an auxiliary network with previously-obtained feature embeddings of dimensionality $N_f$ = 500 would reduce the number of free parameters of the basic network by a factor of 600 (from 30M to 50K).

The model is trained to minimize

$\mathcal{L} = H(Y, \hat{Y}) + \gamma \, \|X - \hat{X}\|^2,$

where $H$ refers to the cross-entropy, $Y$ to the true classification labels, and $\gamma$ is a tunable parameter to balance the supervised and the reconstruction losses.
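A hedged transcription of this objective (our PyTorch sketch; tensor names are ours):

```python
import torch.nn.functional as F

# Our transcription of the objective above: supervised cross-entropy plus a
# reconstruction term weighted by gamma (the experiments later use gamma = 0 or 10).
def diet_loss(logits, labels, x, x_hat, gamma=10.0):
    supervised = F.cross_entropy(logits, labels)            # H(Y, Y_hat)
    reconstruction = ((x - x_hat) ** 2).sum(dim=1).mean()   # ||X - X_hat||^2
    return supervised + gamma * reconstruction
```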
"}, {"section_index": "7", "section_name": "2.2 FEATURE EMBEDDINGS", "section_text": "The feature embeddings used by the auxiliary networks can be either pre-computed or learnt offline, or learnt jointly with the rest of the architecture. In theory, any kind of embedding could be used, as long as we keep in mind that the goal is to reduce the number of free parameters of the basic model. In this work, we considered random projections (Bingham & Mannila, 2001), histograms (which are akin to bag-of-words representations), feature embeddings learnt offline (Mikolov et al., 2013), and feature embeddings jointly learnt with the rest of the proposed architecture.

Random projection: Randomly initializing an MLP defines a random projection. By using such a projection to encode the high-dimensional feature space into a more manageable lower-dimensional space, we were able to obtain decent results.

Per class histogram: For a given SNP, we can define a histogram of the values it takes over the whole population. Once normalized, this yields 3 values per SNP, corresponding to the proportion of the population having the values 0, 1 and 2, respectively, for that SNP. After initial tests showed this was too coarse a representation for the dataset, we instead chose to consider the per-class proportions of the three values. With 26 classes in the 1000 Genomes dataset, this yields an embedding of size 78 for each feature (a sketch of this computation follows at the end of this section). By this method, the matrix $X^T$ is summarized as an $N_d \times 78$ matrix, where $N_d$ is the number of SNPs in the dataset.

SNPtoVec: In Mikolov et al. (2013), the authors propose a word embedding that allows good reconstruction of the words' context (surrounding words) by a neural network. SNPs do not have a similarly well-defined positional context (SNPs close together in our ordering might very well be independent), so our embedding is instead built by training a denoising autoencoder (DAE) (Vincent et al., 2008) on the matrix $X$. Thus, the DAE learns to recover the values of missing SNPs by leveraging their similarities and co-occurrences with other SNPs. Once the DAE is trained, we obtain an encoding for each feature by feeding the DAE an input where only that feature is active (the other features are set to 0) and computing the hidden representation of the autoencoder for that single-feature input.

Embedding learnt end-to-end from raw data: In this case, we consider the feature embedding to be another MLP, whose input corresponds to the values that a SNP takes for each of the training samples and whose parameters are learnt jointly with the rest of the network. Note that the layer(s) corresponding to the feature embedding are shared among the auxiliary networks. For the experiments reported in Section 4, we used a single hidden layer as embedding.
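As forward-referenced above, here is our sketch of the per-class histogram embedding (toy data; the real computation runs over the 315,345 SNPs and 26 population labels):

```python
import numpy as np

# Our sketch of the per-class histogram embedding.
# X: (N, N_d) genotypes in {0, 1, 2}; y: (N,) population labels in {0, ..., 25}.
def per_class_histograms(X, y, n_classes=26):
    emb = np.zeros((X.shape[1], n_classes * 3))
    for c in range(n_classes):
        Xc = X[y == c]                                # samples from population c
        for v in range(3):                            # genotype values 0, 1, 2
            emb[:, 3 * c + v] = (Xc == v).mean(axis=0)
    return emb                                        # (N_d, 26 * 3 = 78)

y = np.repeat(np.arange(26), 20)                      # toy data: 20 samples per class
X = np.random.randint(0, 3, size=(y.size, 1000))
emb = per_class_histograms(X, y)                      # -> shape (1000, 78)
```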
"}, {"section_index": "8", "section_name": "DATA: THE 1000 GENOMES PROJECT", "section_text": "The 1000 Genomes project is the first project to sequence the genomes of a large number of people in populations worldwide, yielding the largest public catalog of human genetic variants to date (Consortium, 2015). This allowed large-scale comparison of DNA sequences from populations, thanks to the presence of genetic variation. Individuals of the 1000 Genomes project are samples taken from 26 populations over the world, which are grouped into 5 geographical regions. Figure 2(a) shows a histogram derived from the 1000 Genomes data, depicting the frequency of individuals per population (ethnicity). Analogously, Figure 2(b) depicts the frequency of individuals per geographical region.

In this dataset, we included 315,345 genetic variants with frequencies of at least 5% in 3,450 individuals sampled worldwide from 26 populations, interrogated using microarray genotyping technology: the Genome-Wide Human SNP Array 6.0 by Affymetrix. The mutated state is established by comparison to the Genome Reference Consortium human genome (build 37). Since individuals have 2 copies of each genomic position, a sampled individual can have 0, 1 or 2 copies of a genetic mutation, hereafter referred to as an individual genotype. We excluded SNPs positioned on the sex chromosomes and only included SNPs in approximate linkage equilibrium with each other, such that genotypes at neighboring positions are only weakly correlated (r² < 0.5).

Figure 2: The 1000 Genomes population distribution: (a) ethnicity; (b) geographical region.

"}, {"section_index": "9", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we describe the model architectures, and report and discuss the obtained results."}, {"section_index": "10", "section_name": "4.1 MODEL ARCHITECTURE", "section_text": "We experimented with simple models both in the auxiliary networks and in the basic architecture, which yielded very promising results. We designed a basic architecture with 2 hidden layers followed by a softmax layer to perform ancestry prediction, sketched below. We trained this architecture with and without the assistance of the auxiliary network. Similarly, the auxiliary networks were built by stacking a hidden layer on top of one of the feature embeddings described in Section 2.2. In the reported experiments, all hidden layers have 100 units. All models were trained by means of stochastic gradient descent with adaptive learning rate (Tieleman & Hinton, 2012), both for γ = 0 and γ = 10, using dropout, limiting the norm of the weights to 1 and/or applying weight decay to reduce overfitting.²

²The code to reproduce the experiments can be found here: https://github.com/adri-romsor/DietNetworks
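An illustration of the basic discriminative network just described (our sketch; the choice of ReLU and the dropout placement are assumptions, and the released code linked in the footnote is the authoritative reference):

```python
import torch.nn as nn

# Our sketch of the basic discriminative network: two 100-unit hidden layers and a
# 26-way softmax output.
basic_net = nn.Sequential(
    nn.Linear(315_345, 100),  # fat hidden layer: in a Diet Network these weights are
                              # emitted by the auxiliary network rather than stored
    nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(100, 100),
    nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(100, 26),       # logits over the 26 populations (softmax in the loss)
)
```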
"}, {"section_index": "11", "section_name": "4.2 RESULTS", "section_text": "Given the relatively small amount of samples in the 1000 Genomes data, we report results obtained by 5-fold cross-validation of the model. We split the data into 5 folds of equal size. A single fold is retained for test, whereas three of the remaining folds are used as training data and the final fold is used as validation data. We repeated the process 5 times (one per fold) and report the means and standard deviations of the results on the different test sets.

Table 1 summarizes the results obtained for each model. First, we observe that, for most Diet Network architectures, training with a reconstruction term in the loss (γ > 0) reduces the misclassification error and provides a lower standard deviation over the folds, suggesting more robustness to variations in the learnt feature embedding.

Table 1: Results for 1000 Genomes ancestry prediction. Raw end2end, random projection and SNP2Vec embeddings have dimensionality 100, whereas per class histograms have dimensionality 78. Note that the reported number of free parameters corresponds to the free parameters of the fat layers of the models.

Model & Embedding | Mean Misclassif. Error (%) | # of free parameters
---|---|---
Basic | 8.31 ± 1.83 | 31.5M
Raw end2end | 8.88 ± 1.42 | 217.2k
Random Projection | 9.03 ± 1.20 | 10.1k
SNP2Vec | 7.60 ± 1.28 | 10.1k
Per class histograms | 7.88 ± 1.40 | 7.9k
Basic with reconstruction | 7.76 ± 1.38 | 63M
Raw end2end with reconstruction | 8.28 ± 1.92 | 227.3k
Random Projection with reconstruction | 8.03 ± 1.03 | 20.2k
SNP2Vec with reconstruction | 7.88 ± 0.72 | 20.2k
Per class histograms with reconstruction | 7.44 ± 0.45 | 15.8k
PCA (10 PCs) | 20.56 ± 3.20 |
PCA (50 PCs) | 12.29 ± 0.89 |
PCA (100 PCs) | 10.52 ± 0.25 |
PCA (200 PCs) | 9.33 ± 1.24 |
PCA (100 PCs) + MLP(50) | 12.67 ± 0.67 |
PCA (100 PCs) + MLP(100) | 12.18 ± 1.75 |
PCA (100 PCs) + MLP(100, 100) | 11.95 ± 2.29 |

Training the models end-to-end, with no pre-computed feature embedding, yielded higher misclassification error than simply training the basic model, which could be attributed to the fact that adding the prediction networks makes a difficult, high-dimensional optimization problem even harder. As a general trend, adding pre-computed feature embeddings achieved better performance (lower error) while significantly reducing the number of free parameters in the fat layers of the model. Among the tested feature embeddings, random projections achieved good results, highlighting the potential of the model when reducing the number of free parameters.

Using the SNP2Vec embedding, trained to exploit the similarities and co-occurrences between SNPs, in conjunction with the Diet Networks framework obtains slightly better results than the model using a random projection. The addition of the reconstruction criterion does not appear to reduce the number of errors made by the model, but it does appear to reduce the variance of the results, as observed on the other models.

Despite its simplicity, the per class histogram encoding (when used with a reconstruction criterion) yielded the best results. Note that this encoding is the one with the fewest free parameters in the fat layers, with a reduction factor of almost 4000 w.r.t. the analogous basic model (with reconstruction). Figure 3(a) shows the mean results obtained with the histogram embedding. As shown in the figure, when considering the ethnicity, the main misclassifications involve ethnicities likely to display very close genetic proximity, such as British in England and Scotland and Utah residents with Northern and Western European ancestry (likely to be immigrants from England), or Indian Telugu and Sri Lankan Tamil, for instance. However, the model achieves almost 100% accuracy when considering the 5 geographical regions.

Figure 3: Results of our best model: (a) confusion matrix per ethnicity; (b) confusion matrix per large geographical region. The 1000 Genomes legend for population abbreviations can be found in the appendix.

We also compared the performance of our model to the principal component analysis (PCA) approach, commonly used in the genomics domain to select subgroups of individuals in order to perform more homogeneous analyses. The number of principal components (PCs) is chosen according to their significance, and usually varies from one dataset to another, 10 being the de facto standard for small datasets. However, in the case of the 1000 Genomes dataset, we could go up to 50 PCs. Therefore, we trained a linear classifier on top of PCA features, considering 10 and 50 PCs, 100 PCs to match the number of features used in the other experiments, as well as 200 PCs. Using 200 PCs yielded better performance, but going beyond that saturated in terms of misclassification error (see Section C in the Appendix for more details). Adding hidden layers to the classifier didn't help either (see the results reported in Table 1 for several MLP configurations before the linear classifier).
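A minimal sketch of this PCA baseline (ours; scikit-learn and logistic regression are our stand-ins for "a linear classifier on top of PCA features"):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Our sketch of the PCA baseline: project the genotype matrix onto the top
# principal components and fit a linear classifier on the projections.
X_train = np.random.randint(0, 3, size=(400, 2000)).astype(float)  # toy stand-in
y_train = np.random.randint(0, 26, size=400)

pcs = PCA(n_components=100).fit(X_train)
clf = LogisticRegression(max_iter=1000).fit(pcs.transform(X_train), y_train)
```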
"}, {"section_index": "12", "section_name": "5 CONCLUSION", "section_text": "In this paper, we proposed Diet Networks, a novel network parametrization which considerably reduces the number of free parameters in the fat layers of a model when the input is very high-dimensional. We showed how using the parameter prediction networks yielded better generalization in terms of misclassification error. Notably, when using pre-computed feature embeddings that maximally reduced the number of free parameters, we were able to obtain our best results. We validated our approach on the publicly available 1000 Genomes dataset, addressing the relevant task of ancestry prediction based on SNP data. This work demonstrated the potential of deep learning models to tackle domain-specific tasks where there is a mismatch between the number of samples and their high dimensionality.

Given the high accuracy achieved in the ancestry prediction task, we believe that deep learning techniques can improve standard practices in the analysis of human polymorphism data. We expect that these techniques will allow us to tackle the more challenging problem of conducting genetic association studies. Hence, we expect to further develop our method to conduct population-aware analyses of SNP data in disease cohorts. The increased power of deep learning methods to identify the genetic basis of common diseases could lead to better patient risk prediction and will improve our overall understanding of disease etiology."}, {"section_index": "13", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank the developers of Theano (Theano Development Team, 2016) and Lasagne (Lasagne, 2016). We acknowledge the support of the following agencies for research funding and computing support: Imagia, CIFAR, Canada Research Chairs, Compute Canada and Calcul Québec. J.G.H. is an EPAC/Linacre Junior Research Fellow funded by the Human Frontiers Program (LT-001017/2013-L). Special thanks to Valeria Romero-Soriano, Xavier Grau-Bové and Margaux Luck for their patience sharing genomic biology expertise, as well as to Michal Drozdzal, Caglar Gulcehre and Simon Jégou for useful discussions and support."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Yifei Chen, Yi Li, Rajiv Narayan, Aravind Subramanian, and Xiaohui Xie. Gene expression inference with deep learning. Bioinformatics, 2016. doi: 10.1093/bioinformatics/btw074.

The 1000 Genomes Project Consortium. A global reference for human genetic variation. Nature, 2015. doi: 10.1038/nature15393.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. arXiv:1306.0543, 2013.

David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. CoRR, abs/1609.09106, 2016.

Michael K. K. Leung, Andrew Delong, Babak Alipanahi, and Brendan J. Frey. Machine learning in genomic medicine: A review of computational problems and data sets. Proceedings of the IEEE, 104(1):176-197, January 2016. ISSN 0018-9219. doi: 10.1109/jproc.2015.2494198.

T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS'2013, pp. 3111-3119, 2013.

Alkes L. Price, Nick J. Patterson, Robert M. Plenge, Michael E. Weinblatt, Nancy A. Shadick, and D. Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 2006. doi: 10.1038/ng1847.

Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks.
Neural Computation, 4(1):131-139, 1992.

Ritambhara Singh, Jack Lanchantin, Gabriel Robins, and Yanjun Qi. DeepChrome: deep learning for predicting gene expression from histone modifications. Bioinformatics, 2016. doi: 10.1093/bioinformatics/btw427.

T. Tieleman and G. Hinton. Lecture 6.5-RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.

Jian Zhou and Olga Troyanskaya. Predicting effects of noncoding variants with a deep learning-based sequence model. Nature Methods, 2015. doi: 10.1038/nmeth.3547.

1000 Genomes population abbreviations:
ACB: African Caribbeans in Barbados
ASW: Americans of African Ancestry in SW USA
BEB: Bengali from Bangladesh
CDX: Chinese Dai in Xishuangbanna
CEU: Utah Residents (CEPH) with Northern and Western European Ancestry
CHB: Han Chinese in Beijing
CHS: Southern Han Chinese
CLM: Colombians from Medellin
ESN: Esan in Nigeria
FIN: Finnish in Finland
GBR: British in England and Scotland
GIH: Gujarati Indian from Houston
GWD: Gambian in Western Divisions in the Gambia
IBS: Iberian Population in Spain
ITU: Indian Telugu from the UK
JPT: Japanese in Tokyo
KHV: Kinh in Ho Chi Minh City
LWK: Luhya in Webuye
MSL: Mende in Sierra Leone
MXL: Mexican Ancestry from Los Angeles
PEL: Peruvians from Lima
PJL: Punjabi from Lahore
PUR: Puerto Ricans
STU: Sri Lankan Tamil from the UK
TSI: Toscani in Italia
YRI: Yoruba in Ibadan, Nigeria

Geographical regions:
AFR: African
AMR: Ad Mixed American
EAS: East Asian
EUR: European
SAS: South Asian

"}, {"section_index": "15", "section_name": "Representative commands:", "section_text": "With PLINK v1.90b2n 64-bit (https://www.cog-genomics.org/plink2).

For information on how to download this pre-processed dataset directly, please email Adriana Romero or Pierre Luc Carrier.

Figure 4: PCA: train and validation misclassification error for different numbers of PCs.

In this section, we analyze the influence of increasing the number of PCs used to perform classification. Figure 4 depicts the results obtained when considering 100, 200, 400, 800 and 1000 PCs. As shown in the figure, the best validation error comes with 200 PCs. Further increasing the number of PCs improves the training error but does not generalize well on the validation set."}] |
HksioDcxl | [{"section_index": "0", "section_name": "JOINT TRAINING OF RATINGS AND REVIEWS WITH RECURRENT RECOMMENDER NETWORKS", "section_text": "Chao-Yuan Wu*
University of Texas at Austin, Austin, TX, USA
cywu@cs.utexas.edu

Amr Ahmed & Alex Beutel
Mountain View, CA, USA

*A majority of this work was done while the author was at Carnegie Mellon University."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Accurate modeling of ratings and text reviews is at the core of successful recommender systems. While neural networks have been remarkably successful in modeling images and natural language, they have been largely unexplored in recommender system research. In this paper, we provide a neural network model that combines ratings, reviews, and temporal patterns to learn highly accurate recommendations. We co-train for prediction on both numerical ratings and natural language reviews, as well as using a recurrent architecture to capture the dynamic components of users' and items' states. We demonstrate that incorporating text reviews and temporal dynamics gives state-of-the-art results on the IMDb dataset."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Designing highly accurate recommender systems has been the focus of research in many communities and at the center of many products for the past decade. The core goal is to predict which items a given user will like or dislike, typically based on a database of previous ratings and reviews. In particular, a good recommender system has been defined as one that predicts the rating for randomly chosen and unseen (user, item) pairs. During the Netflix Prize contest, a variety of factorization models were proposed to capture the latent embeddings of users and items that would lead to accurate recommendations (Bell & Koren, 2007; Koren et al., 2009). Generative models for personalized ratings have recently become popular, due to impressive and robust results (Mnih & Salakhutdinov, 2007; Salakhutdinov & Mnih, 2008; Stern et al., 2009; Beutel et al., 2015).

More recently, there has been an interest in the recommender system community to also make use of the rich natural language reviews provided by users. Most often, these reviews have been transformed into a bag-of-words model and used as a sort of regularization for the rating predictions (McAuley & Leskovec, 2013; Diao et al., 2014; Almahairi et al., 2015; Wu et al., 2016b). Using reviews in this way has been found to improve prediction accuracy, and in some cases provide detailed explanations for the recommendations.

This previous research has been remarkably successful, but has two significant limitations that we discuss and address in this paper. First, prediction accuracy has rarely been measured by the ability of a model to predict future ratings. Rather, recommendation accuracy has been derived from a random split of the ratings data, which undermines our understanding of the models' usefulness in practice. Here, we focus on predicting future ratings, splitting our training and testing data by date. In order to be successful at this task, we incorporate the time of ratings and reviews in our model structure and training. Koren (2010) previously derived temporal features of ratings data, but used these features to remove temporal effects, since the metric of success was interpolation, not extrapolation. More recently, Recurrent Recommender Networks (RRNs) use a recurrent neural network to capture changes
in both user preferences and item perceptions, and extrapolate future ratings in an autoregressive way (Wu et al., 2016a). However, temporal patterns in reviews are largely unexplored. Note that, just like ratings, reviews also depend on changing factors, such as user writing styles, user preferences, movie perceptions, or the popularity of certain slang words or emoticons. Here we use a generative LSTM model that is able to jointly model the temporal effects in ratings and reviews.

Second, models of reviews in recommender systems fall significantly behind the state of the art in natural language processing. The bag-of-words model used in previous research improves over not using text, but is limited in the degree to which it can understand the review. In fact, the drawback of an underfitting model is especially salient in the case of reviews, because they are much more diverse and unstructured than regular documents. Recently there has been significant research attention on modeling natural language with neural networks, with encouraging results (Lipton et al., 2015; Yang et al., 2016). Here, we combine these powerful neural language models with recurrent neural networks to learn both accurate recommendations and accurate reviews. Our main contributions are as follows:

- Joint generative model: We propose a novel joint model of ratings and reviews via interacting recurrent networks (particularly LSTMs).
- Nonlinear nonparametric review model: By learning a function of user and movie state dynamics, we can capture the evolution of reviews (as well as ratings) over time.
- Experiments show that by jointly modeling ratings and reviews along with temporal patterns, our model achieves state-of-the-art results on the IMDb dataset in terms of forward prediction, i.e. in the realistic scenario where we use only ratings strictly prior to prediction time to predict future ratings.

"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Collaborative Filtering As mentioned in the introduction, recommender systems have been the focus of many different research communities. The Netflix Prize generated a flurry of research to improve recommendation accuracy, with a variety of matrix factorization models being proposed (Bell & Koren, 2007; Koren et al., 2009; Koren, 2008). During the Netflix competition and more afterwards, a stream of research has focused on designing generative Bayesian models for user ratings data (Mnih & Salakhutdinov, 2007; Salakhutdinov & Mnih, 2008; Stern et al., 2009; Beutel et al., 2014; 2015). Nearly all of these models predict ratings by an inner product between a latent user embedding and a latent item embedding; the approaches differ primarily in their regularization, e.g., Bayesian models and learning algorithms capturing uncertainty in the data.

Other models have tried to capture interesting patterns discovered in ratings data. As an example, Beutel et al. (2014) find that some ratings form bimodal rather than Gaussian distributions, and design a model to accommodate this diversity. More closely related to this work, Koren (2010) designs many features to capture and remove the temporal effects in ratings data. By removing these temporal effects, Koren (2010) learns better stationary embeddings for users and items. Work such as this improves prediction accuracy, but has two drawbacks: (1) it requires time-consuming feature engineering, and (2) it focuses on interpolation rather than extrapolation into the future. Wu et al. (2016a) address both of these concerns by learning a function for the evolution of user preferences and item properties. However, this work focuses exclusively on modeling ratings over time and, in a large part, on the qualitative patterns discovered in the Netflix dataset. Here we focus on the model itself and, in particular, the interaction of jointly understanding ratings, reviews, and temporal patterns.

Review Modeling Although the most common metric for recommendation accuracy has been rating prediction, natural language reviews provide rich, detailed insight into user preferences. Most often, reviews have been used in a bag-of-words model to regularize rating prediction (McAuley & Leskovec, 2013; Diao et al., 2014; Wu et al., 2016b). For example, McAuley & Leskovec (2013) effectively learn a topic model of reviews to regularize item embeddings. By using such coarse models, the impact of and insight from reviews is limited. More recently, Almahairi et al. (2015) use neural network based review models to regularize hidden factors, but their model assumes only stationary states. Interestingly, data mining research has found that review patterns are dynamic, with different language being adopted by communities over time (Danescu-Niculescu-Mizil et al., 2013).
Therefore, it is important to capture not just the dynamics of ratings, but also the language used to justify those ratings.

Neural Networks Neural networks have recently offered large improvements in natural language processing. More recently, a few papers have focused these natural language models on online reviews (Lipton et al., 2015; Yang et al., 2016). However, while these papers do model online reviews, they differ greatly from our work in that they are not actually used for recommendation.

With the recent remarkable successes of neural networks in other domains, there has been growing attention on using neural networks to model graphs and ratings data. Most similar to our work, Sedhain et al. (2015) design an autoencoder for collaborative filtering.

LSTM and Recurrent Networks Recurrent neural networks provide a powerful tool to nonparametrically model temporal data using a latent-variable autoregressive model as follows:

$\hat{z}_{t+1} = f(h_t, z_t) \quad \text{and} \quad h_{t+1} = g(h_t, z_{t+1}),$

where $z_t$ is the observation at time $t$, $\hat{z}_t$ is the model's associated estimate, and $h_t$ denotes the latent state. A popular class of RNNs is the Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997), and we use this as a building block in our model. The state updates are given below:

$[f_t, i_t, o_t] = \sigma(W[h_{t-1}, z_t] + b),$
$l_t = \tanh(V[h_{t-1}, z_t] + d),$
$c_t = f_t \cdot c_{t-1} + i_t \cdot l_t,$
$h_t = o_t \cdot \tanh(c_t),$

where $f_t$, $i_t$ and $o_t$ denote the forget gate, input gate and output gate, respectively. For simplicity, in the following we denote this set of operations by $h_t = \text{LSTM}(h_{t-1}, z_t)$. We will refer to $h_t$ as the output embedding of the LSTM.
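The following is a direct transcription of these updates (a sketch for clarity only; in practice one would use a library cell such as `torch.nn.LSTM`):

```python
import torch

# Our transcription of the LSTM updates above. W, b, V, d are the affine
# parameters from the equations; h_prev, c_prev are h_{t-1}, c_{t-1}.
def lstm_step(h_prev, c_prev, z_t, W, b, V, d):
    hz = torch.cat([h_prev, z_t], dim=-1)                 # [h_{t-1}, z_t]
    f, i, o = torch.sigmoid(hz @ W + b).chunk(3, dim=-1)  # forget, input, output gates
    l = torch.tanh(hz @ V + d)                            # candidate update l_t
    c = f * c_prev + i * l                                # c_t
    return o * torch.tanh(c), c                           # h_t, c_t

H, Z = 40, 40                                             # toy sizes (ours)
W, b = torch.randn(H + Z, 3 * H), torch.zeros(3 * H)
V, d = torch.randn(H + Z, H), torch.zeros(H)
h, c = lstm_step(torch.zeros(1, H), torch.zeros(1, H), torch.randn(1, Z), W, b, V, d)
```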
"}, {"section_index": "4", "section_name": "3 MODEL", "section_text": "A comparison of our model with traditional recommender systems is illustrated in Figure 1. In previous recommender systems, ratings are assumed to be a function of stationary user and movie embeddings. Here we consider dynamic embeddings that predict both ratings and text reviews at a given time step.

Figure 1: As shown on the left, previous recommendation models learn static stationary embeddings for users and movies to predict ratings. As shown on the right, we can also capture temporal effects present in the data. We have both user and movie embeddings follow a Markov chain, and use these dynamic embeddings (along with stationary ones, not shown) to predict both ratings and text reviews.

Figure 2 shows a depiction of our model, the Joint Review-Rating Recurrent Recommender Network. In addition to stationary embeddings as used in traditional recommender systems, here we use two LSTM RNNs that take user/movie history as input to capture the temporal dynamics of both user and movie states. Given the stationary and dynamic states of user $i$ and movie $j$, we define generator functions that emit both the rating $\hat{r}_{ij|t}$ and the review $\hat{o}_{ij|t}$ at time step $t$. Formally,

$\hat{r}_{ij|t} = f(u_i, m_j, u_{it}, m_{jt}) \quad \text{and} \quad \hat{o}_{ij|t} = \Psi(u_i, m_j, u_{it}, m_{jt}),$
$u_{i,t+1} = g(u_{it}, \{r_{ij|t}\}) \quad \text{and} \quad m_{j,t+1} = h(m_{jt}, \{r_{ij|t}\}),$

where $u_i$ and $m_j$ denote stationary states, and $u_{it}$ and $m_{jt}$ denote the dynamic states at $t$. Note that with learned $f$, $\Psi$, $g$ and $h$, and given user/movie history, a user/movie state can be inferred without further optimization. In other words, different from traditional recommender systems, here we learn the functions that find the states instead of learning the states directly.
Figure 2: Joint Review-Rating Recurrent Recommender Networks: we use recurrent networks to capture the temporal evolution of user and movie states. The recurrent networks depend on the ratings of a user (and movie) in previous time steps. We combine these dynamic states with classic stationary states. We directly use all of these states to predict ratings, and use them within an LSTM to model review text.

"}, {"section_index": "5", "section_name": "3.1 DYNAMIC USER AND MOVIE STATE", "section_text": "Here we give a detailed description of the RNNs that find the dynamic states. The key idea is to use user/movie rating history as input to update the states. In this way we are able to model causality instead of just finding correlation. That is, we can model, e.g., the change of a user (movie) state caused by having watched and liked/disliked a movie (being liked/disliked by certain users). At each step, the network takes

$y_t := W_{\text{embed}}[x_t, 1_{\text{newbie}}, \tau_t, \tau_{t-1}],$

where $x_t$ is the rating vector, $1_{\text{newbie}}$ is the indicator for new users, and $\tau_t$ is the wall-clock time. The $j$-th element of $x_t$ is the rating the user gives to movie $j$ at time $t$, and 0 otherwise. $1_{\text{newbie}}$ effectively selects a default embedding for a new user, and $\tau_t$ and $\tau_{t-1}$ give the model the information needed to synchronize between RNNs and to model effects such as rating scale changes or movie age. Note that with the inclusion of the $\tau$'s, we do not need to include the steps where a user did not rate any movie, and this can drastically speed up training. The state update is given by the standard $u_t := \text{LSTM}(u_{t-1}, y_t)$. In the above we omit the user index for clarity. In cases where we need to distinguish different users (and movies), such as in Figure 2, we use an additional index $i$ for user $i$, as in $u_{it}$, and similarly for movie $j$, as in $m_{jt}$.
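A small sketch of this input construction and state update (our toy dimensions; `W_embed` and `cell` correspond to $W_{\text{embed}}$ and the LSTM above):

```python
import torch
import torch.nn as nn

# Our sketch: y_t embeds the rating vector x_t, a new-user indicator, and the
# current and previous wall-clock times; an LSTM cell then advances the user state.
n_movies, d_in, d_state = 1000, 40, 40
W_embed = nn.Linear(n_movies + 1 + 2, d_in)          # [x_t, 1_newbie, tau_t, tau_{t-1}]
cell = nn.LSTMCell(d_in, d_state)

def user_step(state, x_t, is_newbie, tau_t, tau_prev):
    y_t = W_embed(torch.cat([x_t, is_newbie, tau_t, tau_prev], dim=-1))
    return cell(y_t, state)                           # -> (u_t, c_t)

state = (torch.zeros(1, d_state), torch.zeros(1, d_state))
u_t, c_t = user_step(state, torch.zeros(1, n_movies), torch.ones(1, 1),
                     torch.tensor([[12.0]]), torch.tensor([[11.0]]))
```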
"}, {"section_index": "6", "section_name": "3.2 RATING EMISSIONS", "section_text": "We supplement the time-varying profile vectors $u_t$ and $m_t$ with stationary ones, $u_i$ and $m_j$, respectively. These stationary components encode time-invariant properties such as the long-term preference of a user or the genre of a movie. The review rating is thus modeled as a function of both dynamic and stationary states, i.e.,

$\hat{r}_{ij} = f(u_{it}, m_{jt}, u_i, m_j) := \langle \tilde{u}_{it}, \tilde{m}_{jt} \rangle + \langle u_i, m_j \rangle,$

where

$\tilde{u}_{it} = W_{\text{user}} u_{it} + b_{\text{user}} \quad \text{and} \quad \tilde{m}_{jt} = W_{\text{movie}} m_{jt} + b_{\text{movie}}.$

This makes the model a strict superset of popular matrix factorization recommender systems that account for stationary effects, while we use LSTMs, on top of that, to model longer-range dynamic updates."}, {"section_index": "7", "section_name": "3.3 REVIEW TEXT MODEL", "section_text": "Review text is modeled by a character-level LSTM network. This network shares the same user/movie latent states with the rating model; after all, the purpose of a review is to explain its rating score. We fuse the stationary and dynamic states of both user and movie by the bottleneck layer $x_{\text{joint},ij}$ given below:

$x_{\text{joint},ij} := \phi(W_{\text{joint}}[u_{it}, m_{jt}, u_i, m_j] + b_{\text{joint}}),$
$x_{ij,k} := [x_{o_{ij,k}}, x_{\text{joint},ij}],$

where $x_{o_{ij,k}}$ denotes the embedding of the character $o_{ij,k}$ and $\phi$ here is some non-linear function. The review text emission model is itself an RNN, specifically a character-level LSTM generative model. For character index $k = 1, 2, \dots$,

$h_{ij,k} := \text{LSTM}(h_{ij,k-1}, x_{ij,k}),$
$\hat{o}_{ij,k} := \text{softmax}(W_{\text{out}} h_{ij,k} + b_{\text{out}}).$

Here a softmax layer at the output of the LSTM is used to predict the next character. Generating text conditioned on content has been applied in various areas, such as machine translation (Sutskever et al., 2014), question answering (Gao et al., 2015), or image captioning (Vinyals et al., 2015). Probably the most similar approach is Lipton et al. (2015), but it conditions review generation on observed ratings instead of latent states."}, {"section_index": "8", "section_name": "3.4 PREDICTION", "section_text": "At prediction time, we make rating predictions based on predicted future states. That is, we take the latest ratings as input to update the states, and use the newly predicted states to predict ratings. This differs from traditional approaches, where embeddings are estimated instead of inferred."}, {"section_index": "9", "section_name": "3.5 TRAINING", "section_text": "Our goal is to predict both accurate ratings and accurate reviews, and thus we minimize

$L := \sum_{(i,j) \in D_{\text{train}}} \Big( \big(\hat{r}_{ij}(\theta) - r_{ij}\big)^2 - \lambda \sum_{k=1}^{n_{ij}} \log\big(\Pr(o_{ij,k} \mid \theta)\big) \Big),$

where $D_{\text{train}}$ is the training set of $(i, j)$ pairs, $\theta$ denotes all model parameters, and $n_{ij}$ is the number of characters in the review user $i$ gives to movie $j$. The first term corresponds to the deviation of the prediction from the actual rating, and the second term is the likelihood of the text reviews; $\lambda$ controls the weight between predicting accurate ratings and predicting accurate reviews. Our training follows the subspace descent strategy in Wu et al. (2016a): while the review generative model is updated in every iteration, the user-state and movie-state RNNs are updated in an alternating way. The gradients are calculated with standard backpropagation. Furthermore, we pre-warm train the review LSTM over the review text, excluding the auxiliary input from the user and movie states. It is undesirable if the review likelihood overwhelms the rating, so we normalize the review likelihood by the number of characters in a review so that it does not dominate the rating likelihood. This technique is common in the NLP literature (Wang & McCallum, 2006).
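A hedged sketch of this joint objective (our PyTorch transcription, with the per-character normalization mentioned above; tensor shapes are our assumptions):

```python
import torch
import torch.nn.functional as F

# Our sketch of the joint objective: squared rating error plus lambda times the
# review negative log-likelihood, averaged over review characters.
# r_hat, r: (batch,); char_logits: (batch, n_chars, vocab); chars: (batch, n_chars).
def joint_loss(r_hat, r, char_logits, chars, lam=1.0):
    rating_term = (r_hat - r) ** 2
    nll = F.cross_entropy(char_logits.flatten(0, 1), chars.flatten(),
                          reduction="none")
    text_term = nll.view(chars.shape).mean(dim=1)   # per-character average
    return (rating_term + lam * text_term).mean()
```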
"}, {"section_index": "10", "section_name": "4 EXPERIMENTS", "section_text": "In this section we empirically demonstrate the ability of our model to accurately predict both ratings and reviews, and to capture temporal dynamics."}, {"section_index": "11", "section_name": "4.1 EXPERIMENTAL SETUP", "section_text": "In the following experiments, we select hyperparameters, optimization parameters and model architecture by cross-validation. The details are as follows. We use 1-layer LSTM recurrent neural networks with 40 hidden factors for user/movie state transitions. The input of this LSTM is a user/item embedding of dimension 40. Stationary and dynamic factors are 160- and 40-dimensional, respectively. A 2-layer LSTM network is used to model texts, which takes a 30-dimensional character embedding $x_{\text{char}}$, a 40-dimensional state vector $x_{\text{joint}}$, and a 50-dimensional movie embedding $x_{\text{movie}}$.

To speed up convergence, we initialize the text model with a character-level RNN pre-trained without considering ratings. Stationary factors are initialized by a pre-trained I-AutoRec (Sedhain et al., 2015) model based on the last layer. We initialize all the other parameters from a uniform distribution over $[-a, a]$ with $a = \sqrt{1.5/(f_{\text{in}} + f_{\text{out}})}$, where $f_{\text{in}}$ and $f_{\text{out}}$ are the fan-in and fan-out of the transition matrices. $\ell_2$ regularization with magnitude 0.001 is applied to all parameters. Dropout with a 0.5 rate is applied after all fully-connected layers. To prevent exploding gradients in the LSTM, gradients are clipped to $[-15, 15]$. ADAM (Kingma & Ba, 2014) with learning rate 0.0015 is used for optimization.

Data Here we focus on movie recommendations, where opinions are highly dynamic. We evaluate our model on the IMDb dataset, first used in Diao et al. (2014), which is the only large-scale movie review dataset available. Restaurant recommendations (e.g. Yelp) could also be a suitable domain, but full rating history is not available in publicly available datasets.¹ The IMDb dataset contains the full review and rating history of all users and all movies from 1998 to 2013. The characteristics of this dataset are shown in Figure 3. We see that the user and movie ratings follow heavy-tail distributions, and thus the majority of users and movies have very few reviews, making accurate recommendation challenging for these users and movies. Review length is summarized in Figure 3(c). Since one of the major goals of this project is to study temporal dynamics, we focus on users and items that have multiple interactions with the system. Specifically, we select a subset forming the k-core of the graph with k = 15. That is, each user and movie has at least 15 ratings in this subset. Note that the resulting subgraph is still very sparse, with only 0.8% density, which is sparser than, for example, the 1.2% density of the Netflix dataset. For completeness, we also include the 6-month Netflix dataset as used in Wu et al. (2016a), which has only ratings, to study RRN's ability to model temporal patterns.

¹https://www.yelp.com/dataset_challenge

Figure 3: Characteristics of the IMDb dataset: (a) user distribution; (b) movie distribution; (c) review length distribution.

The dataset is split by date instead of random sampling, to simulate the real recommendation setting where we need to predict into the future instead of interpolating the past. The IMDb training set contains all ratings from July 1998 to December 2012, and the ratings from January to September 2013 are randomly split into a validation set and a test set. Similarly, the 6-month Netflix dataset is split into January to November 2011 (training) and December 2011 (testing and validation). We report results on the testing set with the model that gives the best results on the validation set. A summary of this dataset is given in Table 1.

Table 1: The IMDb dataset comprises reviews and ratings collected from July 1998 to September 2013. The Netflix 6 months data is a subset of the original Netflix Prize dataset that is split based on time.

Data | Split | # users | # items | # ratings | # characters (reviews)
---|---|---|---|---|---
IMDb | Train (Jul 98 - Dec 12) | 6,127 | 8,002 | 402.3k | 690.6M
IMDb | Test (Jan 13 - Sep 13) | 6,127 | 8,002 | 11.0k | 21.6M
Netflix 6 months | Train (Jun - Nov 11) | 311.3k | 17.7k | 13.7M | -
Netflix 6 months | Test (Dec 11) | 311.3k | 17.7k | 2.1M | -

Baselines We compare our model with several models, including the state-of-the-art temporal model and a state-of-the-art neural network-based model:

- PMF (Mnih & Salakhutdinov, 2007): Our model extends matrix factorization by including a dynamic part and a joint review model. Comparing to PMF directly shows us the advantage of our approach. LIBPMF (Yu et al., 2012) is used in the experiments.
- Time-SVD++ (Koren, 2010): Time-SVD++ is the state-of-the-art model for temporal effects. It achieved excellent performance in the Netflix contest. The implementation in GraphChi (Kyrola et al., 2012) is used in the experiments.
- AutoRec (Sedhain et al., 2015): AutoRec is the state-of-the-art neural network recommender system. It learns an autoencoder that encodes user (item) histories into a low-dimensional space and then predicts ratings by decoding. No temporal effects or causality are considered in this model. We use the software the authors provide in the experiments.

All models use comparable numbers of factors. Parameters of PMF and Time-SVD++ are selected by grid search. Settings of AutoRec follow the original paper. We also include the performance of rating-only RRN, as in Wu et al. (2016a), to separate the benefits obtained from temporal modeling and from review texts.
Baselines We compare our model against competing models, including the state-of-the-art temporal model and a state-of-the-art neural network-based model:

PMF (Mnih & Salakhutdinov, 2007): Our model extends matrix factorization by including a dynamic part and a joint review model. Comparing to PMF directly shows the advantage of our approach. LIBPMF (Yu et al., 2012) is used in the experiments.

Time-SVD++ (Koren, 2010): Time-SVD++ is the state-of-the-art model for temporal effects. It achieved excellent performance in the Netflix contest. The implementation in GraphChi (Kyrola et al., 2012) is used in the experiments.

AutoRec (Sedhain et al., 2015): AutoRec is the state-of-the-art neural network recommender system. It learns an autoencoder that encodes user (item) histories into a low-dimensional space and then predicts ratings by decoding. No temporal effects or causality are considered in this model. We use the software provided by the authors.

All models use a comparable number of factors. Parameters of PMF and Time-SVD++ are selected by grid search. Settings of AutoRec follow the original paper. We also include the performance of rating-only RRN, as in Wu et al. (2016a), to separate the benefits obtained from temporal modeling and from review texts.

"}, {"section_index": "12", "section_name": "4.2 RATING PREDICTION", "section_text": "One important goal of recommender systems is making accurate rating predictions. Here we evaluate the accuracy by the root-mean-square error (RMSE) of the prediction from the true rating. The results are summarized in Table 2.

                 | PMF    | Time-SVD++ | U-AutoRec | I-AutoRec | RRN (rating) | RRN (rating + text)
IMDb             | 1.7355 | 1.7348     | 1.7332    | 1.7135    | 1.7047       | 1.7012
Netflix 6 months | 0.9584 | 0.9589     | 0.9836    | 0.9778    | 0.9427       | -

Table 2: RRN outperforms competing models in terms of RMSE. In addition, jointly modeling ratings and reviews achieves even better accuracy.

For completeness, we include the results from Wu et al. (2016a) on the 6-month Netflix dataset, which uses ratings only, to compare the behavior of different models on different datasets. We see that rating-only RRN outperforms all baseline models in terms of rating prediction, consistently in both datasets. More importantly, jointly modeling ratings and reviews boosts the performance even further, compared to rating-only RRN. This implies that by sharing statistical strength between ratings and reviews, the rich information in reviews helps us estimate the latent factors better. Note that while the absolute improvements in RMSE might not appear huge, the 1.98% improvement over PMF is actually considerable in terms of recommendations². We also see that while Time-SVD++ performs well in the Netflix contest, it does not work as well for predicting future ratings. After all, the goal of Time-SVD++ is estimating the temporal bias in hindsight instead of extrapolating into future states.

²For example, in 2009 SVD++ outperformed SVD by 1.09% and Time-SVD++ outperformed SVD++ by 1.25%, and they were considered important progress in recommender systems.

"}, {"section_index": "13", "section_name": "4.3 TEXT MODELING", "section_text": "Here we examine the impact of conditioning on user and item states for text modeling. Towards this end, we compare the perplexity of characters in the testing set with and without using the user/item factors. Perplexity is defined as

$$\mathrm{ppx}(\mathcal{D}_{\mathrm{test}}) = \exp\left(-\frac{1}{N_c}\sum_{c \in \mathcal{D}_{\mathrm{test}}} \log \Pr(c)\right),$$

where N_c is the total number of characters in D_test, and Pr(c) is the likelihood of character c. Interestingly, we found that by jointly training with user and item states, the perplexity improves from 3.3442 to 3.3362.
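For concreteness, the per-character perplexity defined above can be computed as follows (a small self-contained sketch, not the authors' code):

```python
import math

def perplexity(log_probs):
    """log_probs: log-likelihood log Pr(c) of every character c in the test set."""
    n_c = len(log_probs)
    return math.exp(-sum(log_probs) / n_c)

# Toy example with three characters of assumed likelihoods.
print(perplexity([math.log(0.3), math.log(0.25), math.log(0.31)]))  # approx. 3.5
```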
"}, {"section_index": "14", "section_name": "4.4 TEMPORAL DYNAMICS", "section_text": "Here we study whether RRN is able to automatically capture the overall rating trends in IMDb by adaptively updating states along the history sequence. Specifically, at each time step we randomly sample up to 1000 users, and see what ratings the users would have given to each of the movies given their states at that time step, even if in reality the user might not have given a rating to the movie. This gives us an unbiased estimation of the average behavior of our model on each of the ratings. Figure 4 shows the average predicted ratings in this setting and the true average rating in the dataset. We see that RRN clearly captures the overall trend in IMDb smoothly.

Figure 4: RRN is able to capture the overall trend of the data (1999-2011). (a) shows the average ratings of all movies on IMDb over time; in (b) we see that the predicted ratings are consistent with this trend.

"}, {"section_index": "15", "section_name": "DISCUSSION & CONCLUSION", "section_text": "We present a novel approach that jointly models ratings, reviews, and their temporal dynamics with RRN. The contributions we have provided are as follows:

1. Joint rating-review modeling: We offer an LSTM-based joint rating-review model that provides advantages in both rating prediction and text modeling.
2. Nonparametric dynamic review modeling: RRN is based on an autoregressive method to model the temporal dynamics of users and movies, allowing us to capture how reviews change over time.
3. Empirical results: We demonstrate that our joint model offers state-of-the-art results on rating prediction in real recommendation settings, i.e., predicting into the future.

Alex Beutel, Kenton Murray, Christos Faloutsos, and Alexander J. Smola. CoBaFi: Collaborative Bayesian filtering. In WWW, 2014.

Alex Beutel, Amr Ahmed, and Alexander J. Smola. ACCAMS: Additive co-clustering to approximate matrices succinctly. In WWW, 2015.

Cristian Danescu-Niculescu-Mizil, Robert West, Dan Jurafsky, Jure Leskovec, and Christopher Potts. No country for old members: User lifecycle and linguistic change in online communities. In WWW, 2013.

Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexander J. Smola, Jing Jiang, and Chong Wang. Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In KDD. ACM, 2014.

Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. Are you talking to a machine? Dataset and methods for multilingual image question answering. In NIPS, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Yehuda Koren. Collaborative filtering with temporal dynamics. Communications of the ACM, 53(4):89-97, 2010.

Aapo Kyrola, Guy Blelloch, and Carlos Guestrin. GraphChi: Large-scale graph computation on just a PC. In OSDI, 2012.

Zachary Chase Lipton, Sharad Vikram, and Julian McAuley. Capturing meaning in product reviews with character-level generative text models. CoRR, 2015.

Andriy Mnih and Ruslan Salakhutdinov. Probabilistic matrix factorization. In NIPS, 2007.

R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In W. W. Cohen, A. McCallum, and S. T. Roweis (eds.), ICML, 2008.

David H. Stern, Ralf Herbrich, and Thore Graepel. Matchbox: Large scale online Bayesian recommendations. In WWW. ACM, 2009.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Xuerui Wang and Andrew McCallum. Topics over time: A non-Markov continuous-time model of topical trends. In KDD. ACM, 2006.

Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard Hovy. Hierarchical attention networks for document classification. In NAACL, 2016.

Hsiang-Fu Yu, Cho-Jui Hsieh, Si Si, and Inderjit S. Dhillon. Scalable coordinate descent approaches to parallel matrix factorization for recommender systems. In ICDM, 2012."}]
HJhcg6Fxg | [{"section_index": "0", "section_name": "BINARY PARAGRAPH VECTORS", "section_text": "Karol Grzegorczyk & Marcin Kurdziel

AGH University of Science and Technology

{kgr, kurdziel}@agh.edu.pl"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Recently Le & Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "One of the significant challenges in contemporary information processing is the sheer volume of available data. Gantz & Reinsel (2012), for example, claim that the amount of digital data in the world doubles every two years. This trend underpins efforts to develop algorithms that can efficiently search for relevant information in huge datasets. One class of such algorithms, represented by, e.g., Locality Sensitive Hashing (Indyk & Motwani, 1998), relies on hashing data into short, locality-preserving binary codes (Wang et al., 2014). The codes can then be used to group the data into buckets, thereby enabling sublinear search for relevant information, or for fast comparison of data items.

In this work we focus on learning binary codes for text documents. An important work in this direction has been presented by Salakhutdinov & Hinton (2009). Their semantic hashing leverages autoencoders with a sigmoid bottleneck layer to learn binary codes from a word-count bag-of-words (BOW) representation. Salakhutdinov & Hinton demonstrated that semantic hashing codes used as an initial document filter can improve the precision of TF-IDF-based retrieval. Learning from BOW, however, has its disadvantages. First, the word-count representation, and in turn the learned codes, are not in themselves stronger than TF-IDF. Second, BOW is an inefficient representation: even for moderate-size vocabularies BOW vectors can have thousands of dimensions. Learning fully connected autoencoders for such high-dimensional vectors is impractical. Salakhutdinov & Hinton restricted the BOW vocabulary in their experiments to the 2000 most frequent words.

Recently several works explored simple neural models for unsupervised learning of distributed representations of words, sentences and documents. Mikolov et al. (2013) proposed log-linear models that learn distributed representations of words by predicting a central word from its context (CBOW model) or by predicting context words given the central word (Skip-gram model). The CBOW model was then extended by Le & Mikolov (2014) to learn distributed representations of documents. Specifically, they proposed the Paragraph Vector Distributed Memory (PV-DM) model, in which the central word is predicted given the context words and the document vector.
During training, PV-DM learns the word embeddings and the parameters of the softmax that models the conditional probability distribution for the central words. During inference, word embeddings and softmax weights are fixed, but the gradients are backpropagated to the inferred document vector.

In addition to PV-DM, Le & Mikolov studied also a simpler model, namely Paragraph Vector Distributed Bag of Words (PV-DBOW). This model predicts words in the document given only the document vector. It therefore disregards the context surrounding the predicted word and does not learn word embeddings. Le & Mikolov demonstrated that paragraph vectors outperform BOW and bags-of-bigrams in an information retrieval task, while using only a few hundred dimensions. These models are also amenable to learning and inference over large vocabularies. The original CBOW network used hierarchical softmax to model the probability distribution for the central word. One can also use noise-contrastive estimation (Gutmann & Hyvärinen, 2010) or importance sampling (Cho et al., 2015) to approximate the gradients with respect to the softmax logits.

An alternative approach to learning representations of sentences has been recently described by Kiros et al. (2015). Networks proposed therein, inspired by the Skip-gram model, learn to predict surrounding sentences given the center sentence. To this end, the center sentence is encoded by an encoder network and the surrounding sentences are predicted by a decoder network conditioned on the center sentence code. Once trained, these models can encode sentences without resorting to backpropagation inference. However, they learn representations at the sentence level but not at the document level.

In this work we present Binary Paragraph Vector models, extensions to PV-DBOW and PV-DM that learn short binary codes for text documents. One inspiration for binary paragraph vectors comes from a recent work by Lin et al. (2015) on learning binary codes for images. Specifically, we introduce a sigmoid layer to the paragraph vector models, and train it in a way that encourages binary activations. We demonstrate that the resultant binary paragraph vectors significantly outperform semantic hashing codes. We also evaluate binary paragraph vectors in transfer learning settings, where training and inference are carried out on unrelated text corpora. Finally, we study models that simultaneously learn short binary codes for document filtering and longer, real-valued representations for ranking. While Lin et al. (2015) employed a supervised criterion to learn image codes, binary paragraph vectors remain unsupervised models: they learn to predict words in documents.

The basic idea in binary paragraph vector models is to introduce a sigmoid nonlinearity before the softmax that models the conditional probability of words given the context. If we then enforce binary or near-binary activations in this nonlinearity, the probability distribution over words will be conditioned on a bit-vector context, rather than on a real-valued representation. The inference in the model proceeds like in Paragraph Vector, except the document code is constructed from the sigmoid activations. After rounding, this code can be seen as a distributed binary representation of the document.
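A minimal sketch of this inference procedure is given below. It is our illustration, not the authors' code: the softmax weights are frozen, AdaGrad updates only the document vector, and the binary code is read out by thresholding the sigmoid activations.

```python
import torch

vocab, bits = 1000, 32
softmax_W = torch.randn(vocab, bits) * 0.01      # learned during training, fixed here
doc_vec = torch.zeros(bits, requires_grad=True)  # inferred for the new document
opt = torch.optim.Adagrad([doc_vec], lr=0.1)

doc_words = torch.randint(0, vocab, (50,))       # word ids of the document
for _ in range(100):
    code = torch.sigmoid(doc_vec)                # near-binary activations
    logits = softmax_W @ code                    # PV-DBOW: predict the document's words
    loss = torch.nn.functional.cross_entropy(
        logits.unsqueeze(0).expand(len(doc_words), -1), doc_words)
    opt.zero_grad(); loss.backward(); opt.step()

binary_code = (torch.sigmoid(doc_vec) > 0.5).int()   # final 32-bit document code
```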
In the simplest Binary PV-DBOW model (Figure 1) the dimensionality of the real-valued document embeddings is equal to the length of the binary codes. Despite this low-dimensional representation (a useful binary hash will typically have 128 or fewer bits) this model performed surprisingly well in our experiments. Note that we cannot simply increase the embedding dimensionality in Binary PV-DBOW in order to learn better codes: binary vectors learned in this way would be too long to be useful in document hashing.

Figure 1: The Binary PV-DBOW model: a document embedding lookup is followed by a rounded sigmoid producing the binary embedding, and a sampled softmax predicting the document's words. Modifications to the original PV-DBOW model are highlighted.

The retrieval performance can, however, be improved by using binary codes for initial filtering of documents, and then using a representation with higher capacity to rank the remaining documents by their similarity to the query. Salakhutdinov & Hinton (2009), for example, used semantic hashing codes for initial filtering and TF-IDF for ranking. A similar document retrieval strategy can be realized with binary paragraph vectors. Furthermore, we can extend the Binary PV-DBOW model to simultaneously learn short binary codes and higher-dimensional real-valued representations. Specifically, in the Real-Binary PV-DBOW model (Figure 2) we introduce a linear projection between the document embedding matrix and the sigmoid nonlinearity. During training, we learn the softmax parameters and the projection matrix. During inference, softmax weights and the projection matrix are fixed. This way, we simultaneously obtain a high-capacity representation of a document in the embedding matrix, e.g. a 300-dimensional real-valued vector, and a short binary representation from the sigmoid activations.

Figure 2: The Real-Binary PV-DBOW model: a high-dimensional embedding lookup is followed by a linear projection to a low-dimensional binary embedding (rounded sigmoid) and a sampled softmax over the document's words. Modifications to the original PV-DBOW model are highlighted.

One advantage of using the Real-Binary PV-DBOW model over two separate networks is that we need to store only one set of softmax parameters (and a small projection matrix) in memory, instead of two large weight matrices. Additionally, only one model needs to be trained, rather than two distinct networks.

Binary document codes can also be learned by extending distributed memory models. Le & Mikolov (2014) suggest that in PV-DM, a context of the central word can be constructed by either concatenating or averaging the document vector and the embeddings of the surrounding words. However, in Binary PV-DM (Figure 3) we always construct the context by concatenating the relevant vectors before applying the sigmoid nonlinearity. This way, the length of binary codes is not tied to the dimensionality of word embeddings.

Figure 3: The Binary PV-DM model: document and word embedding lookups are concatenated into a context, passed through a rounded sigmoid to obtain a binary embedding, followed by a sampled softmax that predicts the central word. Modifications to the original PV-DM model are highlighted.

Softmax layers in the models described above should be trained to predict words in documents given binary context vectors. Training should therefore encourage binary activations in the preceding sigmoid layers. This can be done in several ways.
In semantic hashing autoencoders, Salakhutdinov & Hinton (2009) added noise to the sigmoid coding layer. Error backpropagation then countered the noise, by forcing the activations to be close to 0 or 1. Another approach was used by Krizhevsky & Hinton (2011) in autoencoders that learned binary codes for small images. During the forward pass, activations in the coding layer were rounded to 0 or 1. Original (i.e. not rounded) activations were used when backpropagating errors. Alternatively, one could model the document codes with stochastic binary neurons. Learning in this case can still proceed with error backpropagation, provided that a suitable gradient estimator is used alongside the stochastic activations. We experimented with the methods used in semantic hashing and in Krizhevsky's autoencoders, as well as with the two biased gradient estimators for stochastic binary neurons discussed by Bengio et al. (2013). We also investigated the slope annealing trick (Chung et al., 2016) when training networks with stochastic binary activations. From our experience, binary paragraph vector models with rounded activations are easy to train and learn better codes than models with noise-based binarization or stochastic neurons. We therefore use Krizhevsky's binarization in our models.
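In modern autograd frameworks, Krizhevsky's binarization amounts to a custom function that rounds on the forward pass and passes gradients through unchanged. A PyTorch sketch of this idea (our illustration, not the authors' code):

```python
import torch

class RoundStraightThrough(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)          # binary activations in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output             # gradients of the un-rounded activations

def binary_code(activations):
    return RoundStraightThrough.apply(torch.sigmoid(activations))

codes = binary_code(torch.randn(4, 32, requires_grad=True))
codes.sum().backward()                 # gradients flow as if no rounding had happened
```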
To assess the performance of binary paragraph vectors, we carried out experiments on two datasets frequently used to evaluate document retrieval methods, namely 20 Newsgroups and a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1). As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters or longer than 15 characters. Results reported by Li et al. (2015) indicate that the performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousand elements.

The 20 Newsgroups dataset comes with reference train/test sets. In case of RCV1 we used half of the documents for training and the other half for evaluation. We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) (Järvelin & Kekäläinen, 2002). The results depend, of course, on the chosen document relevancy measure. The relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. However, in RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by Salakhutdinov & Hinton (2009). That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures follows Salakhutdinov & Hinton (2009), enabling comparison with semantic hashing codes.

We use AdaGrad (Duchi et al., 2011) for training and inference in all experiments reported in this work. During training we employ dropout (Srivastava et al., 2014) in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by Cho et al. (2015). Binary PV-DM networks use the same number of dimensions for document codes and word embeddings.

Performance of 128- and 32-bit binary paragraph vector codes is reported in Table 1 and in Figure 4. For comparison we also report the performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on both test sets the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors. Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figure 4 with Salakhutdinov & Hinton (2009, Figures 6 & 7) shows that on both test sets 128-bit codes learned with this model outperform 128-bit semantic hashing codes. Moreover, the 32-bit codes from this model outperform 128-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3% recall and better precision for higher recall levels. Note that the difference in this case lies not only in retrieval precision: the short 32-bit Binary PV-DBOW codes are more efficient for indexing than long 128-bit semantic hashing codes.
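The indexing-efficiency point can be made concrete: with packed binary codes, ranking a collection by Hamming distance reduces to XOR and popcount. A small numpy sketch (ours, with made-up data):

```python
import numpy as np

codes = np.random.rand(10000, 32) > 0.5          # binary codes for the test documents
packed = np.packbits(codes, axis=1)              # 32 bits -> 4 bytes per document

def hamming_rank(query_packed, packed_codes):
    # XOR plus popcount gives the Hamming distance to every document.
    xor = np.bitwise_xor(packed_codes, query_packed)
    dist = np.unpackbits(xor, axis=1).sum(axis=1)
    return np.argsort(dist, kind="stable")

ranking = hamming_rank(packed[0], packed)        # documents ordered by code similarity
```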
Table 1: Information retrieval results. The best results with binary models are highlighted.

Model          | Code size | With bigrams | 20 Newsgroups MAP | NDCG@10 | RCV1 MAP | NDCG@10
PV-DBOW        | 128       | no           | 0.4               | 0.75    | 0.25     | 0.79
PV-DBOW        | 128       | yes          | 0.45              | 0.75    | 0.27     | 0.8
Binary PV-DBOW | 128       | no           | 0.34              | 0.69    | 0.22     | 0.74
Binary PV-DBOW | 128       | yes          | 0.35              | 0.69    | 0.24     | 0.77
PV-DM          | 128       | N/A          | 0.41              | 0.73    | 0.23     | 0.78
Binary PV-DM   | 128       | N/A          | 0.34              | 0.65    | 0.18     | 0.69
PV-DBOW        | 32        | no           | 0.43              | 0.71    | 0.26     | 0.75
PV-DBOW        | 32        | yes          | 0.46              | 0.72    | 0.27     | 0.77
Binary PV-DBOW | 32        | no           | 0.32              | 0.53    | 0.22     | 0.6
Binary PV-DBOW | 32        | yes          | 0.32              | 0.54    | 0.25     | 0.66
PV-DM          | 32        | N/A          | 0.43              | 0.7     | 0.23     | 0.77
Binary PV-DM   | 32        | N/A          | 0.29              | 0.49    | 0.17     | 0.53

Li et al. (2015) argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model.

We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using another unsupervised model or hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of the binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with an autoencoder with a sigmoid coding layer and Krizhevsky's binarization, with a Gaussian-Bernoulli Restricted Boltzmann Machine (Welling et al., 2004), and with two standard hashing algorithms, namely random hyperplane projection (Charikar, 2002) and iterative quantization (Gong & Lazebnik, 2011). Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. Results reported in Table 2 show no benefit from using a separate algorithm for binarization. On the 20 Newsgroups dataset an autoencoder with Krizhevsky's binarization achieved MAP equal to Binary PV-DBOW, while the other three approaches yielded lower MAP. On the larger RCV1 dataset an end-to-end training of Binary PV-DBOW yielded higher MAP than the baseline approaches. Some gain in precision of top hits can be observed for iterative quantization and the autoencoder with Krizhevsky's binarization. However, it does not translate to an improved MAP, and decreases when models are trained on a larger corpus (RCV1).

Table 2: Information retrieval results for 32-bit binary codes constructed by first inferring 32d real-valued paragraph vectors and then employing another unsupervised model or hashing algorithm for binarization. Paragraph vectors were inferred using PV-DBOW with bigrams.

Binarization model                         | 20 Newsgroups MAP | NDCG@10 | RCV1 MAP | NDCG@10
Autoencoder with Krizhevsky's binarization | 0.32              | 0.57    | 0.24     | 0.67
Gaussian-Bernoulli RBM                     | 0.26              | 0.39    | 0.23     | 0.52
Random hyperplane projection               | 0.27              | 0.53    | 0.21     | 0.66
Iterative quantization                     | 0.31              | 0.58    | 0.23     | 0.68

Figure 4: Precision-recall curves for the 20 Newsgroups and RCV1 datasets, for 128- and 32-dimensional codes. Cosine similarity was used with real-valued representations and the Hamming distance with binary codes. For comparison we also included semantic hashing results reported by Salakhutdinov & Hinton (2009, Figures 6 & 7).

In the experiments presented thus far we had at our disposal training sets with documents similar to the documents for which we inferred binary codes. One could ask whether binary paragraph vectors could be used without collecting a domain-specific training set. For example, what if we needed to hash documents that are not associated with any available domain-specific corpus? One solution could be to train the model with a big generic text corpus that covers a wide variety of domains. It is not obvious, however, whether such a model would capture language semantics meaningful for unrelated documents. To shed light on this question we trained Binary PV-DBOW with bigrams on the English Wikipedia, and then inferred binary codes for the test parts of the 20 Newsgroups and RCV1 datasets. We used words and bigrams with at least 100 occurrences in the English Wikipedia. The results are presented in Table 3 and in Figure 5.
The model trained on an unrelated text corpus gives lower retrieval precision than models with domain-specific training sets, which is not surprising. However, it still performs remarkably well, indicating that the semantics it captured can be useful for different text collections. Importantly, these results were obtained without domain-specific finetuning.

Table 3: Information retrieval results for the Binary PV-DBOW model trained on an unrelated text corpus. Results are reported for 128-bit codes.

              | MAP  | NDCG@10
20 Newsgroups | 0.24 | 0.51
RCV1          | 0.18 | 0.66

Figure 5: Precision-recall curves for the baseline Binary PV-DBOW models (trained on the 20 Newsgroups and RCV1 training sets) and a Binary PV-DBOW model trained on an unrelated text corpus (English Wikipedia). Results are reported for 128-bit codes.

As pointed out by Salakhutdinov & Hinton (2009), when working with large text collections one can use short binary codes for indexing and a representation with more capacity for ranking. Following this idea, we proposed the Real-Binary PV-DBOW model (Section 2) that can simultaneously learn short binary codes and high-dimensional real-valued representations. We begin the evaluation of this model by comparing the retrieval precision of real-valued and binary representations learned by it. To this end, we trained a Real-Binary PV-DBOW model with 28-bit binary codes and 300-dimensional real-valued representations on the 20 Newsgroups and RCV1 datasets. Results are reported in Figure 6. The real-valued representations learned with this model give lower precision than PV-DBOW vectors but, importantly, improve precision over binary codes for top ranked documents. This justifies their use alongside binary codes.

Using short binary codes for initial filtering of documents comes with a tradeoff between the retrieval performance and the recall level. For example, one can select a small subset of similar documents by using 28-32 bit codes and retrieving documents within a small Hamming distance to the query. This will improve retrieval performance, and possibly also precision, at the cost of recall.
Conversely, short codes provide a less fine-grained hashing and can be used to index documents within a large Hamming distance to the query. They can therefore be used to improve recall at the cost of retrieval performance, and possibly also precision. For these reasons, we evaluated Real-Binary PV-DBOW models with different code sizes and under different limits on the Hamming distance to the query. In general, we cannot expect these models to achieve 100% recall under the test settings. Furthermore, recall will vary on a query-by-query basis. We therefore decided to focus on the NDCG@10 metric in this evaluation, as it is suited for measuring model performance when a short list of relevant documents is sought and the recall level is not known. MAP and precision-recall curves are not applicable in these settings.

Figure 6: Information retrieval results for binary and real-valued codes learned by the Real-Binary PV-DBOW model with bigrams. Results are reported for 28-bit binary codes and 300d real-valued codes. A 300d PV-DBOW model is included for reference.

Information retrieval results for Real-Binary PV-DBOW are summarized in Table 4. The model gives higher NDCG@10 than 32-bit Binary PV-DBOW codes (Table 1). The difference is large when the initial filtering is restrictive, e.g. when using 28-bit codes and a 2-bit Hamming distance limit. Real-Binary PV-DBOW can therefore be useful when one needs to quickly find a short list of relevant documents in a large text collection, and the recall level is not of primary importance. If needed, precision can be further improved by using plain Binary PV-DBOW codes for filtering and the standard PV-DBOW representation for ranking (Table 4, column C). Note, however, that the PV-DBOW model would then use approximately 10 times more parameters than Real-Binary PV-DBOW.

Table 4: Information retrieval results for the Real-Binary PV-DBOW model. All real-valued representations have 300 dimensions and are used for ranking documents according to the cosine similarity to the query. (A) Real-valued representations learned by Real-Binary PV-DBOW are used for ranking all test documents. (B) Binary codes are used for selecting documents within a given Hamming distance to the query and real-valued representations are used for ranking. (C) For comparison, variant B was repeated with binary codes inferred using plain Binary PV-DBOW and real-valued representations inferred using the original PV-DBOW model.

Code size (bits) | Hamming distance | 20 Newsgroups (A / B / C) | RCV1 (A / B / C)
28               | 1                | 0.64 / 0.72 / 0.87        | 0.75 / 0.79 / 0.87
28               | 2                | 0.64 / 0.65 / 0.86        | 0.75 / 0.76 / 0.83
24               | 2                | 0.6 / 0.74 / 0.63         | 0.8 / 0.75 / 0.81
20               | 3                | 0.58 / 0.6 / 0.73         | 0.73 / 0.73 / 0.79
16               | 3                | 0.54 / 0.55 / 0.72        | 0.72 / 0.72 / 0.79
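The filter-then-rank procedure behind variants B and C of Table 4 can be sketched as follows (our illustration with random data): binary codes short-list documents within a Hamming-distance limit, and the 300-dimensional real-valued vectors rank the survivors by cosine similarity.

```python
import numpy as np

def filter_and_rank(q_code, q_vec, codes, vecs, max_dist=2):
    dist = (codes != q_code).sum(axis=1)                 # Hamming distance to the query
    candidates = np.flatnonzero(dist <= max_dist)        # short-list by binary code
    sims = vecs[candidates] @ q_vec / (
        np.linalg.norm(vecs[candidates], axis=1) * np.linalg.norm(q_vec) + 1e-12)
    return candidates[np.argsort(-sims)]                 # rank by cosine similarity

codes = np.random.rand(1000, 28) > 0.5                   # 28-bit codes
vecs = np.random.randn(1000, 300)                        # 300d real-valued vectors
hits = filter_and_rank(codes[0], vecs[0], codes, vecs)
```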
"}, {"section_index": "3", "section_name": "4 CONCLUSION", "section_text": "In this article we presented simple neural networks that learn short binary codes for text documents. Our networks extend Paragraph Vector by introducing a sigmoid nonlinearity before the softmax that predicts words in documents. Binary codes inferred with the proposed networks achieve higher retrieval precision than semantic hashing codes on two popular information retrieval benchmarks. They also retain a lot of their precision when trained on an unrelated text corpus. Finally, we presented a network that simultaneously learns short binary codes and longer, real-valued representations.

The best codes in our experiments were inferred with Binary PV-DBOW networks. The Binary PV-DM model did not perform so well. Li et al. (2015) made similar observations for Paragraph Vector models, and argue that in the distributed memory model the word context takes a lot of the burden of predicting the central word from the document code. An interesting line of future research could, therefore, focus on models that account for word order, while learning good binary codes. It is also worth noting that Le & Mikolov (2014) constructed paragraph vectors by combining DM and DBOW representations. This strategy may prove useful also with binary codes, when employed with hashing algorithms designed for longer codes, e.g. with multi-index hashing (Norouzi et al., 2012).

"}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "This research is supported by the Polish National Science Centre grant no. DEC-2013/09/B/ST6/01549 "Interactive Visual Text Analytics (IVTA): Development of novel, user-driven text mining and visualization methods for large text corpora exploration." This research was carried out with the support of the "HPC Infrastructure for Grand Challenges of Science and Engineering" project, co-financed by the European Regional Development Fund under the Innovative Economy Operational Programme. This research was supported in part by PL-Grid Infrastructure.

"}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pp. 1-10. ACL, 2015.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint arXiv:1609.01704, 2016.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

John Gantz and David Reinsel. The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the far east. Technical report, IDC, 2012.

Kalervo Järvelin and Jaana Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422-446, 2002.

Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3294-3302, 2015.

Bofang Li, Tao Liu, Xiaoyong Du, Deyuan Zhang, and Zhe Zhao. Learning document embeddings by predicting n-grams for sentiment classification of long movie reviews. arXiv preprint arXiv:1512.08183, 2015.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969-978, 2009.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting.
The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Jingdong Wang, Heng Tao Shen, Jingkuan Song, and Jianqiu Ji. Hashing for similarity search: A survey. arXiv preprint arXiv:1408.2927, 2014.

Max Welling, Michal Rosen-Zvi, and Geoffrey E. Hinton. Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, pp. 1481-1488, 2004.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

"}, {"section_index": "6", "section_name": "VISUALIZATION OF BINARY PV CODES", "section_text": "For an additional comparison with semantic hashing, we used t-distributed Stochastic Neighbor Embedding (van der Maaten & Hinton, 2008) to construct two-dimensional visualizations of codes learned by Binary PV-DBOW with bigrams. We used the same subset of newsgroups and RCV1 topics that is pictured in Salakhutdinov & Hinton (2009, Figure 5). Codes learned by Binary PV-DBOW (Figure 7) appear slightly more clustered.

Figure 7: t-SNE visualization of binary paragraph vector codes (128- and 32-dimensional); the Hamming distance was used to calculate code similarity. (a) A subset of the 20 Newsgroups dataset: green - soc.religion.christian, red - talk.politics.guns, blue - rec.sport.hockey, brown - talk.politics.mideast, magenta - comp.graphics, black - sci.crypt."}]
r1yjkAtxe | [{"section_index": "0", "section_name": "SPATIO-TEMPORAL ABSTRACTIONS IN REINFORCEMENT LEARNING THROUGH NEURAL ENCODING", "section_text": "Nir Baram, Tom Zahavy

*These authors contributed equally"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Recent progress in the field of Reinforcement Learning (RL) has made it possible to tackle bigger and more challenging tasks. However, the increasing complexity of the problems, as well as the use of more sophisticated models such as Deep Neural Networks (DNNs), has impeded the ability to understand the behavior of trained policies. In this work, we present the Semi-Aggregated Markov Decision Process (SAMDP) model. The purpose of the SAMDP modeling is to analyze trained policies by identifying temporal and spatial abstractions. In contrast to other modeling approaches, SAMDP is built in a transformed state space that encodes the dynamics of the problem. We show that working with the right state representation mitigates the problem of finding spatial and temporal abstractions. We describe the process of building the SAMDP model by observing trajectories of a trained policy, and give examples of using it in a toy problem and in complicated DQN agents. Finally, we show how, using the SAMDP, we can monitor the trained policy and make it more robust."}, {"section_index": "2", "section_name": "INTRODUCTION", "section_text": "In the early days of RL, understanding the behavior of trained policies could be done rather easily (Sutton, 1990). Researchers focused on simpler problems (Peng and Williams, 1993), and policies were built using lighter models than today (Tesauro, 1994). As a result, a meaningful analysis of policies was possible even by working with the original state representation and relating to primitive actions. However, in recent years research has made a huge step forward. Fancier models such as Deep Neural Networks (DNNs) have become a commodity (Mnih et al., 2015), and the RL community tackles bigger and more challenging problems (Silver et al., 2016). Artificial agents are even expected to be used in autonomous systems such as self-driving cars. The need to reason about the behavior of trained agents, and to understand the mechanisms that govern their choice of actions, is pressing more than ever.

Analyzing a trained policy modeled by a DNN (either graphically using the state-action diagram or by any other means) is practically impossible. A typical problem consists of an immense number of states, and policies often rely on skills (Mankowitz, Mann, and Mannor, 2014), creating more than a single level of planning. The resulting Markov reward processes induced by such policies are too complicated to comprehend through observation. Simplifying the behavior requires finding a suitable representation of the state space; a long-standing problem in machine learning, where extensive research has been conducted over the years (Boutilier, Dean, and Hanks, 1999). There, the goal is to come up with a transformation of the state space φ : s → s̄ that can facilitate learning.
In this work, we show that the state representation that is learned automatically by DNNs is suitable for building abstractions in RL. To this end, we introduce the SAMDP model; a modeling approach that creates abstractions both in space and time. Contrary to other modeling approaches, SAMDP is built in a transformed state space, where the problem of creating spatial abstractions (i.e., state aggregation) and temporal abstractions (i.e., identifying skills) is facilitated using spatiotemporal clustering. We provide an example of building an SAMDP model for a basic gridworld problem where φ(s) is hand-crafted. However, the real strength of the model is demonstrated on challenging Atari2600 games solved using DQNs (Mnih et al., 2015). There, we set φ(s) to be the state representation automatically learned by the DNN (i.e. the last hidden layer). We continue by presenting methods for evaluating the fitness of the SAMDP model to the trained policy at hand. Finally, we describe a method for using the SAMDP as a monitor that alerts when the policy's performance weakens, and provide initial results showing how the SAMDP model is useful for shared autonomy systems.

Figure 1: Left: Illustration of state aggregation and skills. Primitive actions (orange arrows) cause transitions between MDP states (black dots) while skills (red arrows) induce transitions between SAMDP states (blue circles). Right: Modeling approaches for analyzing trained policies: state aggregation takes the MDP {S, A, P_A, R_A, γ} (low spatial and temporal hierarchy) to the AMDP {C, A, P_A, R_A, γ}, skill identification takes it to the SMDP {S, Σ, P_Σ, R_Σ, γ}, and together they yield the SAMDP {C, Σ, P_Σ, R_Σ, γ} (high spatial and temporal hierarchy).

"}, {"section_index": "3", "section_name": "BACKGROUND", "section_text": "We briefly review the standard reinforcement learning framework of discrete-time finite Markov decision processes (MDPs). An MDP is defined by a five-tuple ⟨S, A, P, R, γ⟩. At time t an agent observes a state s_t ∈ S, selects an action a_t ∈ A, and receives a reward r_t. Following the agent's action choice, it transitions to the next state s_{t+1} ∈ S according to a Markovian transition probability P(s_{t+1} | s_t, a_t); γ ∈ [0, 1] is the discount factor. In this framework, the goal of an RL agent is to maximize the expected return by learning a policy π : S → ∆_A, a mapping from states s ∈ S to a probability distribution over actions. The action-value function Q^π(s, a) = E[R_t | s_t = s, a_t = a, π] represents the expected return after observing state s, taking action a and then following policy π. The optimal action-value function obeys a fundamental recursion known as the optimal Bellman equation, Q*(s, a) = E[r_t + γ max_{a'} Q*(s_{t+1}, a')].

Skills, Options (Sutton, Precup, and Singh, 1999) are temporally extended control structures, denoted by σ. A skill is defined by a triplet σ = ⟨I, π, β⟩, where I defines the set of states where the skill can be initiated, π is the intra-skill policy, and β is the set of termination probabilities determining when a skill will stop executing. β is typically either a function of state s or time t. Any MDP with a fixed set of skills is a Semi-MDP (SMDP). Planning with skills can be performed by learning for each state the value of choosing each skill. More formally, an SMDP is defined by a five-tuple ⟨S, Σ, P, R, γ⟩, where S is the set of states, Σ is the set of skills, P is the SMDP transition matrix, γ is the discount factor, and the SMDP reward is defined by

$$R_s^\sigma = \mathbb{E}[r_t^\sigma] = \mathbb{E}\left[r_{t+1} + \gamma r_{t+2} + \cdots + \gamma^{k-1} r_{t+k} \mid s_t = s, \sigma\right]. \quad (1)$$

The skill policy μ : S → ∆_Σ is a mapping from states to a distribution over skills.
The action-value function Q^μ(s, σ) = E[∑_{t=0}^{∞} γ^t R_t | (s, σ), μ] represents the value of choosing skill σ ∈ Σ at state s ∈ S, and thereafter selecting skills according to policy μ. The optimal skill value function is given by Q*(s, σ) = E[R_s^σ + γ^k max_{σ'∈Σ} Q*(s', σ')] (Stolle and Precup, 2002).

"}, {"section_index": "4", "section_name": "THE SEMI AGGREGATED MDP", "section_text": "Reinforcement Learning problems are typically modeled using the MDP formulation. Given an MDP, a variety of algorithms have been proposed to find an optimal policy.
However, when one wishes to analyze a trained policy, the MDP may not be the best modeling choice due to the size of the state space and the length of the planning horizon. In this section, we present the SMDP and Aggregated MDP (AMDP) models, which can simplify the analysis by using temporal and spatial abstractions, respectively. We also introduce the new Semi-Aggregated MDP (SAMDP) model, which combines the SMDP and AMDP models in a novel way that leverages the abstractions made in each modeling approach.

The SMDP (Sutton, Precup, and Singh, 1999) can simplify the analysis of a trained policy by using temporal abstractions. The model extends the MDP action space A to allow the agent to plan with temporally extended actions (i.e., skills). Analyzing policies using the SMDP model shortens the planning horizon and simplifies the analysis. However, there are two problems with this approach. First, one still faces the high complexity of the state space, and second, the SMDP model requires identifying skills.

Skill identification is an ill-posed problem that can be addressed in many ways, and for which extensive research has been done over the years. Popular approaches are to identify bottlenecks in the state space (McGovern and Barto, 2001), or to search for common behavior trajectories or common state-region policies (McGovern, 2002). A different approach can be to build a graphical model of the agent's interaction with the environment and to use betweenness centrality measures to identify subtasks (Şimşek and Barreto, 2009). No matter what the method is, identifying skills solely by observing an agent play is a challenging task.

An alternative approach to SMDP modeling is to analyze a policy using spatial abstractions in the state space. If there is a reason to believe that groups of states share common attributes, such as a similar policy or value function, it is possible to use State Aggregation (Moore, 1991). State Aggregation is a well-studied problem that typically involves identifying clusters as the new states of an Aggregated MDP, where the set of clusters C replaces the MDP states S. Applying RL on aggregated states is potentially advantageous because the dimensions of the transition probability matrix P, the reward signal R and the policy π are decreased (Singh, Jaakkola, and Jordan, 1995). However, the AMDP modeling approach has two drawbacks. First, the action space A is not modified, and therefore the planning horizon remains intractable; second, AMDPs are not necessarily Markovian (Bai, Srivastava, and Russell, 2016).

In this paper, we propose a model that combines the advantages of the SMDP and AMDP approaches, and denote it by SAMDP. Under SAMDP modeling, aggregation defines both the states and the set of skills, allowing analysis with spatiotemporal abstractions (the state-space dimensions and the planning horizon are reduced). However, SAMDPs are still not necessarily Markovian. We summarize the different modeling approaches in Figure 1. The rest of this section is devoted to explaining the five stages of building an SAMDP model: (0) Feature selection, (1) State Aggregation, (2) Skill identification, (3) Inference, and (4) Model Selection.

(1) State Aggregation. Standard clustering algorithms assume that the data is drawn i.i.d.; however, our data is generated from an MDP, which violates this assumption. We alleviate this problem using two different approaches. First, we decouple the clustering step from the SAMDP model, by creating an ensemble of clustering candidates and building an SAMDP model for each (following stages 2 and 3). In stage 4, we will explain how to run a non-analytic outer optimization loop to choose between these candidates based on spatiotemporal evaluation criteria. Second, we introduce a novel extension of the celebrated K-means algorithm (MacQueen and others, 1967), which enforces temporal coherency along trajectories. In the vanilla K-means algorithm, a point x_t is assigned to cluster c_j with mean μ_j if μ_j is the closest cluster center to x_t (for further details please see the supplementary material). We modified this step as follows:

$$c(x_t) = \{c_i : \|X_t - \tilde{\mu}_i\|_F \le \|X_t - \tilde{\mu}_j\|_F, \ \forall j \in [1, K]\},$$

where F stands for the Frobenius norm, K is the number of clusters, t is the time index of x_t, and X_t is a set of 2w + 1 points centered at x_t from the same trajectory: {x_j ∈ X_t ⟺ j ∈ [t − w, t + w]}. The dimensions of μ correspond to a single point, but μ̃ is expanded to the dimensions of X_t. In this way, we enforce temporal coherency, since a point x_t is assigned to a cluster c_i if its neighbors in time along the trajectory are also close to μ_i. We have also experimented with other clustering methods, such as spectral clustering, hierarchical agglomerative clustering and entropy minimization (please refer to the supplementary material for more details).
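The modified assignment step is easy to state in code. The sketch below (ours, not the authors' implementation) assigns each point by comparing its temporal window against each centroid tiled to the window's shape, as in the equation above:

```python
import numpy as np

def assign_with_window(X, centroids, w):
    """X: (T, d) points of one trajectory; centroids: (K, d); w: half window size."""
    T = len(X)
    labels = np.empty(T, dtype=int)
    for t in range(T):
        window = X[max(0, t - w): t + w + 1]    # X_t: up to 2w + 1 temporal neighbors
        # Frobenius distance between the window and each centroid tiled to its shape
        d = ((window[None, :, :] - centroids[:, None, :]) ** 2).sum(axis=(1, 2))
        labels[t] = np.argmin(d)
    return labels

X = np.random.randn(200, 3)                     # a toy trajectory
centroids = np.random.randn(5, 3)
labels = assign_with_window(X, centroids, w=2)
```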
(2) Skill identification. We define an SAMDP skill σ_{i,j} ∈ Σ uniquely by a single initiation state c_i ∈ C and a single termination state c_j ∈ C: σ_{i,j} = ⟨c_i, π_{i,j}, c_j⟩. More formally, at time t the agent enters an SAMDP state c_i at an MDP state s_t ∈ c_i. It chooses a skill according to its SAMDP policy and follows the skill policy π_{i,j} for k time steps until it reaches a state s_{t+k} ∈ c_j, s.t. i ≠ j. We do not define the skill length k a priori nor the skill policy, but infer the skill length from the data. As for the skill policies, our model does not define them explicitly, but we will observe later that our model successfully identifies skills that are localized in time and space.

(3) Inference. We infer the length of a skill σ_{i,j} by averaging the number of MDP states visited since entering SAMDP state c_i until leaving for SAMDP state c_j. The skill reward is inferred similarly using Equation 1. The inference of the SAMDP transition matrices is a bit more puzzling, since the probability of seeing the next SAMDP state depends both on the MDP dynamics and on the agent's policy in the MDP state space. We now turn to discuss how to infer these matrices by observing transitions in the MDP state space. Our goal is to infer two quantities: (a) The SAMDP transition probability matrices P_Σ: P^σ_{i,j} = Pr(c_j | c_i, σ) measures the probability of moving from state c_i to c_j given that skill σ is chosen. These matrices are defined uniquely by our definition of skills as deterministic probability matrices. (b) The probability of moving from state c_i to c_j given that the skill is chosen according to the agent's SAMDP policy: P^π_{i,j} = Pr(c_j | c_i, σ = π(c_i)). This quantity involves both the SAMDP transition probability matrices and the agent policy. However, since the SAMDP transition probability matrices are deterministic, this is equivalent to the agent policy in the SAMDP state space. Therefore, by inferring transitions between SAMDP states, we directly infer the agent's SAMDP policy.

Given an MDP with a deterministic environment and an agent with a nearly deterministic MDP policy (e.g., a deterministic policy that uses an ε-greedy exploration, ε < 1), it is intuitive to assume that we would observe a nearly deterministic SAMDP policy. However, there are two mechanisms that cause stochasticity in the SAMDP policy: (1) stochasticity that is accumulated along skill trajectories; (2) approximation errors in the aggregation process. A given SAMDP state may contain more than one "real" state and therefore more than one skill. Performing inference in this setup, we might observe a stochastic policy that chooses randomly between skills. Therefore, it is very likely to infer a stochastic SAMDP transition matrix, even though the SAMDP transition probability matrices and the MDP environment are deterministic, and the MDP policy is nearly deterministic.

(4) Model selection. So far we have explained how to build an SAMDP from observations. In this stage, we explain how to choose between different SAMDP model candidates. There are two advantages to choosing between multiple SAMDPs. First, there are different hyperparameters to tune: two examples are the number of SAMDP states (K) and the window size (w) for the clustering algorithm. Second, there is randomness in the aggregation step. Hence, clustering multiple times and picking the best result will potentially yield better models.
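Given cluster labels along a trajectory, the inference stage (3) reduces to counting. The following sketch (our illustration) estimates skill lengths and the empirical SAMDP transition matrix which, as argued above, coincides with the agent's SAMDP policy:

```python
import numpy as np

def infer_samdp(labels, K):
    counts = np.zeros((K, K))
    lengths = [[[] for _ in range(K)] for _ in range(K)]
    start = 0
    for t in range(1, len(labels)):
        if labels[t] != labels[t - 1]:                  # a skill terminates here
            i, j = labels[t - 1], labels[t]
            counts[i, j] += 1
            lengths[i][j].append(t - start)             # observed skill length k
            start = t
    # Row-normalized counts give the empirical SAMDP policy / transition matrix.
    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    return P, lengths

labels = np.random.randint(0, 4, size=500)              # toy cluster labels
P, lengths = infer_samdp(labels, K=4)
```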
We developed, therefore, evaluation criteria that allow us to select the best model, motivated by Hallak, Di-Castro, and Mannor (2013). We follow the Occam's Razor principle and aim to find the simplest model which best explains the data. (i) Value Mean Square Error (VMSE) measures the consistency of the model with the observations. The estimator is given by

$$\mathrm{VMSE} = \frac{\|v - v_{\mathrm{SAMDP}}\|}{\|v\|},$$

where v stands for the SAMDP value function of the given policy, and v_SAMDP is given by v_SAMDP = (I − γ^k P)^{-1} r, where P is measured under the SAMDP policy. (ii) Inertia, the K-means algorithm objective function, is given by I = ∑_{i=0}^{n} min_{μ_j ∈ C} (‖x_i − μ_j‖²). Inertia measures the variance inside clusters and encourages spatial coherency. (iii) Motivated by Ncut and spectral clustering (Von Luxburg, 2007), we define the Intensity Factor as the fraction of out/in cluster transitions. However, we define edges between states that are connected along the trajectory (a transition between them was observed) and give them equal weights (instead of defining the edges by Euclidean distances as in spectral clustering). Minimizing the intensity factor encourages longer-duration skills. (iv) Entropy is defined on the SAMDP probability transition matrix as follows: e = − ∑_i {|C_i| · ∑_j P_{i,j} log P_{i,j}}. Low entropy encourages clusters to have fewer skills, i.e., clusters that are localized both in time and space.
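Two of these criteria are simple to compute once P and r have been inferred. A numpy sketch (ours; k stands for an average skill length):

```python
import numpy as np

def samdp_value(P, r, gamma, k):
    # v_SAMDP = (I - gamma^k * P)^{-1} r, following the estimator above
    K = len(r)
    return np.linalg.solve(np.eye(K) - (gamma ** k) * P, r)

def vmse(v, v_samdp):
    return np.linalg.norm(v - v_samdp) / np.linalg.norm(v)

def transition_entropy(P, cluster_sizes):
    logP = np.where(P > 0, np.log(P), 0.0)     # convention: 0 * log 0 = 0
    return -np.sum(cluster_sizes * (P * logP).sum(axis=1))

P = np.array([[0.0, 1.0], [0.5, 0.5]])
r = np.array([1.0, 0.0])
v_s = samdp_value(P, r, gamma=0.99, k=5)
e = transition_entropy(P, np.array([10, 10]))
```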
"}, {"section_index": "5", "section_name": "SAMDP FOR GRIDWORLD", "section_text": "We first illustrate the advantages of SAMDP in a basic gridworld problem (Figure 2). In this task, an agent is placed at the origin (marked by X), where the goal is to reach the green ball and return. The state s ∈ R³ is given by s = {x, y, b}, where (x, y) are the coordinates and b ∈ {0, 1} indicates whether the agent has reached the ball or not. The policy is trained to find skills following the algorithm of Mankowitz, Mann, and Mannor (2014). We are given trajectories of the trained agent, and wish to analyze its behavior by building the state-action graph for all four modeling approaches. For clarity, we plot the graphs on the maze using the coordinates of the state. The MDP graph (Figure 2(a)) consists of a vast number of states; it is also difficult to understand what skill the agent is using. In the SMDP graph (Figure 2(b)), the number of states remains high; however, coloring the edges by the skills helps to understand the agent's behavior. Unfortunately, producing this graph is seldom possible, because we rarely receive information about the skills. On the other hand, abstracting the state space can be done more easily using state aggregation. However, in the AMDP graph (Figure 2(c)), the clusters are not aligned with the played skills, because the routes leading to and from the ball overlap. For building the SAMDP model (Figure 2(d)), we transform the state space in a way that disentangles the routes:

(x, y) ↦ (x, y) if b = 0,    (x, y) ↦ (2L − x, y) if b = 1,

where L is the maze width. The transformation flips and translates the states where b = 1. Now that the routes to and from the ball are disentangled, the clusters are perfectly aligned with the skills. Understanding the behavior of the agent is now possible by examining inter-cluster and intra-cluster transitions.

Figure 2: State-action diagrams for a gridworld problem. a. MDP diagram: relates to individual states and primitive actions. b. SMDP diagram: edge colors represent different skills. c. AMDP diagram: clusters are formed using spatial aggregation in the original state space. d. SAMDP diagram: clusters are found after transforming the state space; intra-cluster transitions (dashed arrows) can be used to explain the skills, while inter-cluster transitions (big red arrows) loyally describe the governing policy.

"}, {"section_index": "6", "section_name": "SAMDPS FOR DQNS", "section_text": "Feature extraction: We evaluate a pre-trained DQN agent for multiple trajectories with an ε-greedy policy on three Atari2600 games: Pacman (a game where DQN performs very well), Seaquest (for the opposite reason) and Breakout (for its popularity). We let the trained agent play 120k game states, and record the neural activations of the last hidden layer as well as the Q-values. We also keep the time index of each state to be able to find temporal neighbors. Features from other layers can also be used. However, we rely on the results of Zahavy, Zrihem, and Mannor (2016), who showed that the features learned in the last hidden layer capture a spatiotemporal hierarchy and therefore make a good candidate for state aggregation. We then apply t-SNE to the neural activation data, a non-linear dimensionality reduction method that is particularly good at creating a single map that reveals structure at many different scales. We use the two coordinates of the t-SNE map and the value estimate as the MDP state features. Each coordinate is normalized to have zero mean and unit variance. We have experimented with other configurations, such as using the activations without t-SNE, as well as different normalizations; however, we found that this configuration results in better SAMDP models. We also use two approximations in the inference stage which we found to work well: 1) overlooking transitions with a small skill length (shorter than 2), and 2) truncating transitions with probability less than 0.1. We only present results for the Breakout game and refer the reader to the supplementary material for results on Pacman and Seaquest.
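A minimal sketch of this feature-extraction step (our illustration only; the exact preprocessing is the authors'), using scikit-learn's t-SNE on the recorded last-layer activations:

import numpy as np
from sklearn.manifold import TSNE

def build_state_features(activations, q_values):
    # activations: (T, d) last-hidden-layer activations; q_values: (T,) value estimates.
    coords = TSNE(n_components=2).fit_transform(activations)   # (T, 2) t-SNE map
    feats = np.column_stack([coords, q_values])                # append the value estimate
    # Normalize each coordinate to zero mean and unit variance.
    return (feats - feats.mean(axis=0)) / feats.std(axis=0)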
Figure 3: SAMDP visualization for Breakout. Each MDP state is represented on the map by its two t-SNE coordinates and colored by its value estimate (low values in blue and high in red). SAMDP states are visualized by their mean state (with frame pixels) at the mean t-SNE coordinate. An edge between two SAMDP states represents a skill (the bold side indicates the terminal state), and the numbers above the edges correspond to the inferred SAMDP policy.

Model Selection: We perform a grid search on two parameters: i) the number of clusters, K ∈ [15, 25], and ii) the window size, w ∈ [1, 7]. We found that models larger (smaller) than that are too cumbersome (simplistic) to analyze. We select the best model in the following way: we first sort all models by each of the four evaluation criteria (SAMDP section, stage 4) from best to worst. Then, we iteratively intersect the p-prefix of all sets (i.e., the first p elements of each set), starting with the 1-prefix. We stop when the intersection is non-empty and choose the configuration at the intersection. The resulting SAMDP model for Breakout can be seen in Figure 3.

We also measure the p-value of the chosen model. For the null hypothesis, we take an SAMDP model constructed with random clusters. We tested 10,000 random SAMDP models, none of which scored better than the chosen model (on any of the evaluation criteria).
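The prefix-intersection rule above can be written compactly; a minimal sketch (illustrative names, not the authors' code):

def select_model(rankings):
    # rankings: list of four lists, each ordering the candidate model ids
    # from best to worst under one evaluation criterion.
    for p in range(1, len(rankings[0]) + 1):
        common = set(rankings[0][:p])
        for r in rankings[1:]:
            common &= set(r[:p])
        if common:              # stop at the first non-empty intersection
            return common.pop()  # ties broken arbitrarily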
Qualitative Evaluation: Examining the resulting SAMDP (Figure 3), it is interesting to note the sparsity of transitions, which implies low entropy. Inspecting the mean image of each cluster reveals insights about the nature of the skills hiding within, and uncovers the policy hierarchy as described in Zahavy, Zrihem, and Mannor (2016). The agent begins to play in low-value (blue) clusters (e.g., 1, 5, 8, 9, 13, 16, 18, 19). These clusters are well connected between them and are disconnected from other clusters. Once the agent transitions to the "tunnel-digging" option in clusters 4, 12, 14, it stays there until it finishes carving the tunnel, then it transitions to cluster 11. From cluster 11 the agent progresses through the "left banana" and hops between clusters 2, 21, 5, 10, 0, 7 and 3, in that order.

[Figure 4: Model evaluation for Breakout. Top: DQN value vs. SAMDP value per cluster. Middle: correlation between the greedy policy and the trajectory reward per cluster. Bottom: greedy-policy weight in high- vs. low-reward trajectories, as a function of the percentage of extremum trajectories used.]

Quantitative Evaluation: First, we compare the DQN value function with the SAMDP value function in each cluster (Figure 4, top). Second, we measure the correlation between the agent's decisions and the trajectory reward. For a given trajectory j, we measure P_i^j, the empirical distribution of choosing the greedy policy at state c_i, and the cumulative reward R^j. Finally, we present the correlation between these two measures in each state, corr_i = corr(P_i, R), in Figure 4 (center). A positive correlation indicates that following the greedy policy leads to high reward; for most of the states, we observe positive correlation, supporting the consistency of the model. The third evaluation is close in spirit to the second one. We partition the data into train and test sets. We evaluate the greedy policy based on the train set and create two transition matrices, T+ and T−, using the k top- and bottom-rewarded trajectories, respectively, from the test set. We measure the correlation of the greedy policy T_G with each of the transition matrices for different values of k (Figure 4, bottom). As can clearly be seen, the correlation of the greedy policy with the top trajectories is higher than the correlation with the bottom trajectories.

"}, {"section_index": "7", "section_name": "DISCUSSION", "section_text": "SAMDP modeling offers a way to present a trained policy in a concise way by creating abstractions that relate to the spatiotemporal structure of the problem. We showed that, by using the right representation, time-aware state aggregation can be used to identify skills. This implies that the crucial step in building an SAMDP is the state-aggregation phase. The aggregation depends on the state features and the clustering algorithm at hand.

In this work, we presented a basic K-means variant that relies on temporal information. However, other clustering approaches are possible. We also experimented with agglomerative methods but found them to be significantly slower without providing any benefit. We believe that clustering methods that better relate to the topology, such as spectral clustering, would produce the best results. Regarding the state features: in the DQN example, we used the 2D t-SNE map. This map, however, is built under the i.i.d. assumption, which overlooks the temporal dimension of the problem. An interesting line of future work will be to modify the t-SNE algorithm to take into account temporal distances as well as spatial ones. A t-SNE algorithm of this kind may produce 2D maps with even lower entropy, which will decrease the aggregation artifacts that affect the quality of the SAMDP model.

In this work we analyzed discrete-action policies; however, SAMDP can also be applied to continuous-action policies that maintain a value function (since our algorithm depends on it for construction and evaluation), as in the case of actor-critic methods. Another issue we wish to investigate is the question of consistency in re-building an SAMDP. We would like the SAMDP to be unique for a given problem. However, there are several aspects of randomness that may cause divergence. For instance, when using a DQN, randomness exists in the creation of the t-SNE map and in the clustering phase. From our experience, though, different models built for the same problem are reasonably consistent. In future work, we wish to address the same problem by laying out an optimization problem that will directly account for all of the performance criteria introduced here. It would be interesting to see what clustering method will be drawn out of this process, and to compare the principled solution with our current approach.

Eject Button: The motivation for this experiment stems from the idea of shared autonomy (Pitzer et al., 2011). There are domains where errors are dreadful and performance must be as high as possible. The idea of shared autonomy is to allow an operator to intervene in the decision loop at critical times. For example, in 20% of commercial flights, the auto-pilot returns control to the human pilots. In the following experiment, we show how the SAMDP model can help to identify where the agent's behavior deteriorates. Setup: (a) Evaluate a DQN agent, create a trajectory data set, and evaluate the features for each state (stage 0). (b) Divide the data into two groups, train (100 trajectories) and test (60), then build an SAMDP model (stages 1-4) on the train data. (c) Split the train data into the k top- and bottom-rewarded trajectories, T+ and T−, and re-infer the model parameters separately for each (stage 3). (d) Project the test data on the SAMDP model (mapping each state to the nearest SAMDP state). (e) Eject when the transitions of the agent are more likely under the T− matrix than under T+ (inspired by the idea of option interruption; Sutton, Precup, and Singh (1999)). (f) We average the trajectory reward on (i) the entire test set, and (ii) the subset of un-ejected trajectories. We measure 36% ± 7.7%, 20% ± 8.0%, and 4.7% ± 1.2% performance gain for Breakout, Seaquest and Pacman, respectively. The eject experiment indicates that the SAMDP model can be used to make a given DQN policy robust by identifying when the agent is not going to perform well, and returning control to a human operator or some other AI agent. Other eject mechanisms are also possible, for example, ejecting by looking at MDP values. However, the Q-value is not monotonically decreasing along the trajectory as expected (see Figure 3). The solution we propose is to eject by monitoring transitions and not state values; the MDP is impractical for this purpose because its state-action diagram is too large to construct and too expensive to process.
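A minimal sketch of the eject rule in step (e) (our illustration under the stated setup; names are ours):

import numpy as np

def should_eject(prev_state, next_state, T_plus, T_minus, eps=1e-12):
    # Compare the likelihood of the observed SAMDP transition under the
    # transition matrices inferred from top- and bottom-rewarded trajectories.
    p_good = T_plus[prev_state, next_state]
    p_bad = T_minus[prev_state, next_state]
    return np.log(p_bad + eps) > np.log(p_good + eps)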
"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Boutilier, C.; Dean, T.; and Hanks, S. 1999. Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research 11:1-94.

Lee, H.; Battle, A.; Raina, R.; and Ng, A. Y. 2006. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems, 801-808.

MacQueen, J., et al. 1967. Some methods for classification and analysis of multivariate observations.

McGovern, A., and Barto, A. G. 2001. Automatic discovery of subgoals in reinforcement learning using diverse density.

Moore, A. 1991. Variable resolution dynamic programming: Efficiently learning action maps in multivariate real-valued state-spaces. In Birnbaum, L., and Collins, G., eds., Machine Learning: Proceedings of the Eighth International Conference. Morgan Kaufmann.

Ng, A. 2011. Sparse autoencoder. CS294A Lecture Notes 72:1-19.

Peng, J., and Williams, R. J. 1993. Efficient learning and planning within the Dyna framework. Adaptive Behavior 1(4):437-454.

Pitzer, B.; Styer, M.; Bersch, C.; DuHadway, C.; and Becker, J. 2011. Towards perceptual shared autonomy for robotic mobile manipulation. In IEEE International Conference on Robotics and Automation (ICRA).

Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T.; and Hassabis, D. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529:484-503.

Simsek, O., and Barreto, A. S. 2009. Skill characterization based on betweenness. In Advances in Neural Information Processing Systems, 1497-1504.

Stolle, M., and Precup, D. 2002. Learning options in reinforcement learning. Springer.

Sutton, R. S.; Precup, D.; and Singh, S. 1999. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112(1):181-211.

Von Luxburg, U. 2007. A tutorial on spectral clustering. Statistics and Computing 17(4):395-416.

Zahavy, T.; Zrihem, N. B.; and Mannor, S. 2016. Graying the black box: Understanding DQNs. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), JMLR volume 48.
Bk8BvDqex | [{"section_index": "0", "section_name": "Nicolas Heess", "section_text": "vinyals@google.com

heess@google.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "While there have been significant recent advances in deep reinforcement learning (Mnih et al., 2015; Silver et al., 2016) and control (Lillicrap et al., 2015; Levine et al., 2016), most efforts train a network that performs a fixed sequence of computations. Here we introduce an alternative in which an agent uses a metacontroller to choose which, and how many, computations to perform. It "imagines" the consequences of potential actions proposed by an actor module, and refines them internally, before executing them in the world. The metacontroller adaptively decides which expert models to use to evaluate candidate actions, and when it is time to stop imagining and act. The learned experts may be state transition models, action-value functions, or any other function that is relevant to the task, and can vary in their accuracy and computational costs. Our metacontroller's learned policy can exploit the diversity of its pool of experts by trading off between their costs and reliability, allowing it to automatically identify which expert is most worthwhile.

"}, {"section_index": "2", "section_name": "METACONTROL FOR ADAPTIVE IMAGINATION-BASED OPTIMIZATION", "section_text": "Andrew J. Ballard

Razvan Pascanu

aybd@google.com

razp@google.com

peterbattaglia@google.com

"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Many machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run, especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this "one-size-fits-all" approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of "imagined" internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call "experts") can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with "interaction networks" (Battaglia et al., 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed-policy approaches. These results demonstrate that our approach is a powerful framework for using rich forward models for efficient model-based reinforcement
learning.

We draw inspiration from research in cognitive science and neuroscience which has studied how people use a meta-level of reasoning in order to control the use of their internal models and the allocation of their computational resources. Evidence suggests that humans rely on rich generative models of the world for planning (Gläscher et al., 2010), control (Wolpert & Kawato, 1998), and reasoning (Hegarty, 2004; Johnson-Laird, 2010; Battaglia et al., 2013), that they adapt the amount of computation they perform with their model to the demands of the task (Hamrick et al., 2015), and that they trade off between multiple strategies of varying quality (Lee et al., 2014; Lieder et al., 2014; Lieder & Griffiths, in revision; Kool et al., in press).

Our imagination-based optimization approach is related to classic artificial intelligence research on bounded-rational metareasoning (Horvitz, 1988; Russell & Wefald, 1991; Hay et al., 2012), which formulates a meta-level MDP for selecting computations to perform, where the computations have a known cost. We also build on classic work by Schmidhuber (1990a;b), which used an RL controller with a recurrent neural network (RNN) world model to evaluate and improve upon candidate controls online.

Recently, Andrychowicz et al. (2016) used a fully differentiable deep network to learn to perform gradient descent optimization, and Tamar et al. (2016) used a convolutional neural network for performing value iteration online in a deep learning setting. In other similar work, Fragkiadaki et al. (2015) made use of "visual imaginations" for action planning. Our work is also related to recent notions of "conditional computation" (Bengio, 2013; Bengio et al., 2015), which adaptively modifies network structure online, and "adaptive computation time" (Graves, 2016), which allows for variable numbers of internal "pondering" iterations to optimize computational cost.

Our work's key contribution is a framework for learning to optimize via a metacontroller which manages an adaptive, imagination-based optimization loop. This represents a hybrid RL system where a model-free metacontroller constructs its decisions using an actor policy to manage model-free and model-based experts. Our experimental results demonstrate that a metacontroller can flexibly allocate its computational resources on a case-by-case basis to achieve greater performance than more rigid fixed-policy approaches, using more computation when it is required by a more difficult task.

"}, {"section_index": "4", "section_name": "2 MODEL", "section_text": "We consider a class of fully observed, one-shot decision-making tasks (i.e., continuous, contextual bandits). The performance objective is to find a control c ∈ C which, given an initial state x ∈ X, minimizes some loss function L between a known future goal state x* and the result of a forward process, f(x, c). The performance loss L_P is the (negative) utility of executing the control in the world, and is related to the optimal solution c* ∈ C as follows:

L_P(x*, x, c) = L(x*, f(x, c)),    (1)
c* = \arg\min_c L_P(x*, x, c).    (2)

However, (2) defines only the optimal solution, not how to achieve it.

"}, {"section_index": "5", "section_name": "2.1 OPTIMIZING PERFORMANCE", "section_text": "We consider an iterative optimization procedure that takes x* and x as input and returns an approximation of c* in order to minimize (1).
The optimization procedure consists of a controller, which iteratively proposes controls, and an expert, which evaluates how good those controls are. On the nth iteration, the controller C : X × X × H → C takes as input x*, x, and information about the history of previously proposed controls and evaluations, h_{n−1} ∈ H, and returns a proposed control c_n that aims to improve on previously proposed controls. An expert E : X × X × C → E takes the proposed control and provides some information e_n ∈ E about the quality of the control, which we call an opinion. This opinion is added to the history, which is passed back to the controller, and the loop continues for N steps, after which a final control c_N is proposed.

Figure 1: Metacontroller architecture and task. A: All components are part of the metacontroller agent (box) except the scene and the world, which are part of the agent's environment. The manager takes the scene and history and determines which action to take (i.e., whether to execute or ponder, and with which expert to ponder), denoted by the orange lines. The controller takes the scene and history and computes a control (e.g., the force to apply to a spaceship), denoted by the blue lines. The orange line ending with a circle at the switch reflects the fact that the manager's action affects the behavior of the switch, which routes the controller's control to either an expert (e.g., a simulation model of the spaceship's trajectory, an action-value function, etc.) or the world. The outcome and reward from the expert, along with the history, action, and control, are fed into the memory, which produces the next history. The history is fed back to the controller on the next iteration in order to allow it to propose controls based on what it has already tried. B-C: Scenes consisted of a number of planets (depicted here by colored circles) of different masses, as well as a spaceship (also with a variable mass). The task was to apply a force to the spaceship for one time step of simulation (depicted here as a solid red arrow) such that the resulting trajectory (dotted red arrow) would put the spaceship at a target (bullseye) after 11 steps of simulation. The white ring of the bullseye corresponds to a performance loss of 0.12-0.15, the black ring to a loss of 0.09-0.12, the blue ring to a loss of 0.06-0.09, the red ring to a loss of 0.03-0.06, and the yellow center to a loss of 0.03 or less. B depicts an easy, 1-planet scene, while C depicts a very difficult 5-planet scene.

Standard optimization methods use principled heuristics for proposing controls. In gradient descent, for example, controls are proposed by adjusting c_n in the direction of the gradient of the reward with respect to the control. In Bayesian optimization, controls are proposed based on selection criteria such as "probability of improvement", or a meta-selection criterion for choosing among several basic selection criteria (Hoffman et al., 2011; Shahriari et al., 2014). Rather than choosing one of several controllers, our work learns a single controller and instead focuses on selecting from multiple experts (see Sec. 2.2). In some cases f is known and inexpensive to compute, and thus the optimization procedure sets E = f. However, in many real-world settings, f is expensive or non-stationary, and so it can be advantageous to use an approximation of f (e.g., a state transition model), of L_P (e.g., an action-value function), or of any other quantity that gives some information about
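To make the controller-expert loop described at the start of this subsection concrete, here is a minimal sketch of one episode of the iterative optimization (our illustration, not the paper's code; controller and expert are assumed to be callables with the signatures described above):

def optimize(x_goal, x, controller, expert, n_steps):
    # Iteratively propose controls and collect expert opinions in a history.
    history = []
    for _ in range(n_steps):
        control = controller(x_goal, x, history)   # propose c_n
        opinion = expert(x_goal, x, control)       # evaluate: opinion e_n
        history.append((control, opinion))         # grow the history h_n
    return controller(x_goal, x, history)          # final control c_N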
f or L_P.

Given a controller and one or more experts, there are two important decisions to be made. First, how many optimization iterations should be performed? The approximate solution usually improves with more iterations, but each iteration costs computational resources. However, most traditional optimizers either ignore the cost of computation or select the number of iterations using simple heuristics. Because they do not balance the cost of computation against the performance loss, the overall effectiveness of these approaches is subject to the skill and preferences of the practitioners who use them. Second, which expert should be used on each step of the optimization? Some experts may be accurate but expensive to compute in terms of time, energy and/or money, while others may be crude, yet cheap. Moreover, the reliability of the experts may not be known a priori, further limiting the effectiveness of the optimization procedure. Our use of a metacontroller addresses these issues by jointly optimizing over the choices of how many steps to take and which experts to use.

We consider a family of optimizers which use the same controller, C, but vary in their expert evaluators, {E_1, ..., E_K}. Assuming that the controller and experts are deterministic functions, the number of iterations N and the sequence of experts k = (k_1, ..., k_{N−1}) exactly determine the final control and performance loss L_P. This means we have transformed the performance optimization over c into an optimization over N and k: (N, k)* = \arg\min_{N,k} L_P(x*, x, c(N, k, x, x*)), where the notation c(N, k, x, x*) is used to emphasize that the control is a function of N, k, x, and x*.

If each optimizer has an associated computational cost τ_k, then N and k also exactly determine the total loss, the sum of L_P and L_R, each of which are functions of N and k:

L_T(x*, x, N, k) = L(x*, f(x, c_N)) + \sum_{n=1}^{N-1} \tau_{k_n},    (3)

and the optimal solution is defined as (N, k)* = \arg\min_{N,k} L_T(x*, x, N, k). Optimizing L_T is difficult because of the recursive dependency on the history, h_{N−1}, and because the discrete choices of N and k mean L_T is not differentiable.

To optimize L_T we recast it as an RL problem where the objective is to jointly optimize task performance and computational cost. As shown in Figure 1a, the metacontroller agent a^M is comprised of a controller C, a pool of experts {E_1, ..., E_K}, a manager M, and a memory μ. The manager is a meta-level policy (Russell & Wefald, 1991; Hay et al., 2012) over actions indexed by k, which determine whether to terminate the optimization procedure (k = 0) or to perform another iteration of the optimization procedure with the kth expert. Specifically, on the nth iteration the controller produces a new control c_n based on the history of controls, experts, and evaluations. The manager, also relying on this history, independently decides whether to end the optimization procedure (i.e., to execute the control in the world) or to perform another iteration and evaluate the proposed control with the kth expert (i.e., to ponder, after Graves (2016)). The memory then updates the history h_n by concatenating k_n, c_n, and e_n with the previous history h_{n−1}.
Coming back to the notion of imagination-based optimization, we suggest that this iterative optimization process is analogous to imagining what will happen (using one or more approximate world models) before actually executing that action in the world. For further details, see Appendix A; for an algorithmic illustration of the metacontroller agent, see Algorithm 1 in the appendix.

We also define two special cases of the metacontroller for baseline comparisons. The iterative agent, a^I, does not have a manager and uses only a single expert; its number of iterations is pre-set to a single N. The reactive agent, a^R, is a special case of the iterative agent where the number of iterations is fixed to N = 0. This implies that proposed controls are executed immediately in the world and are not evaluated by an expert. For algorithmic illustrations of the iterative and reactive agents, see Algorithms 2 and 3 in the appendix.

"}, {"section_index": "6", "section_name": "2.3 NEURAL NETWORK IMPLEMENTATION", "section_text": "We use standard deep learning building blocks, e.g., multi-layer perceptrons (MLPs), RNNs, etc., to implement the controller, experts, manager, and memory, because they are effective at approximating complex functions via gradient-based and reinforcement learning, but other approaches could be used as well. In particular, we constructed our implementation to be able to make control decisions in complex dynamical systems, such as controlling the movement of a spaceship (Figure 1b-c), though we note that our approach is not limited to such physical reasoning tasks. Here we used mean-squared error (MSE) for our L, and Adam (Kingma & Ba, 2014) as the training optimizer.

Experts: We implemented the experts as MLPs and "interaction networks" (INs) (Battaglia et al., 2016), which are well-suited to predicting complex dynamical systems like those in our experiments below. Each expert has parameters θ_{E_k}, i.e., e_n = E_k(x*, x, c_n; θ_{E_k}), and may be trained either on-policy using the outputs of the controller (as is the case in this paper), or off-policy from any data that pairs states and controls with future states or reward outcomes. The objective L_{E_k} for each expert may be different depending on what the expert outputs. For example, the objective could be the loss between the goal and future states, L_{E_k} = ℓ(f(x, c), E_k(x*, x, c; θ_{E_k})), which is what we use in our experiments. Or, it could be the loss between L_P and an action-value function that predicts L_P directly, L_{E_k} = ℓ(L_P(x*, x, c), E_k(x*, x, c; θ_{E_k})). See Appendix B.1 for details.

Controller and Memory: We implemented the controller as an MLP with parameters θ_C, i.e., c_n = C(x*, x, h_{n−1}; θ_C), and we implemented the memory as a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) with parameters θ_μ. The memory embeds the history as a fixed-length vector, i.e., h_n = μ(h_{n−1}, k_n, c_n, E_{k_n}(x*, x, c_n); θ_μ). The controller and memory were trained jointly to optimize (1). However, this objective includes f, which is often unknown or not differentiable. We overcame this by approximating L_P with a differentiable critic, analogous to those used in policy gradient methods (e.g., Silver et al., 2014; Lillicrap et al., 2015; Heess et al., 2015). See Appendices B.2 and B.3 for details.

Manager: We implemented the manager as a stochastic policy that samples from a categorical distribution whose weights are produced by an MLP with parameters θ_M, i.e., k_n ∼ Categorical(k; M(x*, x, h_{n−1}; θ_M)).
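As a concrete illustration of this parametrization, a minimal sketch of the manager's sampling step (ours, not the paper's TensorFlow implementation; names are illustrative):

import numpy as np

def manager_sample(logits, rng=np.random.default_rng()):
    # logits: (K + 1,) unnormalized scores from the manager MLP;
    # index 0 means "execute", indices 1..K mean "ponder with expert k".
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)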
We trained the manager to minimize (3) using REINFORCE (Williams, 1992), but other deep RL algorithms could be used instead. See Appendix B.4 for details.

To evaluate our metacontroller agent, we measured its ability to learn to solve a class of physics-based tasks that are surprisingly challenging. Each episode consisted of a scene which contained a spaceship and multiple planets (Figure 1b-c). The spaceship's goal was to rendezvous with its mothership near the center of the system in exactly 11 time steps, but it only had enough fuel to fire its thrusters once. The planets were static, but the gravitational force they exerted on the spacecraft induced complex non-linear dynamics on the motion over the 11 steps. The spacecraft's action space was continuous, up to some maximum magnitude, and represented the instantaneous Cartesian velocity vector imparted by its thrusters.  Further details are in Appendix C.

We trained the reactive, iterative, and metacontroller agents on five versions of the spaceship task involving different numbers of planets.¹ The iterative agent was trained to take anywhere from zero (i.e., the reactive agent) to ten ponder steps. The metacontroller was allowed to take a maximum of ten ponder steps. We considered three different experts, which were all differentiable: an MLP expert, which used an MLP to predict the final location of the spaceship; an IN expert, which used an interaction network (Battaglia et al., 2016) to predict the full trajectory of the spaceship; and a true simulation expert, which was the same as the world model. In some conditions the metacontroller could use exactly one expert, and in others it was allowed to select between the MLP and IN experts. For experiments with the true simulation expert, we used it to backpropagate gradients to the controller and memory. For experiments with an MLP as the only expert, we used a learned IN as the critic. For experiments with an IN as one of its experts, the critic was an IN with shared parameters. We trained the metacontroller on a range of different ponder costs, τ_k, for the different experts. Further details of the training procedure are available in Appendix D.

¹Available from: https://www.github.com/deepmind/spaceship_dataset

"}, {"section_index": "7", "section_name": "3.1 REACTIVE AND ITERATIVE AGENTS", "section_text": "Figure 2 shows the performance on the test set of the reactive and iterative agents for different numbers of ponder steps. The reactive agent performed poorly on the task, especially when the task was more difficult. With the five-planet dataset, it was only able to achieve a performance loss of 0.583 on average (see Figure 1 for a depiction of the magnitude of the loss). In contrast, the iterative agent with the true simulation expert performed much better, reaching ceiling performance on the datasets with one and two planets, and achieving a performance loss of 0.0683 on the five-planet dataset. The IN and MLP experts also improve over the reactive agent, with minimum performance losses of 0.117 and 0.375 on the five-planet dataset, respectively.
Figure 2: Test performance of the reactive and iterative agents. Each line corresponds to the performance of an iterative agent (either the true simulation expert, the MLP expert, or the interaction net expert) trained for a fixed number of ponder steps on one of the five datasets; the line color indicates which dataset the controller was trained on. In all cases, performance refers to the performance loss, L_P. Left: the MLP expert struggles with the task due to its limited expressivity, but still benefits from pondering. Middle: the IN expert performs almost as well as the true simulation expert, even though it is not a perfect model. Right: the true simulation expert does quite well on the task, especially with multiple ponder steps.

Figure 2 also highlights how important the choice of expert is. When using the true simulation and IN experts, the iterative agent performs well. With the MLP expert, however, performance is substantially diminished. But despite the poor performance of the MLP expert, there is still some benefit to pondering with it. With even just a few steps, the MLP iterative agent outperforms its reactive counterpart. However, comparing the reactive agent with the N = 1 iterative agent is somewhat unfair, because the iterative agent has more parameters due to the expert and the memory. However, given that there also tends to be an increase in performance between one and two ponder steps (and beyond), it is clear that pondering, even with a highly inaccurate model, can still lead to better performance than a model-free reactive approach.

"}, {"section_index": "8", "section_name": "3.2 METACONTROLLER WITH ONE EXPERT", "section_text": "Though the iterative agents achieve impressive results, they expend more computation than necessary. For example, in the one- and two-planet conditions, the performance of the IN and true simulation iterative agents received little benefit from pondering more than two or three steps, while for the four- and five-planet conditions they required at least five to eight steps before their performance converged. When computational resources have no cost, the number of steps is of no concern, but when they have some cost it is important to be economical.

Because the metacontroller learns to choose its number of pondering steps, it can balance its performance loss against the cost of computation. Figure 3 (top row, middle and right subplots) shows that the IN and true simulation expert metacontrollers take fewer ponder steps as τ increases, tracking closely the minimum of the iterative agent's cost curve (i.e., the metacontroller points are always near the iterative agent curves' minima). This adaptive behavior emerges automatically from the manager's learned policy, and avoids the need to perform a hyperparameter search to find the best number of iterations for a given τ.

The metacontroller does not simply choose an average number of ponder steps to take per episode; it actually tailors this choice to the difficulty of each episode. Figure 4 shows how the number of ponder steps the IN metacontroller chooses in each episode depends on that episode's difficulty, as measured by the episode's loss under the reactive agent.
For more difficult episodes, the metacontroller tends to take more ponder steps, as indicated by the positive slopes of the best-fit lines, and this proportionality persists across the different levels of τ in each subplot.

Figure 3: Test performance of the metacontroller with a single expert on the five-planet dataset. Each column corresponds to a different expert. The lines indicate the performance of the iterative agents for different numbers of ponder steps. The points indicate the performance of the metacontroller, with each point corresponding to a different value of τ. The x-coordinate of each point is an average across the number of ponder steps, and the y-coordinate is the average loss. Top row: here we show the total cost rather than just performance on the task (i.e., including computation cost). Different colors show the result for different τ, with the different lines showing the cost for the same iterative controller under different values of τ. The error bars (for the metacontroller) indicate 2.5% and 97.5% confidence intervals. When a point is below its corresponding curve, it means that the metacontroller was able to achieve a better speed-accuracy trade-off than that achievable by the iterative agent. Line colors of increasing brightness correspond to increasing τ, with τ values taken from {0, 0.0134, 0.0354, 0.0576, 0.0934, 0.152, 0.246}. Bottom row: here we show just the performance loss (i.e., without computational cost). Each point corresponds to a different value of τ. The fact that the points are below the curve means the metacontroller agent learns to perform better than the iterative agent with the equivalent number of ponder steps.

The ability to adapt its choice of the number of ponder steps on a per-episode basis is very valuable, because it allows the metacontroller to spend additional computation only on those episodes which require it. The total costs of the IN and true simulation metacontrollers are 11% and 15% lower (median) than the best achievable costs of their corresponding iterative agents, respectively, across the range of τ values we tested (see Figure 7 in the Appendix for details).

There can even be a benefit to using a metacontroller when there are no computational resource costs. Consider the rightmost points in Figure 3 (bottom row, middle and right subplots), which show the performance loss for the IN and true simulation metacontrollers when τ is low. Remarkably, these points still outperform the best achievable iterative agents. This suggests that there can be an advantage to stopping pondering once a good solution is found, and more generally demonstrates that the metacontroller's learning process can lead to strategies that are superior to those available to less flexible agents.

The metacontroller with the MLP expert had very poor average performance and high variance on
the five-planet condition (Figure 3, top left subplot), which is why we restricted our focus in this section to how the metacontrollers with IN and true simulation experts behaved. The MLP's poor performance is crucial, however, for the following section (3.3), which analyzes how a multiple-expert metacontroller manages experts that vary greatly in their reliability.

When we allow the manager to additionally choose between two experts, rather than relying on only a single expert, we find a similar pattern of results in terms of the number of ponder steps (Figure 5, left). Additionally, the metacontroller is successfully able to identify the more reliable IN network, and consequently uses it a majority of the time, except in a few cases where the cost of the IN network is extremely high relative to the cost of the MLP network (Figure 5, right). This pattern of results makes sense given the good performance (described in the previous section) of the metacontroller with the IN expert compared to the poor performance of the metacontroller with the MLP expert. The manager should not generally rely on the MLP expert because it is simply not a reliable source of information.

However, the metacontroller has more difficulty finding an optimal balance between the two experts on a step-by-step basis: the addition of a second expert did not yield much of an improvement over the single-expert metacontroller, with only 9% of the different versions (trained with different τ values for the two experts) achieving a lower loss than the best iterative controller. We believe the mixed performance of the metacontroller with multiple experts is partially due to an entropy term which we used to encourage the manager's policy to be non-deterministic (see Appendix B.4). In particular, for high values of τ, the optimal thing to do is to always execute immediately without pondering. However, because of the entropy term, the manager is encouraged to have a non-deterministic policy, and is therefore likely to ponder more than it should, and to use experts that are more unreliable, even when this is suboptimal in terms of the total loss (3).

Despite the fact that the metacontroller with multiple experts does not result in a substantial improvement over that which uses a single expert, we emphasize that the manager is able to identify and use the more reliable expert the majority of the time. And, it is still able to choose a variable number of steps according to how difficult the task is (Figure 5, left). This, in and of itself, is an improvement over more traditional optimization methods, which would require that the expert is hand-picked ahead of time and that the number of steps is determined heuristically.
[Figure 4 panel annotations: low ponder cost (τ = 0.01): slope = 1.214 [0.992, 1.450], correlation = 0.278 [0.227, 0.326]; medium ponder cost (τ = 0.06): slope = 1.533 [1.316, 1.767], correlation = 0.482 [0.428, 0.536]; high ponder cost (τ = 0.25): slope = 0.456 [0.398, 0.515], correlation = 0.526 [0.471, 0.581].]

Figure 4: Relationship between the number of ponder steps and per-episode difficulty for the IN metacontroller. Each subplot's x-axis represents the episode difficulty, as measured by the reactive controller's loss. Each y-axis represents the number of ponder steps the metacontroller took. The points are individual episodes, and the line is the best-fit regression line with 95% confidence intervals. The different subplots show different values of τ (labeled in the title). In each case, there is a clear positive relationship between the difficulty of the task and the number of ponder steps, suggesting that the metacontroller learns to spend more time on hard problems and less time on easier problems. At the bottom of each plot are the fitted slope and correlation coefficient values, along with their 95% confidence intervals in brackets.

Figure 5: Test performance of the metacontroller with multiple experts on the five-planet dataset. Left: the average number of total ponder steps, for different values of τ. As with the single-expert metacontrollers, fewer ponder steps are taken when the cost is very high, and more are taken when the cost is low. Right: the fraction of ponder steps taken by the MLP expert relative to the IN expert. In the majority of cases, the metacontroller favors using the IN expert, as it is much more reliable. The few exceptions (red squares) are cases where the cost of the IN expert is much higher relative to the cost of the MLP expert.

"}, {"section_index": "9", "section_name": "4 DISCUSSION", "section_text": "In this paper, we have presented an approach to adaptive, imagination-based optimization in neural networks. Our approach is able to flexibly choose which computations to perform, as well as how many computations need to be performed, approximately solving a speed-accuracy trade-off that depends on the difficulty of the task. In this way, our approach learns to rely on whatever source of information is most useful and most efficient. Additionally, by consulting the experts on the fly, our approach allows agents to test out actions to ensure that their consequences are not disastrous before actually executing them.

While the experiments in this paper involve a one-shot decision task, our approach lays a foundation that can be built upon to support more complex situations. For example, rather than applying a force only on the first time step, we could turn the problem into one of trajectory optimization for continuous control by asking the controller to produce a sequence of forces.
In the case of planning, our approach could potentially be combined with methods like Monte Carlo Tree Search (MCTS) (Coulom, 2006), where our experts would be akin to having several different rollout policies to choose from, and our controller would be akin to the tree policy. While most MCTS implementations will run rollouts until a fixed amount of time has passed, our approach would allow the manager to adaptively choose the number of rollouts to perform and which policies to perform the rollouts with. Our method could also be used to naturally augment existing model-free approaches such as DQN (Mnih et al., 2015) with online model-based optimization, by using the model-free policy as a controller and adding additional experts in the form of state-transition models. An interesting extension would be to compare our metacontroller architecture with a naive model-based controller that performs gradient-based optimization to produce the final control. We expect our metacontroller architecture might require fewer model evaluations and be more robust to model inaccuracies compared to the gradient-based method, because our method has access to the full history of proposed controls and evaluations, whereas traditional gradient-based methods do not.

Although we rely on differentiable experts in our metacontroller architecture, we do not utilize the gradient information from these experts. An interesting extension to our work would be to pass this gradient information through to the manager and controller (as in Andrychowicz et al. (2016)), which would likely improve performance further, especially in the more complex situations discussed here. Another possibility is to train some or all of the experts inline with the controller and metacontroller, rather than independently, which could allow their learned functionality to be more tightly integrated with the rest of the optimization loop, at the expense of their generality and ability to be repurposed for other uses.

To conclude, we have demonstrated how neural network-based agents can use metareasoning to adaptively choose what to think about, how to think about it, and for how long to think. Our method is directly inspired by human cognition and suggests a way to make agents much more flexible and adaptive than they currently are, both in decision-making tasks such as the one described here, as well as in planning and control settings more broadly.

"}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Matt Hoffman, Andrea Tacchetti, Tom Erez, Nando de Freitas, Guillaume Desjardins, Joseph Modayil, Hubert Soyer, Alex Graves, David Reichert, Theo Weber, Jon Scholz, Will Dabney, and others on the DeepMind team for helpful discussions and feedback.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv:1606.04474, 2016.

Bobak Shahriari, Ziyu Wang, Matthew W Hoffman, Alexandre Bouchard-Côté, and Nando de Freitas. An entropy search portfolio for Bayesian optimization. arXiv:1406.4625, 2014.

D. M. Wolpert and M. Kawato. Multiple paired forward and inverse models for motor control. Neural Networks, 11(7-8):1317-1329, 1998.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning.
arXiv:1509.02971, 2015.

"}, {"section_index": "12", "section_name": "METACONTROLLER DETAILS", "section_text": "Here, we give the precise definitions of the metacontroller agent. As described in the main text, the iterative and reactive agents are special cases of the metacontroller agent. The metacontroller agent is comprised of the following components:

A history-sensitive controller, C : X × X × H → C, which is a policy that maps goal and initial states, and a history h ∈ H, to controls, and whose aim is to minimize (1).

A pool of experts, {E_1, ..., E_K}. Each expert E : X × X × C → E maps goal states, input states, and actions to opinions. Opinions can be either states-only (E = X), states and rewards (E = X × R), or rewards-only (E = R). The expert corresponds to the evaluator for the optimization routine, i.e., an approximation of the forward process f.

A manager, M : X × X × H_n → {0, ..., K}, which is a policy that decides whether to send a proposed control to the world (k = 0) or to the kth expert for evaluation, in order to minimize (3). This formulation is based on that used by metareasoning systems (Russell & Wefald, 1991; Hay et al., 2012). Details on the corresponding MDP are given in Appendix A.1.

A memory, μ : H_{n−1} × Z → H_n, which is a function that maps the prior history h_{n−1} ∈ H_{n−1}, as well as the most recent manager choice, proposed control, and expert evaluation, (k, c, e) ∈ {0, ..., K} × C × E = Z, to an updated history h_n ∈ H_n, which is then made available to the manager and controller on subsequent iterations. The history at step n is a recursively defined tuple which is the concatenation of the prior history with the most recently proposed control, expert evaluation, and expert identity: h_n = h_{n−1} ⊕ ((k_n, c_n, E_{k_n}(x*, x, c_n))) = ((k_1, c_1, E_{k_1}(x*, x, c_1)), ..., (k_n, c_n, E_{k_n}(x*, x, c_n))), where h_0 = () represents an empty initial history. Similarly, the finite set of histories up to step n is H_n = H_{n−1} × Z = Z^n, where H_0 = {()}.

The metacontroller agent as a whole computes

a^M(x*, x) = C(x*, x, h_{N−1}) = c_N,

where N = n s.t. k_n = 0. This function is summarized in Algorithm 1. The other agents (iterative and reactive), as mentioned in the main text, are simpler versions of the metacontroller agent and are summarized in Algorithms 2 and 3.

"}, {"section_index": "13", "section_name": "A.1 META-LEVEL MDP", "section_text": "To implement the manager for the metacontroller agent, we draw inspiration from the metareasoning literature (Russell & Wefald, 1991; Hay et al., 2012) and formulate the problem as a finite-horizon Markov Decision Process (MDP), (S, A, P, R), over the decision of whether to perform another
iteration of the optimization procedure or to execute a control in the world.

The state space S consists of goal states, external states, and internal histories, S = X × X × H. The action space A contains K + 1 discrete actions, {0, ..., K}, which correspond to execute (k = 0) and ponder (k ∈ {1, ..., K}), where ponder (after Graves (2016)) refers to performing an iteration of the optimization procedure with the kth expert. The (deterministic) state transition model P : S × A × S → [0, 1] is

P(x', h_n | x*, x, h_{n−1}, k) = { P(x' | x*, x, h_{n−1}, k)  if k = 0;  P(h_n | x*, x, h_{n−1}, k)  otherwise },

where x' = f(x, c) and c = C(x*, x, h_{n−1}), and

P(x' | x*, x, h_{n−1}, k) = { 1  if x' = f(x, c);  0  otherwise },
P(h_n | x*, x, h_{n−1}, k) = { 1  if h_n = h_{n−1} ⊕ ((k, c, E_k(x*, x, c)));  0  otherwise }.

The reward function is

R(x*, x, h_{n−1}, k) = { −L_P(x*, x, C(x*, x, h_{n−1}))  if k = 0 (see Eq. 1);  −τ_k  otherwise (see Eq. 3) }.

We approximate the solution to this MDP with a stochastic manager policy, M. The manager chooses actions proportional to the immediate reward for taking action k in state s_n, plus the expected sum of future rewards. This construction imposes a trade-off between accuracy and resources, incentivizing the agent to ponder longer, and with more accurate (and potentially expensive) experts, when the problem is harder.

Algorithm 1 Metacontroller agent. x is the scene and x* is the target.
1: function a^M(x, x*)
2:   h_0 ← ()                              ▷ Initial empty history
3:   k_0 ← M(x, x*, h_0)                   ▷ Get an action from the manager
4:   c_0 ← C(x, x*, h_0)                   ▷ Propose a control with the controller
5:   n ← 0
6:   while k_n ≠ 0 do                      ▷ When k ≠ 0, ponder with an expert
7:     e_n ← E_{k_n}(x, x*, c_n)           ▷ Get an expert's opinion
8:     h_{n+1} ← μ(h_n, k_n, c_n, e_n)     ▷ Update the history
9:     n ← n + 1
10:    k_n ← M(x, x*, h_n)                 ▷ Choose the next action
11:    c_n ← C(x, x*, h_n)                 ▷ Propose the next control
12:  end while
13:  return c_n
14: end function

Algorithm 2 Iterative agent. x is the scene, x* is the target, and N is the number of ponder steps.
1: function a^I(x, x*, N)
2:   h_0 ← ()                              ▷ Initial empty history
3:   c_0 ← C(x, x*, h_0)                   ▷ Propose a control with the controller
4:   n ← 0
5:   while n < N do                        ▷ Ponder with an expert for N steps
6:     e_n ← E(x, x*, c_n)                 ▷ Get the expert's opinion
7:     h_{n+1} ← μ(h_n, k_n, c_n, e_n)     ▷ Update the history
8:     n ← n + 1
9:     c_n ← C(x, x*, h_n)                 ▷ Propose the next control
10:  end while
11:  return c_n
12: end function

Algorithm 3 Reactive agent. x is the scene and x* is the target.
1: function a^R(x, x*)
2:   c_0 ← C(x, x*, ())                    ▷ Propose a control with the controller
3:   return c_0
4: end function
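For readers who prefer running code, a direct Python transcription of Algorithm 1 (a sketch only; manager, controller, and experts are assumed to be callables with the interfaces defined above):

def metacontroller_agent(x, x_goal, manager, controller, experts):
    # experts: dict mapping k in 1..K to expert callables; action k = 0 executes.
    history = []
    k = manager(x, x_goal, history)
    control = controller(x, x_goal, history)
    while k != 0:                                   # ponder with expert k
        opinion = experts[k](x, x_goal, control)
        history = history + [(k, control, opinion)]
        k = manager(x, x_goal, history)
        control = controller(x, x_goal, history)
    return control                                  # execute in the world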
"}, {"section_index": "14", "section_name": "B.1 EXPERTS", "section_text": "Training the experts is a straightforward supervised learning problem (Figure 6c). The gradient is

$$\frac{\partial L_{E_k}}{\partial \theta_{E_k}} = \frac{\partial L_{E_k}}{\partial E_k} \frac{\partial E_k}{\partial \theta_{E_k}},$$

where $E_k$ is the $k$th expert and $L_{E_k}$ is the loss function for the $k$th expert. For example, in the case of an action-value function expert, this loss function might be $L_{E_k}(f, E_k) = \left( L(x^*, f(x, c)) - E_k(x^*, x, c; \theta_{E_k}) \right)^2$. In the case of an expert that predicts the final state using a model of the system dynamics, it might be $L_{E_k}(f, E_k) = \left\| f(x, c) - E_k(x^*, x, c; \theta_{E_k}) \right\|^2$."}, {"section_index": "15", "section_name": "B.2 CRITIC", "section_text": "The critic, $\hat{L}_P$, is an approximate model of the performance loss, $L_P$ (1), which is used to backpropagate gradients to the controller and memory. This means the critic can either be an action-value function, which approximates $\hat{L}_P = E_0 \approx L_P$ directly, or a model of the system dynamics composed with a known loss function between the goal and future states, $\hat{L}_P = L \circ E_0 \approx L \circ f$. We train the critic, $E_0 : \mathcal{X} \times \mathcal{X} \times \mathcal{C} \to \mathbb{R}$, using the same procedure as the experts are trained (Figure 6d). A good expert may even be used as the critic."}, {"section_index": "16", "section_name": "B.3 CONTROLLER AND MEMORY", "section_text": "As shown in Figure 6a, we trained the controller and memory using backpropagation through time (BPTT) with an actor-critic architecture. Specifically, rather than assuming $f$ is known and differentiable, we use a critic and backpropagate through it (Heess et al., 2015):

$$\frac{d^+ L}{d\theta_C} = \frac{\partial L}{\partial E_0} \frac{\partial E_0}{\partial c_N} \frac{d^+ c_N}{d\theta_C}, \qquad \frac{d^+ c_n}{d\theta_C} = \frac{\partial c_n}{\partial \theta_C} + \frac{\partial c_n}{\partial h_{n-1}} \frac{d^+ h_{n-1}}{d\theta_C},$$

$$\frac{d^+ h_n}{d\theta_C} = \frac{\partial h_n}{\partial e_n} \frac{\partial E_{k_n}}{\partial c_n} \frac{d^+ c_n}{d\theta_C} + \frac{\partial h_n}{\partial h_{n-1}} \frac{d^+ h_{n-1}}{d\theta_C},$$

and analogously for the memory parameters $\theta_\mu$, where we are using the $d^+$ notation to indicate summed gradients, following Pascanu et al. (2013). Since $k_n$ has already been produced by the manager it can be treated as a constant and will produce an unbiased estimate of the gradient. This is convenient because it allows for training the controller and manager separately, or testing the controller's behavior with arbitrary actions post-training."}, {"section_index": "17", "section_name": "B.4 MANAGER", "section_text": "As discussed in the main text, we used the REINFORCE algorithm (Williams, 1992) to train the manager (Figure 6b). One potential issue, however, is that when training the controller and manager simultaneously, the controller will result in high cost early on in training and thus the manager will learn to always choose the execute action. To discourage the manager from learning what is an essentially deterministic policy, we included a regularization term based on the entropy, $L_H$ (Williams & Peng, 1991; Mnih et al., 2016):

$$\frac{\partial}{\partial \theta_M} \mathbb{E}_{k \sim M}[r] \approx \mathbb{E}_{k \sim M}\!\left[ r \, \frac{\partial \log M}{\partial \theta_M} \right] - \lambda \frac{\partial L_H}{\partial \theta_M},$$

where $r$ is the full return given by (3) and $\lambda$ is the strength of the regularization term.
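As a concrete illustration of this update, the sketch below computes a single-sample, entropy-regularized REINFORCE gradient for a softmax manager over action logits. The function and variable names are assumptions for the sketch, and the single-episode estimator is a simplification of the batched update used in practice.

import numpy as np

def manager_reinforce_grad(logits, sampled_k, episode_return, lam=0.2):
    """One-sample REINFORCE gradient w.r.t. softmax manager logits,
    plus the gradient of an entropy bonus that discourages determinism."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    onehot = np.zeros_like(probs)
    onehot[sampled_k] = 1.0
    grad_logp = onehot - probs                 # d log pi(k) / d logits for softmax
    H = -(probs * np.log(probs)).sum()         # policy entropy
    grad_entropy = -probs * (np.log(probs) + H)  # dH / d logits (sums to zero)
    return episode_return * grad_logp + lam * grad_entropy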
We generated five datasets, each containing scenes with a different number of planets (ranging from a single planet to five planets). Each dataset consisted of 100,000 training scenes and 1,000 testing scenes. The target in each scene was always located at the origin, and each scene always had a sun with a mass of 100 units. The sun was located between 100 and 200 distance units away from the target, with this distance sampled uniformly at random. The other planets had a mass between 2 and 50 units, and were located 100 to 250 distance units away from the target, sampled uniformly at random. The spaceship had a mass between 1 and 9 units, and was located 150 to 250 distance units away from the target. The planets were always fixed (i.e., they could not move), and the spaceship always started at the beginning of each episode with zero velocity."}, {"section_index": "18", "section_name": "C.2 ENVIRONMENT", "section_text": "We simulated our scenes using a physical simulation of gravitational dynamics. The planets were always stationary (i.e., they were not acted upon by any of the objects in the scene) but acted upon the spaceship with a force of

$$F_p = \frac{G\, m_p\, m_s}{r^2} \cdot \frac{x_p - x_s}{r},$$

where $F_p$ is the force vector of the planet on the spaceship, $G = 1000000$ is a gravitational constant, $m_p$ is the mass of the planet, $m_s$ is the mass of the spaceship, $r$ is the distance between the centers of masses of the planet and the spaceship, $x_p$ is the location of the planet, and $x_s$ is the location of the spaceship. We simulated this environment using the Euler method, i.e.:

$$a_s = \frac{\sum_p F_p - d\, v_s + c}{m_s}, \qquad x_s \leftarrow x_s + \epsilon\, v_s, \qquad v_s \leftarrow v_s + \epsilon\, a_s,$$

where $a_s$, $v_s$, and $x_s$ are the acceleration, velocity, and position of the spaceship, respectively; $d = 0.1$ is a damping constant; $c$ is the control force applied to the spaceship; and $\epsilon$ is the step size. Note that we set $c$ to zero for all timesteps except the first.
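A self-contained Python sketch of this simulator follows, using the constants stated above (G = 1000000, d = 0.1) and the Euler update order given in the equations; the function names and the example scene at the bottom are illustrative.

import numpy as np

G, DAMPING = 1_000_000.0, 0.1

def gravity(planet_pos, planet_mass, ship_pos, ship_mass):
    """Force of a fixed planet on the spaceship (Newtonian gravity)."""
    delta = planet_pos - ship_pos
    r = np.linalg.norm(delta)
    return G * planet_mass * ship_mass * delta / r**3

def simulate(planets, ship_pos, ship_vel, ship_mass, control, steps=11, eps=0.05):
    """Euler integration; the control force is applied on the first step only."""
    for t in range(steps):
        force = sum(gravity(p, m, ship_pos, ship_mass) for p, m in planets)
        c = control if t == 0 else np.zeros_like(control)
        accel = (force - DAMPING * ship_vel + c) / ship_mass
        ship_pos = ship_pos + eps * ship_vel
        ship_vel = ship_vel + eps * accel
    return ship_pos

# Example: a single sun of mass 100 at distance 150 from the target at the origin.
planets = [(np.array([150.0, 0.0]), 100.0)]
final_pos = simulate(planets, np.array([-200.0, 0.0]), np.zeros(2), 5.0,
                     control=np.array([50.0, 10.0]))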
"}, {"section_index": "19", "section_name": "D IMPLEMENTATION DETAILS", "section_text": "We used TensorFlow (Abadi et al., 2015) to implement and train all versions of the model.

In our implementation of the controller, we used a two-layer MLP with 100 units in each layer. The first layer used ReLU activations and the second layer used a multiplicative interaction similar to van den Oord et al. (2016), which we found to work better in practice. In our implementation of the memory, we used a single LSTM layer of size 100. In our implementation of the manager, we used an MLP of two fully connected layers of 100 units each, with ReLU nonlinearities.

We constructed three different experts to test the various controllers. The true simulation expert was the same as the world model, and consisted of a simulation for 11 timesteps with $\epsilon = 0.05$ (see Appendix C). The IN expert was an interaction network (Battaglia et al., 2016), which has previously been shown to be able to learn to predict n-body dynamics accurately for simple systems. The IN consists of a relational module and an object module. In our case, the relational module was composed of 4 hidden layers of 150 nodes each, outputting "effects" encodings of size 100. These effects, together with the relational model input, are then used as input to the object model, which contained a single hidden layer of 100 nodes. The object model outputs the velocity of the spaceship, and we trained it to predict the velocity on every timestep of the spaceship's trajectory. The MLP expert was an MLP that predicted the final location of the spaceship and had the same architecture as the controller.

As discussed in Appendix B, we used a critic to train the controller and memory. We always used the IN expert as the critic, except in the case when the true simulation expert was used, in which case we also used the true simulation as the critic."}, {"section_index": "20", "section_name": "D.2 TRAINING PROCEDURE", "section_text": "We trained the controller and memory together using the Adam optimizer (Kingma & Ba, 2014) with gradients clipped to a maximum global norm of 10 (Pascanu et al., 2013). The manager was trained simultaneously, but using a different learning rate than the controller and memory. The IN and MLP experts were also trained simultaneously, but again with different learning rates. Learning rates were determined using a grid search over a small number of values, and are given in Table 1 for the iterative agent, in Table 2 for the metacontroller with one expert, and in Table 3 for the metacontroller with two experts.

All weights were initialized uniformly at random between 0 and 0.01. An iteration of training consisted of gradient updates over a minibatch of size 1000; in total, we ran training for 100,000 iterations. We additionally used a waterfall schedule for each of the learning rates during training, such that after 1000 iterations, if the loss was not decreasing, we would decay the step size by 5%.

The iterative agent was trained to take a fixed number of ponder steps, ranging from 0 (i.e., the reactive agent) to 10. The metacontrollers were allowed to take a variable number of ponder steps, up to a maximum of 10. For the metacontroller with a single expert, we trained the manager using $\tau = 0$ and 20 additional values of $\tau$ spaced logarithmically between 0.00004 and 0.4 (inclusive). For the metacontroller with multiple experts, we trained the manager on a grid of pairs of $\tau$ values, where each expert could have $\tau = 0$ or one of 6 values spaced logarithmically between 0.00004 and 0.2 (inclusive). In all cases, the entropy penalty for the metacontroller was $\lambda = 0.2$."}, {"section_index": "21", "section_name": "D.3 CONVERGENCE", "section_text": "Iterative agent. For the iterative agent with the interaction network or true simulation experts, convergence was reliable for small numbers of ponder steps. Convergence was somewhat less reliable for larger numbers of ponder steps. We believe this is because for some scenes, a larger number of ponder steps was more than necessary to solve the task (as is evidenced by the plateauing performance in Figure 2). So, the iterative agent had to effectively "remember" what the best control was while it took the last few ponder steps, which is a more complicated and difficult task to perform.

For the iterative agent with the MLP expert, convergence was more variable, especially when the task was harder, as can be seen in the variable performance on the five planets dataset in Figure 2 (left). We believe this is because the MLP agent was so poor, and that convergence would have been more reliable with a better agent.

Metacontroller with a single expert. The metacontroller agent with a single expert converged more reliably than the corresponding iterative agent (see the bottom row of Figure 3). As mentioned in the previous paragraph, the iterative agent had to take more steps than actually necessary, causing it to perform less well for larger numbers of ponder steps, whereas the metacontroller agent had the flexibility of stopping when it had found a good control. On the other hand, we found that the metacontroller agent sometimes performed too many ponder steps for large values of $\tau$ (see Figures 3 and 7). We believe this is due to the entropy term ($\lambda$) added to the REINFORCE loss. This is because when the ponder cost is very high, the optimal thing to do is to behave deterministically and always execute (never ponder); however, the entropy term encouraged the policy to be nondeterministic.
W plan to explore different training regimes in future work to alleviate this problem, for example by annealing the entropy term to zero over the course of training.\nMetacontroller with multiple experts. The metacontroller agent with multiple experts was some- what more difficult to train, especially for high ponder cost of the interaction network expert. For example, note how the proportion of steps using the MLP expert does not decrease monotonically in Figure 5 (right) with increasing cost for the MLP expert. We believe this is also an unexpected result of using the entropy term: in all of these cases, the optimal thing to do actually is to rely on the MLP expert 100% of the time, yet the entropy term encourages the policy to be non-deterministic. Future work will explore these difficulties further by using experts that complement each other better (i.e. so there is not one that is wholly better than the other).\nExperts. The experts themselves always converged quickly and reliably, and trained much faste than the rest of the network.."}, {"section_index": "22", "section_name": "REFERENCES", "section_text": "Table 1: Hyperparameter values for the iterative controller. Qc refers to the learning rate for the controller and memory, while E1n refers to the learning rate for the IN expert, and Emrp refers to. the learning rate for the MLP expert..\nTrue sim. MLP IN Dataset # Ponder Steps Qc Qc QEIN QEMLP ac QEIN one planet 0 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 one planet 1 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 one planet 2 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 one planet 3 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 one planet 4 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 one planet 5 1e-03 1e-03 3e-03 5e-04 5e-04 1e-03 one planet 6 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 one planet 7 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 one planet 8 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 one planet 9 5e-04 1e-03 3e-03 5e-04 5e-04 1e-03 one planet 10 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 two planets 0 1e-03 1e-03 3e-03 1e-03 3e-03 3e-03 two planets 1 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 two planets 2 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 two planets 3 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 4 1e-03 two planets 1e-03 3e-03 1e-03 1e-03 1e-03 two planets 5 1e-03 1e-03 1e-03 1e-03 1e-03 1e-03 two planets 6 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 two planets 7 5e-04 1e-03 3e-03 5e-04 5e-04 1e-03 two planets 8 1e-03 1e-03 3e-03 5e-04 5e-04 1e-03 two planets 9 1e-03 1e-03 3e-03 5e-04 3e-03 3e-03 two planets 10 5e-04 1e-03 3e-03 1e-03 5e-04 1e-03 three planets 0 1e-03 1e-03 3e-03 1e-03 1e-03 3e-03 three planets 1 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 three planets 2 1e-03 5e-04 3e-03 1e-03 1e-03 1e-03 three planets 3 1e-03 1e-03 1e-03 5e-04 1e-03 1e-03 three planets 4 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 three planets 5 1e-03 1e-03 1e-03 5e-04 5e-04 1e-03 three planets 6 1e-03 5e-04 3e-03 5e-04 1e-03 1e-03 three planets 7 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 three planets 8 1e-03 1e-03 3e-03 1e-03 5e-04 1e-03 three planets 9 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 three planets 10 1e-03 5e-04 3e-03 1e-03 1e-03 1e-03 four planets 0 1e-03 5e-04 3e-03 5e-04 1e-03 1e-03 four planets 1 1e-03 5e-04 3e-03 1e-03 1e-03 1e-03 four planets 2 1e-03 5e-04 3e-03 1e-03 1e-03 1e-03 four planets 3 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 four planets 4 1e-03 5e-04 3e-03 1e-03 1e-03 1e-03 four planets 5 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 four planets 6 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 four planets 7 5e-04 1e-03 1e-03 1e-03 1e-03 1e-03 four planets 8 5e-04 1e-03 3e-03 1e-03 1e-03 
1e-03
four planets 9 1e-03 1e-03 3e-03 1e-03 5e-04 1e-03
four planets 10 1e-03 1e-03 3e-03 1e-03 5e-04 1e-03
five planets 0 1e-03 1e-03 3e-03 5e-04 1e-03 3e-03
five planets 1 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03
five planets 2 5e-04 1e-03 3e-03 5e-04 1e-03 1e-03
five planets 3 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03
five planets 4 5e-04 1e-03 3e-03 5e-04 1e-03 1e-03
five planets 5 1e-03 5e-04 3e-03 1e-03 1e-03 1e-03
five planets 6 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03
five planets 7 1e-03 1e-03 3e-03 1e-03 1e-03 3e-03
five planets 8 5e-04 1e-03 3e-03 1e-03 1e-03 3e-03
five planets 9 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03
five planets 10 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03

[Figure 7: scatter plot of the cost of the best iterative controller (x-axis, 0.0 to 0.7) against the cost of the managed controller (y-axis, 0.0 to 0.8), shown separately for the true simulation expert and the interaction network expert.]

Figure 7: Cost of the best iterative controller compared to the managed controller. Each point represents the total cost of the best iterative agent under a particular value of $\tau$ (x-axis) versus the total cost achieved by the metacontroller trained with the same value of $\tau$ (y-axis). The best iterative agent was chosen by computing the cost for all the different numbers of ponder steps, and then choosing whichever number of ponder steps yielded the lowest cost (i.e., finding the minimum of the curves in Figure 3, top row). In almost all cases, the managed controller achieves a lower loss than the iterative controller: for the metacontroller with the IN expert, the cost is 11% lower than the iterative controller on average, and for the metacontroller with the true simulation expert, it is 15% lower on average.

Nicholas Hay, Stuart J. Russell, David Tolpin, and Solomon Eyal Shimony. Selecting computations: Theory and applications. Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, 2012.

Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. Advances in Neural Information Processing Systems, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning (ICML), 2016.

True sim.
MLP IN T Qc Qm Qc Qm QEIN QEMLP Qc Qm Q EIN 0.00000 5e-04 5e-04 5e-04 1e-03 3e-03 1e-03 5e-04 1e-04 1e-03 0.00004 1e-03 1e-04 1e-03 5e-05 3e-03 5e-04 1e-03 1e-03 1e-03 0.00006 5e-04 5e-05 1e-03 5e-04 3e-03 1e-03 5e-04 5e-05 1e-03 0.00011 1e-03 1e-04 1e-03 1e-04 3e-03 1e-03 5e-04 5e-04 1e-03 0.00017 5e-04 1e-04 1e-03 1e-03 3e-03 1e-03 1e-03 5e-05 1e-03 0.00028 1e-03 1e-03 1e-03 1e-03 3e-03 1e-03 5e-04 5e-05 1e-03 0.00045 1e-03 1e-03 5e-04 1e-04 3e-03 1e-03 1e-03 5e-05 1e-03 0.00073 1e-03 1e-04 1e-03 1e-04 3e-03 1e-03 1e-03 5e-05 1e-03 0.00119 1e-03 5e-05 1e-03 1e-04 5e-04 1e-03 5e-04 5e-04 1e-03 0.00193 1e-03 5e-05 1e-03 5e-05 3e-03 5e-04 1e-03 5e-05 1e-03 0.00314 1e-03 1e-04 1e-03 1e-04 3e-03 5e-04 1e-03 1e-04 1e-03 0.00510 1e-03 5e-05 1e-03 5e-05 3e-03 1e-03 1e-03 5e-05 1e-03 0.00828 1e-03 5e-04 1e-03 5e-04 3e-03 5e-04 1e-03 1e-03 1e-03 0.01344 1e-03 5e-05 1e-03 5e-05 3e-03 5e-04 5e-04 5e-05 1e-03 0.02182 1e-03 1e-04 1e-03 1e-04 3e-03 5e-04 1e-03 1e-04 1e-03 0.03543 1e-03 1e-04 1e-03 1e-04 3e-03 1e-03 1e-03 1e-04 1e-03 0.05754 1e-03 5e-04 1e-03 5e-04 3e-03 5e-04 1e-03 1e-04 1e-03 0.09343 1e-03 5e-05 1e-03 5e-05 3e-03 1e-03 1e-03 1e-04 1e-03 0.15171 1e-03 1e-04 1e-03 5e-04 3e-03 5e-04 1e-03 1e-04 1e-03 0.24634 1e-03 5e-05 1e-03 1e-03 3e-03 1e-03 1e-03 1e-03 1e-03 0.40000 1e-03 1e-03 1e-03 1e-03 3e-03 5e-04 1e-03 1e-03 1e-03\nTable 2: Hyperparameter values for the metacontroller with a single expert. t refers to the ponde cost, Qc refers to the learning rate for the controller and memory, Qm refers to the learning rate fo the manager, Q E1N refers to the learning rate for the IN expert, and Emrp refers to the learning rate for the MLP expert.\nTable 3: Hyperparameter values for the metacontroller with two experts. T1n refers to the ponder cost for the interaction network expert, TmLp refers to the ponder cost for the MLP expert, Qc refers to the learning rate for the controller and memory, Qm refers to the learning rate for the manager QE1N refers to the learning rate for the IN expert, and Emrp refers to the learning rate for the MLP expert.\nIN + MLP TIN TMLP Qc Qm QEIN QEMLP 0.00000 0.00000 1e-03 5e-05 1e-03 1e-03 0.00000 0.00121 1e-03 5e-04 1e-03 1e-03 0.00000 0.00663 1e-03 1e-03 1e-03 1e-03 0.00000 0.03641 1e-03 5e-05 1e-03 1e-03 0.00000 0.20000 1e-03 5e-05 1e-03 1e-03 0.00000 0.30000 5e-04 1e-04 1e-03 1e-03 0.00000 0.40000 5e-04 5e-05 1e-03 1e-03 0.00121 0.00000 1e-03 1e-04 1e-03 1e-03 0.00121 0.00121 1e-03 5e-05 1e-03 1e-03 0.00121 0.00663 1e-03 1e-03 1e-03 1e-03 0.00121 0.03641 1e-03 1e-04 1e-03 1e-03 0.00121 0.20000 1e-03 5e-04 1e-03 1e-03 0.00121 0.30000 5e-04 5e-05 1e-03 1e-03 0.00121 0.40000 1e-03 1e-04 1e-03 1e-03 0.00663 0.00000 1e-03 1e-03 1e-03 1e-03 0.00663 0.00121 5e-04 5e-05 1e-03 1e-03 0.00663 0.00663 5e-04 1e-04 1e-03 1e-03 0.00663 0.03641 1e-03 1e-04 1e-03 1e-03 0.00663 0.20000 5e-04 5e-04 1e-03 1e-03 0.00663 0.30000 5e-04 1e-03 1e-03 1e-03 0.00663 0.40000 5e-04 1e-04 1e-03 1e-03 0.03641 0.00000 1e-03 5e-04 1e-03 1e-03 0.03641 0.00121 1e-03 5e-04 1e-03 1e-03 0.03641 0.00663 1e-03 1e-03 1e-03 1e-03 0.03641 0.03641 1e-03 5e-04 1e-03 1e-03 0.03641 0.20000 1e-03 1e-04 1e-03 1e-03 0.03641 0.30000 1e-03 5e-05 1e-03 1e-03 0.03641 0.40000 1e-03 1e-04 1e-03 1e-03 0.20000 0.00000 1e-03 5e-04 1e-03 1e-03 0.20000 0.00121 1e-03 5e-04 1e-03 1e-03 0.20000 0.00663 1e-03 5e-04 1e-03 1e-03 0.20000 0.03641 1e-03 1e-04 1e-03 1e-03 0.20000 0.20000 5e-04 1e-03 1e-03 1e-03 0.20000 0.30000 1e-03 5e-05 1e-03 1e-03 0.20000 0.40000 1e-03 5e-04 1e-03 1e-03 0.30000 0.00000 5e-04 1e-04 1e-03 1e-03 
0.30000 0.00121 5e-04 1e-03 1e-03 1e-03 0.30000 0.00663 1e-03 1e-03 1e-03 1e-03 0.30000 0.03641 1e-03 5e-04 1e-03 1e-03 0.30000 0.20000 1e-03 1e-03 1e-03 1e-03 0.30000 0.30000 1e-03 1e-04 1e-03 1e-03 0.30000 0.40000 1e-03 5e-05 1e-03 1e-03 0.40000 0.00000 1e-03 1e-03 1e-03 1e-03 0.40000 0.00121 5e-04 1e-03 1e-03 1e-03 0.40000 0.00663 1e-03 5e-04 1e-03 1e-03 0.40000 0.03641 5e-04 1e-04 1e-03 1e-03 0.40000 0.20000 1e-03 1e-03 1e-03 1e-03 0.40000 0.30000 5e-04 1e-03 1e-03 1e-03 0.40000 0.40000 5e-04 5e-04 1e-03 1e-03"}] |
SJU4ayYgl
[{"section_index": "0", "section_name": "SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS", "section_text": "Thomas N. Kipf

University of Amsterdam T.N.Kipf@uva.nl

Max Welling"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "$$\mathcal{L} = \mathcal{L}_0 + \lambda \mathcal{L}_{reg}, \quad \text{with} \quad \mathcal{L}_{reg} = \sum_{i,j} A_{ij} \left\| f(X_i) - f(X_j) \right\|^2 = f(X)^\top \Delta f(X).$$

Here, $\mathcal{L}_0$ denotes the supervised loss w.r.t. the labeled part of the graph, $f(\cdot)$ can be a neural network-like differentiable function, $\lambda$ is a weighing factor and $X$ is a matrix of node feature vectors $X_i$. $\Delta = D - A$ denotes the unnormalized graph Laplacian of an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ with $N$ nodes $v_i \in \mathcal{V}$, edges $(v_i, v_j) \in \mathcal{E}$, an adjacency matrix $A \in \mathbb{R}^{N \times N}$ (binary or weighted) and a degree matrix $D_{ii} = \sum_j A_{ij}$. The formulation of Eq. 1 relies on the assumption that connected nodes in the graph are likely to share the same label. This assumption, however, might restrict modeling capacity, as graph edges need not necessarily encode node similarity, but could contain additional information.

In this work, we encode the graph structure directly using a neural network model $f(X, A)$ and train on a supervised target $\mathcal{L}_0$ for all nodes with labels, thereby avoiding explicit graph-based regularization in the loss function. Conditioning $f(\cdot)$ on the adjacency matrix of the graph will allow the model to distribute gradient information from the supervised loss $\mathcal{L}_0$ and will enable it to learn representations of nodes both with and without labels.

Our contributions are two-fold. Firstly, we introduce a simple and well-behaved layer-wise propagation rule for neural network models which operate directly on graphs and show how it can be motivated from a first-order approximation of spectral graph convolutions (Hammond et al., 2011). Secondly, we demonstrate how this form of a graph-based neural network model can be used for fast and scalable semi-supervised classification of nodes in a graph. Experiments on a number of datasets demonstrate that our model compares favorably both in classification accuracy and efficiency (measured in wall-clock time) against state-of-the-art methods for semi-supervised learning."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.

We consider the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes. This problem can be framed as graph-based semi-supervised learning, where label information is smoothed over the graph via some form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al., 2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function:

$$\mathcal{L} = \mathcal{L}_0 + \lambda \mathcal{L}_{reg}, \quad \text{with} \quad \mathcal{L}_{reg} = \sum_{i,j} A_{ij} \left\| f(X_i) - f(X_j) \right\|^2 = f(X)^\top \Delta f(X).$$
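As a concrete illustration of Eq. (1), the short NumPy sketch below evaluates the graph Laplacian regularization term for given predictions; the function name is an assumption for the sketch.

import numpy as np

def laplacian_reg(A, fX):
    """Graph Laplacian quadratic form f(X)^T Delta f(X) with Delta = D - A.
    (Up to the usual factor of 2, this equals sum_ij A_ij ||f(X_i) - f(X_j)||^2.)"""
    delta = np.diag(A.sum(axis=1)) - A   # unnormalized graph Laplacian
    return np.trace(fX.T @ delta @ fX)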
"}, {"section_index": "3", "section_name": "FAST APPROXIMATE CONVOLUTIONS ON GRAPHS", "section_text": "In this section, we provide theoretical motivation for a specific graph-based neural network model $f(X, A)$ that we will use in the rest of this paper. We consider a multi-layer Graph Convolutional Network (GCN) with the following layer-wise propagation rule:

$$H^{(l+1)} = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right).$$

Here, $\tilde{A} = A + I_N$ is the adjacency matrix of the undirected graph $\mathcal{G}$ with added self-connections. $I_N$ is the identity matrix, $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$ and $W^{(l)}$ is a layer-specific trainable weight matrix. $\sigma(\cdot)$ denotes an activation function, such as $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$. $H^{(l)} \in \mathbb{R}^{N \times D}$ is the matrix of activations in the $l$th layer; $H^{(0)} = X$. In the following, we show that the form of this propagation rule can be motivated via a first-order approximation of localized spectral filters on graphs (Hammond et al., 2011; Defferrard et al., 2016).

1 We provide an alternative interpretation of this propagation rule based on the Weisfeiler-Lehman algorithm (Weisfeiler & Lehmann, 1968) in Appendix A.

We consider spectral convolutions on graphs defined as the multiplication of a signal $x \in \mathbb{R}^N$ (a scalar for every node) with a filter $g_\theta = \mathrm{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^N$ in the Fourier domain, i.e.:

$$g_\theta \star x = U g_\theta U^\top x,$$

where $U$ is the matrix of eigenvectors of the normalized graph Laplacian $L = I_N - D^{-\frac{1}{2}} A D^{-\frac{1}{2}} = U \Lambda U^\top$, with a diagonal matrix of its eigenvalues $\Lambda$ and $U^\top x$ being the graph Fourier transform of $x$. We can understand $g_\theta$ as a function of the eigenvalues of $L$, i.e. $g_\theta(\Lambda)$. Evaluating Eq. 3 is computationally expensive, as multiplication with the eigenvector matrix $U$ is $\mathcal{O}(N^2)$. Furthermore, computing the eigendecomposition of $L$ in the first place might be prohibitively expensive for large graphs. To circumvent this problem, it was suggested in Hammond et al. (2011) that $g_\theta(\Lambda)$ can be well-approximated by a truncated expansion in terms of Chebyshev polynomials $T_k(x)$ up to $K$th order:

$$g_{\theta'}(\Lambda) \approx \sum_{k=0}^{K} \theta'_k T_k(\tilde{\Lambda}),$$

with a rescaled $\tilde{\Lambda} = \frac{2}{\lambda_{max}} \Lambda - I_N$. $\lambda_{max}$ denotes the largest eigenvalue of $L$. $\theta' \in \mathbb{R}^K$ is now a vector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as $T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)$, with $T_0(x) = 1$ and $T_1(x) = x$. The reader is referred to Hammond et al. (2011) for an in-depth discussion of this approximation.

Going back to our definition of a convolution of a signal $x$ with a filter $g_{\theta'}$, we now have:

$$g_{\theta'} \star x \approx \sum_{k=0}^{K} \theta'_k T_k(\tilde{L})\, x,$$

with $\tilde{L} = \frac{2}{\lambda_{max}} L - I_N$, as can easily be verified by noticing that $(U \Lambda U^\top)^k = U \Lambda^k U^\top$. Note that this expression is now $K$-localized since it is a $K$th-order polynomial in the Laplacian, i.e. it depends only on nodes that are at maximum $K$ steps away from the central node ($K$th-order neighborhood). The complexity of evaluating Eq. 5 is $\mathcal{O}(|\mathcal{E}|)$, i.e. linear in the number of edges. Defferrard et al. (2016) use this $K$-localized convolution to define a convolutional neural network on graphs.
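The sketch below applies the truncated Chebyshev filter of Eq. (5) using the recursion $T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x)$; it is written with dense NumPy arrays for clarity, whereas a practical implementation would use sparse matrices.

import numpy as np

def chebyshev_filter(L, x, theta, lam_max=2.0):
    """Evaluate g_{theta'} * x ~= sum_k theta'_k T_k(L_tilde) x  (Eq. 5)."""
    N = L.shape[0]
    L_tilde = (2.0 / lam_max) * L - np.eye(N)       # rescaled Laplacian
    Tx_prev, Tx = x, L_tilde @ x                    # T_0 x and T_1 x
    out = theta[0] * Tx_prev
    if len(theta) > 1:
        out += theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_prev, Tx = Tx, 2.0 * (L_tilde @ Tx) - Tx_prev   # Chebyshev recursion
        out += theta[k] * Tx
    return out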
"}, {"section_index": "4", "section_name": "2.2 LAYER-WISE LINEAR MODEL", "section_text": "A neural network model based on graph convolutions can therefore be built by stacking multiple convolutional layers of the form of Eq. 5, each layer followed by a point-wise non-linearity. Now, imagine we limited the layer-wise convolution operation to $K = 1$ (see Eq. 5), i.e. a function that is linear w.r.t. $L$ and therefore a linear function on the graph Laplacian spectrum.

In this way, we can still recover a rich class of convolutional filter functions by stacking multiple such layers, but we are not limited to the explicit parameterization given by, e.g., the Chebyshev polynomials. We intuitively expect that such a model can alleviate the problem of overfitting on local neighborhood structures for graphs with very wide node degree distributions, such as social networks, citation networks, knowledge graphs and many other real-world graph datasets. Additionally, for a fixed computational budget, this layer-wise linear formulation allows us to build deeper models, a practice that is known to improve modeling capacity on a number of domains (He et al., 2016).

In this linear formulation of a GCN we further approximate $\lambda_{max} \approx 2$, as we can expect that neural network parameters will adapt to this change in scale during training. Under these approximations Eq. 5 simplifies to:

$$g_{\theta'} \star x \approx \theta'_0 x + \theta'_1 \left( L - I_N \right) x = \theta'_0 x - \theta'_1 D^{-\frac{1}{2}} A D^{-\frac{1}{2}} x,$$

with two free parameters $\theta'_0$ and $\theta'_1$. The filter parameters can be shared over the whole graph. Successive application of filters of this form then effectively convolve the $k$th-order neighborhood of a node, where $k$ is the number of successive filtering operations or convolutional layers in the neural network model.

In practice, it can be beneficial to constrain the number of parameters further to address overfitting and to minimize the number of operations (such as matrix multiplications) per layer. This leaves us with the following expression:

$$g_\theta \star x \approx \theta \left( I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \right) x,$$

with a single parameter $\theta = \theta'_0 = -\theta'_1$. Note that $I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ now has eigenvalues in the range $[0, 2]$. Repeated application of this operator can therefore lead to numerical instabilities and exploding/vanishing gradients when used in a deep neural network model. To alleviate this problem, we introduce the following renormalization trick: $I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \rightarrow \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$, with $\tilde{A} = A + I_N$ and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$.

We can generalize this definition to a signal $X \in \mathbb{R}^{N \times C}$ with $C$ input channels (i.e. a $C$-dimensional feature vector for every node) and $F$ filters or feature maps as follows:

$$Z = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X \Theta,$$

where $\Theta \in \mathbb{R}^{C \times F}$ is now a matrix of filter parameters and $Z \in \mathbb{R}^{N \times F}$ is the convolved signal matrix. This filtering operation has complexity $\mathcal{O}(|\mathcal{E}| F C)$, as $\tilde{A} X$ can be efficiently implemented as a product of a sparse matrix with a dense matrix.
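A minimal sketch of the renormalized propagation of Eq. (8), including the preprocessing of $\tilde{A}$ and $\tilde{D}$; dense NumPy is used for readability, while the paper's implementation relies on sparse-dense products.

import numpy as np

def renormalize(A):
    """Renormalization trick: D_tilde^{-1/2} A_tilde D_tilde^{-1/2},
    with A_tilde = A + I_N and D_tilde_ii = sum_j A_tilde_ij."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_hat, X, Theta):
    """Z = A_hat X Theta (Eq. 8); O(|E| F C) when A_hat is stored sparsely."""
    return A_hat @ X @ Theta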
We first calculate A = D-2 AD- in. a pre-processing step. Our forward model then takes the simple form:.\nFigure 1: Left: Schematic depiction of multi-layer Graph Convolutiona1 Network (GCN) for semi supervised learning with C input channels and F feature maps in the output layer. The graph struc- ture (edges shown as black lines) is shared over layers, labels are denoted by Y,. Right: t-SNE (Maaten & Hinton, 20o8) visualization of hidden layer activations of a two-layer GCN trained on the Cora dataset (Sen et al., 2008) using 5% of labels. Colors denote document class.\nHere, W(0) E RC H is an input-to-hidden weight matrix for a hidden layer with H feature maps. w(1) E RHF is a hidden-to-output weight matrix. The softmax activation function, defined as softmax(x;) = exp(x;) with Z = , exp(x), is applied row-wise. For semi-supervised multi- class classification, we then evaluate the cross-entropy error over all labeled examples:\nF L=-YiflnZif, lEYL f=1\nwhere V-. is the set of node indices that have labels\nThe neural network weights w(o) and w(1) are trained using gradient descent. In this work, we. perform batch gradient descent using the full dataset for every training iteration, which is a viable option as long as datasets fit in memory. Using a sparse representation for A, memory requirement is O([g|), i.e. linear in the number of edges. Stochasticity in the training process is introduced via. dropout (Srivastava et al., 2014). We leave memory-efficient extensions with mini-batch stochastic gradient descent for future work.."}, {"section_index": "7", "section_name": "3.2 IMPLEMENTATION", "section_text": "In practice, we make use of TensorFlow (Abadi et al., 2015) for an efficient GPU-based imple mentation? of Eq. 9 using sparse-dense matrix multiplications. The computational complexity of evaluating Eq. 9 is then O([E|CH F), i.e. linear in the number of graph edges"}, {"section_index": "8", "section_name": "4 RELATED WORK", "section_text": "Our model draws inspiration both from the field of graph-based semi-supervised learning and from. recent work on neural networks that operate on graphs. In what follows, we provide a brief overview on related work in both fields.."}, {"section_index": "9", "section_name": "4.1 GRAPH-BASED SEMI-SUPERVISED LEARNING", "section_text": "A large number of approaches for semi-supervised learning using graph representations have been. proposed in recent years, most of which fall into two broad categories: methods that use some form of explicit graph Laplacian regularization and graph embedding-based approaches. Prominent. examples for graph Laplacian regularization include label propagation (Zhu et al., 2003), manifold. regularization (Belkin et al., 2006) and deep semi-supervised embedding (Weston et al., 2012)..\n2Code to reproduce our experiments is available at https : / /github. com/tkipf/ gcn\nhidden layers input layer. output layer. (a) Graph Convolutional Network (b) Hidden layer activations\nRecently, attention has shifted to models that learn graph embeddings with methods inspired by the skip-gram model (Mikolov et al., 2013). DeepWalk (Perozzi et al., 2014) learns embeddings via the prediction of the local neighborhood of nodes, sampled from random walks on the graph LINE (Tang et al., 2015) and node2vec (Grover & Leskovec, 2016) extend DeepWalk with more sophisticated random walk or breadth-first search schemes. 
For all these methods, however, a multi step pipeline including random walk generation and semi-supervised training is required where each step has to be optimized separately. Planetoid (Yang et al., 2016) alleviates this by injecting label information in the process of learning embeddings."}, {"section_index": "10", "section_name": "4.2 NEURAL NETWORKS ON GRAPHS", "section_text": "Neural networks that operate on graphs have previously been introduced in Gori et al. (2005). Scarselli et al. (2009) as a form of recurrent neural network. Their framework requires the repeate. application of contraction maps as propagation functions until node representations reach a stabl. fixed point. This restriction was later alleviated in Li et al. (2016) by introducing modern practice for recurrent neural network training to the original graph neural network framework. Duvenauc. et al. (2015) introduced a convolution-like propagation rule on graphs and methods for graph-leve. classification. Their approach requires to learn node degree-specific weight matrices which does no. scale to large graphs with wide node degree distributions. Our model instead uses a single weigh. matrix per layer and deals with varying node degrees through an appropriate normalization of th. adjacency matrix (see Section 3.1).\nA related approach to node classification with a graph-based neural network was recently introduced in Atwood & Towsley (2016). They report O(N2) complexity, limiting the range of possible appli cations. In a different yet related model, Niepert et al. (2016) convert graphs locally into sequences that are fed into a conventional 1D convolutional neural network, which requires the definition of a node ordering in a pre-processing step.\nOur method is based on spectral graph convolutional neural networks, introduced in Bruna et al.. (2014) and later extended by Defferrard et al. (2016) with fast localized convolutions. In contrast. to these works. we consider here the task of transductive node classification within networks of significantly larger scale. We show that in this setting, a number of simplifications (see Section 2.2). can be introduced to the original frameworks of Bruna et al. (2014) and Defferrard et al. (2016) that. improve scalability and classification performance in large-scale networks.."}, {"section_index": "11", "section_name": "5 EXPERIMENTS", "section_text": "We test our model in a number of experiments: semi-supervised document classification in cita- tion networks, semi-supervised entity classification in a bipartite graph extracted from a knowledge graph, an evaluation of various graph propagation models and a run-time analysis on random graphs"}, {"section_index": "12", "section_name": "5.1 DATASETS", "section_text": "We closely follow the experimental setup in Yang et al. (2016). Dataset statistics are summarizec. in Table 1. In the citation network datasets-Citeseer. Cora and Pubmed (Sen et al., 2oo8)-node. are documents and edges are citation links. Label rate denotes the number of labeled nodes that are used for training divided by the total number of nodes in each dataset. NELL (Carlson et al., 2010. Yang et al., 2016) is a bipartite graph dataset extracted from a knowledge graph with 55,864 relation. nodes and 9,891 entity nodes\nTable 1: Dataset statistics, as reported in Yang et al. 
Dataset Type Nodes Edges Classes Features Label rate
Citeseer Citation network 3,327 4,732 6 3,703 0.036
Cora Citation network 2,708 5,429 7 1,433 0.052
Pubmed Citation network 19,717 44,338 3 500 0.003
NELL Knowledge graph 65,755 266,144 210 5,414 0.001

Citation networks We consider three citation network datasets: Citeseer, Cora and Pubmed (Sen et al., 2008). The datasets contain sparse bag-of-words feature vectors for each document and a list of citation links between documents. We treat the citation links as (undirected) edges and construct a binary, symmetric adjacency matrix $A$. Each document has a class label. For training, we only use 20 labels per class, but all feature vectors.

NELL NELL is a dataset extracted from the knowledge graph introduced in Carlson et al. (2010). A knowledge graph is a set of entities connected with directed, labeled edges (relations). We follow the pre-processing scheme as described in Yang et al. (2016). We assign separate relation nodes $r_1$ and $r_2$ for each entity pair $(e_1, r, e_2)$ as $(e_1, r_1)$ and $(e_2, r_2)$. Entity nodes are described by sparse feature vectors. We extend the number of features in NELL by assigning a unique one-hot representation for every relation node, effectively resulting in a 61,278-dim sparse feature vector per node. The semi-supervised task here considers the extreme case of only a single labeled example per class in the training set. We construct a binary, symmetric adjacency matrix from this graph by setting entries $A_{ij} = 1$, if one or more edges are present between nodes $i$ and $j$.

Random graphs We simulate random graph datasets of various sizes for experiments where we measure training time per epoch. For a dataset with $N$ nodes we create a random graph assigning $2N$ edges uniformly at random. We take the identity matrix $I_N$ as input feature matrix $X$, thereby implicitly taking a featureless approach where the model is only informed about the identity of each node, specified by a unique one-hot vector. We add dummy labels $Y_i = 1$ for every node."}, {"section_index": "13", "section_name": "5.2 EXPERIMENTAL SET-UP", "section_text": "Unless otherwise noted, we train a two-layer GCN as described in Section 3.1 and evaluate prediction accuracy on a test set of 1,000 labeled examples. We provide additional experiments using deeper models with up to 10 layers in Appendix B. We choose the same dataset splits as in Yang et al. (2016) with an additional validation set of 500 labeled examples for hyperparameter optimization (dropout rate for all layers, L2 regularization factor for the first GCN layer and number of hidden units). We do not use the validation set labels for training.

For the citation network datasets, we optimize hyperparameters on Cora only and use the same set of parameters for Citeseer and Pubmed. We train all models for a maximum of 200 epochs (training iterations) using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 and early stopping with a window size of 10, i.e. we stop training if the validation loss does not decrease for 10 consecutive epochs. We initialize weights using the initialization described in Glorot & Bengio (2010) and accordingly (row-)normalize input feature vectors. On the random graph datasets, we use a hidden layer size of 32 units and omit regularization (i.e. neither dropout nor L2 regularization)."}, {"section_index": "14", "section_name": "5.3 BASELINES", "section_text": "We compare against the same baseline methods as in Yang et al. (2016), i.e.
label propagation (LP) (Zhu et al., 2003), semi-supervised embedding (SemiEmb) (Weston et al., 2012), manifold regularization (ManiReg) (Belkin et al., 2006) and skip-gram based graph embeddings (DeepWalk) (Perozzi et al., 2014). We omit TSVM (Joachims, 1999), as it does not scale to the large number of classes in one of our datasets\nWe further compare against the iterative classification algorithm (ICA) proposed in Lu & Getoor (2003) in conjunction with two logistic regression classifiers, one for local node features alone and one for relational classification using local features and an aggregation operator as described in Sen et al. (2oo8). We first train the local classifier using all labeled training set nodes and use it to bootstrap class labels of unlabeled nodes for relational classifier training. We run iterative classification (relational classifier) with a random node ordering for 10 iterations on all unlabeled nodes (bootstrapped using the local classifier). L2 regularization parameter and aggregation operator (count vs. prop, see Sen et al. (2008)) are chosen based on validation set performance for each dataset separately.\nLastly, we compare against Planetoid (Yang et al., 2016), where we always choose their best performing model variant (transductive vs. inductive) as a baseline.\nResults are summarized in Table 2. Reported numbers denote classification accuracy in percent. Fo. ICA, we report the mean accuracy of 100 runs with random node orderings. Results for all othe baseline methods are taken from the Planetoid paper (Yang et al., 2016). Planetoid* denotes the best model for the respective dataset out of the variants presented in their paper..\nTable 2: Summary of results in terms of classification accuracy (in percent)\nMethod Citeseer Cora Pubmed NELL ManiReg [3] 60.1 59.5 70.7 21.8 SemiEmb [28] 59.6 59.0 71.1 26.7 LP [32] 45.3 68.0 63.0 26.5 DeepWalk [22] 43.2 67.2 65.3 58.1 ICA [18] 69.1 75.1 73.9 23.1 Planetoid* [29] 64.7 (26s) 75.7 (13s) 77.2 (25s) 61.9 (185s) GCN (this paper) 70.3 (7s) 81.5 (4s) 79.0 (38s) 66.0 (48s) GCN (rand. splits) 67.9 0.5 80.1 0.5 78.9 0.7 58.41.7\nWe further report wall-clock training time in seconds until convergence (in brackets) for our method (incl. evaluation of validation error) and for Planetoid. For the latter, we used an implementation pro vided by the authors' and trained on the same hardware (with GPU) as our GCN model. We trainec and tested our model on the same dataset splits as in Yang et al. (2016) and report mean accuracy of 100 runs with random weight initializations. We used the following sets of hyperparameters for Citeseer, Cora and Pubmed: 0.5 (dropout rate), 5 . 10-4 (L2 regularization) and 16 (number of hid den units); and for NELL: 0.1 (dropout rate), 1 : 10-5 (L2 regularization) and 64 (number of hidder units).\nIn addition, we report performance of our model on 10 randomly drawn dataset splits of the same size as in Yang et al. (2016), denoted by GCN (rand. splits). Here, we report mean and standard error of prediction accuracy on the test set split in percent."}, {"section_index": "15", "section_name": "6.2 EVALUATION OF PROPAGATION MODEL", "section_text": "We compare different variants of our proposed per-layer propagation model on the citation network datasets. We follow the experimental set-up described in the previous section. Results are summa rized in Table 3. The propagation model of our original GCN model is denoted by renormalization trick (in bold). 
In all other cases, the propagation model of both neural network layers is replaced with the model specified under propagation model. Reported numbers denote mean classification accuracy for 100 repeated runs with random weight matrix initializations. In case of multiple variables $\Theta_i$ per layer, we impose L2 regularization on all weight matrices of the first layer.

Table 3: Comparison of propagation models

3 https://github.com/kimiyoung/planetoid

Description Propagation model Citeseer Cora Pubmed
Chebyshev filter (Eq. 5), K = 3  $\sum_{k=0}^{K} T_k(\tilde{L}) X \Theta_k$  69.8 79.5 74.4
Chebyshev filter (Eq. 5), K = 2  $\sum_{k=0}^{K} T_k(\tilde{L}) X \Theta_k$  69.6 81.2 73.8
1st-order model (Eq. 6)  $X\Theta_0 + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X \Theta_1$  68.3 80.0 77.5
Single parameter (Eq. 7)  $(I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}) X \Theta$  69.3 79.2 77.4
Renormalization trick (Eq. 8)  $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X \Theta$  70.3 81.5 79.0
1st-order term only  $D^{-\frac{1}{2}} A D^{-\frac{1}{2}} X \Theta$  68.7 80.5 77.8
Multi-layer perceptron  $X\Theta$  46.5 55.1 71.4

Here, we report results for the mean training time per epoch (forward pass, cross-entropy calculation, backward pass) for 100 epochs on simulated random graphs, measured in seconds wall-clock time. See Section 5.1 for a detailed description of the random graph dataset used in these experiments. We compare results on a GPU and on a CPU-only implementation in TensorFlow (Abadi et al., 2015). Figure 2 summarizes the results."}, {"section_index": "16", "section_name": "7.1 SEMI-SUPERVISED MODEL", "section_text": "In the experiments demonstrated here, our method for semi-supervised node classification outperforms recent related methods by a significant margin. Methods based on graph-Laplacian regularization (Zhu et al., 2003; Belkin et al., 2006; Weston et al., 2012) are most likely limited due to their assumption that edges encode mere similarity of nodes. Skip-gram based methods on the other hand are limited by the fact that they are based on a multi-step pipeline which is difficult to optimize. Our proposed model can overcome both limitations, while still comparing favorably in terms of efficiency (measured in wall-clock time) to related methods. Propagation of feature information from neighboring nodes in every layer improves classification performance in comparison to methods like ICA (Lu & Getoor, 2003), where only label information is aggregated.

[Figure 2: log-log plot of wall-clock time per epoch in seconds (10^-3 to 10^1) against the number of graph edges (1k to 10m), for the GPU and CPU-only implementations.]

Figure 2: Wall-clock time per epoch for random graphs. (*) indicates out-of-memory error.

We have further demonstrated that the proposed renormalized propagation model (Eq. 8) offers both improved efficiency (fewer parameters and operations, such as multiplication or addition) and better predictive performance on a number of datasets compared to a naive 1st-order model (Eq. 6) or higher-order graph convolutional models using Chebyshev polynomials (Eq. 5)."}, {"section_index": "17", "section_name": "7.2 LIMITATIONS AND FUTURE WORK", "section_text": "Memory requirement In the current setup with full-batch gradient descent, memory requirement grows linearly in the size of the dataset. We have shown that for large graphs that do not fit in GPU memory, training on CPU can still be a viable option. Mini-batch stochastic gradient descent can alleviate this issue. The procedure of generating mini-batches, however, should take into account the number of layers in the GCN model, as the $K$th-order neighborhood for a GCN with $K$ layers has to be stored in memory for an exact procedure. For very large and densely connected graph datasets, further approximations might be necessary.

Directed edges and edge features Our framework currently does not naturally support edge features and is limited to undirected graphs (weighted or unweighted). Results on NELL however show that it is possible to handle both directed edges and edge features by representing the original directed graph as an undirected bipartite graph with additional nodes that represent edges in the original graph (see Section 5.1 for details).

Limiting assumptions Through the approximations introduced in Section 2, we implicitly assume
locality (dependence on the $K$th-order neighborhood for a GCN with $K$ layers) and equal importance of self-connections vs. edges to neighboring nodes. For some datasets, however, it might be beneficial to introduce a trade-off parameter $\lambda$ in the definition of $\tilde{A}$:

$$\tilde{A} = A + \lambda I_N.$$

This parameter now plays a similar role as the trade-off parameter between supervised and unsupervised loss in the typical semi-supervised setting (see Eq. 1). Here, however, it can be learned via gradient descent.

We have introduced a novel approach for semi-supervised classification on graph-structured data. Our GCN model uses an efficient layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs. Experiments on a number of network datasets suggest that the proposed GCN model is capable of encoding both graph structure and node features in a way useful for semi-supervised classification. In this setting, our model outperforms several recently proposed methods by a significant margin, while being computationally efficient."}, {"section_index": "18", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Christos Louizos, Taco Cohen, Joan Bruna, Zhilin Yang, Dave Herman, Pramod Sinha and Abdul-Saboor Sheikh for helpful discussions. This research was funded by SAP."}, {"section_index": "19", "section_name": "REFERENCES", "section_text": "Martin Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.

James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2016.

David K. Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems (NIPS), pp. 2224-2232, 2015.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249-256, 2010.

David K. Hammond, Pierre Vandergheynst, and Remi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.

Qing Lu and Lise Getoor. Link-based classification. In International Conference on Machine Learning (ICML), volume 3, pp. 496-503, 2003.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research (JMLR), 9(Nov):2579-2605, 2008.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013.

Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International Conference on Machine Learning (ICML), 2016.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.
The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009.

Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, 2008.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929-1958, 2014.

Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067-1077. ACM, 2015.

Boris Weisfeiler and A. A. Lehmann. A reduction of a graph to a canonical form and an algebra arising during this reduction. Nauchno-Technicheskaya Informatsia, 2(9):12-16, 1968.

Jason Weston, Frederic Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer, 2012.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Thorsten Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning (ICML), 1999.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR), 2016.

Wayne W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, pp. 452-473, 1977.

Dengyong Zhou, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Scholkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems (NIPS), volume 16, pp. 321-328, 2004."}, {"section_index": "20", "section_name": "RELATION TO WEISFEILER-LEHMAN ALGORITHM", "section_text": "A neural network model for graph-structured data should ideally be able to learn representations of nodes in a graph, taking both the graph structure and feature description of nodes into account. A well-studied framework for the unique assignment of node labels given a graph and (optionally) discrete initial node labels is provided by the 1-dim Weisfeiler-Lehman (WL-1) algorithm (Weisfeiler & Lehmann, 1968):

Algorithm 1: WL-1 algorithm (Weisfeiler & Lehmann, 1968)

Input: Initial node coloring $(h_1^{(0)}, \ldots, h_N^{(0)})$. Output: Final node coloring $(h_1^{(T)}, \ldots, h_N^{(T)})$.
$t \leftarrow 0$;
repeat
  for $v_i \in \mathcal{V}$ do $h_i^{(t+1)} \leftarrow \mathrm{hash}\left( \sum_{j \in \mathcal{N}_i} h_j^{(t)} \right)$;
  $t \leftarrow t + 1$;
until stable node coloring is reached;

Here, $h_i^{(t)}$ denotes the coloring (label assignment) of node $v_i$ (at iteration $t$) and $\mathcal{N}_i$ is its set of neighboring node indices (irrespective of whether the graph includes self-connections for every node or not). $\mathrm{hash}(\cdot)$ is a hash function. For an in-depth mathematical discussion of the WL-1 algorithm see, e.g., Douglas (2011).

We can replace the hash function in Algorithm 1 with a neural network layer-like differentiable function with trainable parameters as follows:

$$h_i^{(l+1)} = \sigma\left( \sum_{j \in \mathcal{N}_i} \frac{1}{c_{ij}} h_j^{(l)} W^{(l)} \right),$$

where $c_{ij}$ is an appropriately chosen normalization constant for the edge $(v_i, v_j)$. Further, we can take $h_i^{(l)}$ now to be a vector of activations of node $i$ in the $l$th neural network layer. $W^{(l)}$ is a layer-specific weight matrix and $\sigma(\cdot)$ denotes a differentiable, non-linear activation function.

This, loosely speaking, allows us to interpret our GCN model as a differentiable and parameterized generalization of the 1-dim Weisfeiler-Lehman algorithm on graphs.
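The sketch below is a runnable rendering of the WL-1 procedure in Algorithm 1. The canonical relabeling of signatures to small integers stands in for the hash function, and the stopping test uses the fact that WL only refines the partition, so an unchanged number of colors means the coloring is stable; both choices are implementation assumptions, not part of the original algorithm.

def wl1_coloring(neighbors, init_colors, max_iters=100):
    """1-dim Weisfeiler-Lehman: iteratively combine each node's color with the
    multiset of its neighbors' colors until the partition stops refining.
    neighbors: list of neighbor-index lists; init_colors: integer labels."""
    colors = list(init_colors)
    for _ in range(max_iters):
        signatures = [(colors[i], tuple(sorted(colors[j] for j in nbrs)))
                      for i, nbrs in enumerate(neighbors)]
        relabel = {s: c for c, s in enumerate(sorted(set(signatures)))}
        new_colors = [relabel[s] for s in signatures]
        if len(set(new_colors)) == len(set(colors)):  # no further refinement
            return colors
        colors = new_colors
    return colors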
"}, {"section_index": "21", "section_name": "A.1 NODE EMBEDDINGS WITH RANDOM WEIGHTS", "section_text": "From the analogy with the Weisfeiler-Lehman algorithm, we can understand that even an untrained GCN model with random weights can serve as a powerful feature extractor for nodes in a graph. As an example, consider the following 3-layer GCN model:

$$Z = \tanh\left( \hat{A} \tanh\left( \hat{A} \tanh\left( \hat{A} X W^{(0)} \right) W^{(1)} \right) W^{(2)} \right).$$

Note that we here implicitly assume that self-connections have already been added to every node in the graph (for a clutter-free notation).

We take a featureless approach by setting $X = I_N$, where $I_N$ is the $N$ by $N$ identity matrix. $N$ is the number of nodes in the graph. Note that nodes are randomly ordered (i.e. ordering contains no information). Furthermore, we choose a hidden layer dimensionality of 4 and a two-dimensional output (so that the output can immediately be visualized in a 2-dim plot).

6 We originally experimented with a hidden layer dimensionality of 2 (i.e. same as output layer), but observed that a dimensionality of 4 resulted in less frequent saturation of $\tanh(\cdot)$ units and therefore visually more pleasing results.

We apply this model on Zachary's karate club network (Zachary, 1977). This graph contains 34 nodes, connected by 154 (undirected and unweighted) edges. Every node is labeled by one of four classes, obtained via modularity-based clustering (Brandes et al., 2008). See Figure 3a for an illustration.

[Figure 3: (a) Karate club network; (b) Random weight embedding.]

Figure 3: Left: Zachary's karate club network (Zachary, 1977), colors denote communities obtained via modularity-based clustering (Brandes et al., 2008). Right: Embeddings obtained from an untrained 3-layer GCN model (Eq. 13) with random weights applied to the karate club network. Best viewed on a computer screen.

Figure 3b shows a representative example of node embeddings (outputs $Z$) obtained from an untrained GCN model applied to the karate club network. These results are comparable to embeddings obtained from DeepWalk (Perozzi et al., 2014), which uses a more expensive unsupervised training procedure.
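A sketch of this untrained three-layer model for an arbitrary graph follows, with $X = I_N$ (the featureless approach) and the layer sizes stated above (hidden size 4, output size 2); the Gaussian weight scale and the function name are assumptions for the sketch.

import numpy as np

def random_gcn_embeddings(A, hidden=4, out=2, seed=0):
    """Z = tanh(A_hat tanh(A_hat tanh(A_hat X W0) W1) W2) with X = I_N (Eq. 13)."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    A_tilde = A + np.eye(N)                     # add self-connections
    d = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d[:, None] * d[None, :]   # symmetric normalization
    H = np.eye(N)                               # featureless input X = I_N
    for dim in (hidden, hidden, out):
        W = rng.normal(0.0, 1.0, size=(H.shape[1], dim))  # random, untrained weights
        H = np.tanh(A_hat @ H @ W)
    return H                                    # two-dimensional node embeddings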
These results are comparable to embeddings obtained from DeepWalk (Perozzi et al., 2014), which uses a more expensive unsupervised training procedure.
"}, {"section_index": "22", "section_name": "A.2 SEMI-SUPERVISED NODE EMBEDDINGS", "section_text": "On this simple example of a GCN applied to the karate club network it is interesting to observe how embeddings react during training on a semi-supervised classification task. Such a visualization (see Figure 4) provides insights into how the GCN model can make use of the graph structure (and of features extracted from the graph structure at later layers) to learn embeddings that are useful for a classification task.

We consider the following semi-supervised learning setup: we add a softmax layer on top of our model (Eq. 13) and train using only a single labeled example per class (i.e. a total number of 4 labeled nodes). We train for 300 training iterations using Adam (Kingma & Ba, 2015) with a learning rate of 0.01 on a cross-entropy loss.

Figure 4 shows the evolution of node embeddings over a number of training iterations. The model succeeds in linearly separating the communities based on minimal supervision and the graph structure alone. A video of the full training process can be found on our website^7.

^6 We originally experimented with a hidden layer dimensionality of 2 (i.e. same as output layer), but observed that a dimensionality of 4 resulted in less frequent saturation of tanh(.) units and therefore visually more pleasing results.
^7 http://tkipf.github.io/graph-convolutional-networks/

[Figure 4 image: node embedding plots at (a) Iteration 25, (b) Iteration 50, (c) Iteration 75, (d) Iteration 100, (e) Iteration 200, (f) Iteration 300]

Figure 4: Evolution of karate club network node embeddings obtained from a GCN model after a number of semi-supervised training iterations. Colors denote class. Nodes of which labels were provided during training (one per class) are highlighted (grey outline). Grey links between nodes denote graph edges. Best viewed on a computer screen.
"}, {"section_index": "23", "section_name": "EXPERIMENTS ON MODEL DEPTH", "section_text": "In these experiments, we investigate the influence of model depth (number of layers) on classification performance. We report results on a 5-fold cross-validation experiment on the Cora, Citeseer and Pubmed datasets (Sen et al., 2008) using all labels. In addition to the standard GCN model (Eq. 2), we report results on a model variant where we use residual connections (He et al., 2016) between hidden layers to facilitate training of deeper models by enabling the model to carry over information from the previous layer's input:

H^{(l+1)} = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right) + H^{(l)}.

On each cross-validation split, we train for 400 epochs (without early stopping) using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.01. Other hyperparameters are chosen as follows: 0.5 (dropout rate, first and last layer), 5 * 10^-4 (L2 regularization, first layer), 16 (number of units for each hidden layer) and 0.01 (learning rate). Results are summarized in Figure 5.

[Figure 5 image: training and test accuracy vs. number of layers (1-10) on Citeseer, Cora and Pubmed, for GCN with and without residual connections]

Figure 5: Influence of model depth (number of layers) on classification performance. Markers denote mean classification accuracy (training vs. testing) for 5-fold cross-validation. Shaded areas denote standard error. We show results both for a standard GCN model (dashed lines) and a model with added residual connections (He et al., 2016) between hidden layers (solid lines).

For the datasets considered here, best results are obtained with a 2- or 3-layer model. We observe that for models deeper than 7 layers, training without the use of residual connections can become difficult, as the effective context size for each node increases by the size of its Kth-order neighborhood (for a model with K layers) with each additional layer. Furthermore, overfitting can become an issue as the number of parameters increases with model depth.
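To make the residual variant concrete, the following minimal NumPy sketch (our illustration, not the authors' implementation) stacks GCN layers and adds the H^(l) skip connection whenever layer widths match, as for the 16-unit hidden layers used in this experiment.

# Sketch of a deep GCN forward pass with optional residual connections.
import numpy as np

def gcn_forward(A_hat, X, weights, residual=False):
    # A_hat: normalized adjacency (with self-loops), X: node features,
    # weights: list of layer weight matrices W^(l)
    H = X
    for W in weights:
        H_new = np.maximum(A_hat @ H @ W, 0.0)    # ReLU(A_hat H W)
        if residual and H_new.shape == H.shape:   # skip connection when widths match
            H_new = H_new + H
        H = H_new
    return H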
"}]

HJ1JBJ5gl | [{"section_index": "0", "section_name": "REPRESENTING INFERENTIAL UNCERTAINTY IN DEEP NEURAL NETWORKS THROUGH SAMPLING", "section_text": "Patrick McClure & Nikolaus Kriegeskorte
"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "As deep neural networks (DNNs) are applied to increasingly challenging problems, they will need to be able to represent their own uncertainty. Modelling uncertainty is one of the key features of Bayesian methods. Bayesian DNNs that use dropout-based variational distributions and scale to complex tasks have recently been proposed. We evaluate Bayesian DNNs trained with Bernoulli or Gaussian multiplicative masking of either the units (dropout) or the weights (dropconnect). We compare these Bayesian DNNs' ability to represent their uncertainty about their outputs through sampling during inference. We tested the calibration of these Bayesian fully connected and convolutional DNNs on two visual inference tasks (MNIST and CIFAR-10). By adding different levels of Gaussian noise to the test images in z-score space, we assessed how these DNNs represented their uncertainty about regions of input space not covered by the training set. These Bayesian DNNs represented their own uncertainty more accurately than traditional DNNs with a softmax output. We find that sampling of weights, whether Gaussian or Bernoulli, led to more accurate representation of uncertainty compared to sampling of units. However, sampling units using either Gaussian or Bernoulli dropout led to increased convolutional neural network (CNN) classification accuracy. Based on these findings we use both Bernoulli dropout and Gaussian dropconnect concurrently, which approximates the use of a spike-and-slab variational distribution. We find that networks with spike-and-slab sampling combine the advantages of the other methods: they classify with high accuracy and robustly represent the uncertainty of their classifications for all tested architectures.
"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks (DNNs), particularly convolutional neural networks (CNNs), have recently been used to solve complex perceptual and decision tasks (Krizhevsky et al., 2012; Mnih et al., 2015; Silver et al., 2016). However, these networks fail to model the uncertainty of their predictions or actions. Although many networks deterministically map an input to a probabilistic prediction, they do not model the uncertainty of that mapping. In contrast, Bayesian neural networks (NNs) attempt to learn a distribution over their parameters, thereby offering uncertainty estimates for their outputs (MacKay, 1992; Neal, 2012). However, these methods do not scale well due to the difficulty in computing the posterior of a network's parameters.

One type of method for sampling from the posteriors of these networks is Hamiltonian Monte Carlo (HMC) (Neal, 2012). These techniques use the gradient information calculated using backpropagation to perform Markov chain Monte Carlo (MCMC) sampling by randomly walking through parameter space; stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011) was proposed as a stochastic-gradient variant of this approach.

Approximate methods, in particular variational inference, have been used to make Bayesian NNs more tractable (Hinton & Van Camp, 1993; Barber & Bishop, 1998; Graves, 2011; Blundell et al., 2015). Due in large part to the fact that these methods substantially increase the number of parameters in a network, they have not been applied to large DNNs, such as CNNs.
Gal & Ghahramani (2016) and Kingma et al. (2015) bypassed this issue by developing Bayesian CNNs using dropout (Srivastava et al., 2014). Dropout is a widely used regularization technique where units are dropped out of a network with a probability p during training and the outputs of all units are multiplied by p during inference. A similar technique is dropconnect (Wan et al., 2013), which drops network connections instead of units. Gal & Ghahramani (2015) detailed how dropping units was equivalent to sampling weights from a Bernoulli-based variational distribution and that, in order to make a DNN with dropout Bayesian, sampling should be used during both training and inference. Monte-Carlo (MC) sampling at inference allows a DNN to efficiently model a distribution over its outputs; the uncertainty of a DNN can then be calculated using this probability distribution. One limitation of the Bayesian dropout method is that it does not model the uncertainty of each network parameter. In addition to Bernoulli and Gaussian distributions, there has also been work using spike-and-slab distributions (Louizos, 2015), a combination of the two, which has been shown to increase the quality of linear regression (Ishwaran & Rao, 2005). Interestingly, dropout and dropconnect can be seen as approximations to spike-and-slab distributions for units and weights, respectively (Louizos, 2015; Gal, 2016; Li et al., 2016). Dropout- and dropconnect-based variational DNNs depend on the dropout probability, which is often used as a hyperparameter. However, work has been done on automatically learning the dropout probability during training for dropconnect (Louizos, 2015) using spike-and-slab distributions and for Gaussian dropout (Kingma et al., 2015).

In this paper, we investigate how using MC sampling to model uncertainty affects a network's probabilistic predictions. Specifically, we test if using MC sampling improves the calibration of the probabilistic predictions made by Bayesian DNNs with softmax output layers. We used variational distributions based on dropout and dropconnect with either Bernoulli or Gaussian sampling during both training and inference. Additionally, we propose a formulation of a spike-and-slab variational distribution based on Bernoulli dropout and Gaussian dropconnect. We find that the spike-and-slab networks robustly represent their uncertainty like Bayesian dropconnect networks and have the increased CNN classification accuracy of Bayesian dropout networks. Each of these variational distributions scales extremely well, making the results of this work applicable to a large range of state-of-the-art DNNs.

Artificial neural networks (NNs) can be trained using Bayesian learning by finding the maximum a posteriori (MAP) weights given the training data (D_train) and a prior over the weight matrix W (p(W)):

\max_W p(W | D_{train}) = \max_W p(D_{train} | W) p(W)

This is usually done by minimizing the mean squared error (MSE) or cross-entropy error for
either regression or classification, respectively, while using L2 regularization, which corresponds to a Gaussian prior over the weights. At inference, the probability of the test data (D_test) is then calculated using only the maximum likelihood estimate (MLE) of the weights (W*):

p(D_{test} | W^*)

However, ideally the full posterior distribution over the weights would be learned instead of just the MLE:

p(W | D_{train}) = \frac{p(D_{train} | W) p(W)}{p(D_{train})}

This can be intractable due to both the difficulty in calculating p(D_train) and in calculating the joint distribution of a large number of parameters. Instead, p(W | D_train) can be approximated using a variational distribution q(W). This distribution is constructed to allow for easy generation of samples. Using variational inference, q(W) is learned by minimizing:

-\int \log(p(D_{train} | W)) q(W) \, dW + KL(q(W) \,||\, p(W))

Monte-Carlo (MC) sampling can then be used to estimate the probability of test data using q(W):

p(D_{test}) \approx \frac{1}{n} \sum_{i=1}^{n} p(D_{test} | W_i), \quad \text{where } W_i \sim q(W)

[Figure 1 image: a baseline neural network alongside networks sampled with Bernoulli DropConnect, Gaussian DropConnect, Bernoulli Dropout, Gaussian Dropout, and Spike-and-Slab Dropout]

Figure 1: A visualization of the different variational distributions on a simple neural network.
"}, {"section_index": "3", "section_name": "2.2 VARIATIONAL DISTRIBUTIONS", "section_text": "The number and continuous nature of the parameters in DNNs makes sampling from the entire distribution of possible weight matrices computationally challenging. However, variational distributions can make sampling easier. In deep learning, the most common sampling method is dropout with Bernoulli variables. However, dropconnect, which independently samples a Bernoulli for each weight, and Gaussian weights have also been used. A visualization of the different methods is shown in Figure 1. All of these methods can be formulated as variational distributions where weights are sampled by element-wise multiplying the variational parameters V, the n x n connection matrix with an element for each connection between the n units in the network, by a mask M, which is sampled from some probability distribution. Mathematically, this can be written as:

W = V \circ M, \quad \text{where } M \sim p(M)

From this perspective, the difference between dropout and dropconnect, as well as Bernoulli and Gaussian methods, is simply the probability distribution used to generate the mask sample, M (Figure 2).

[Figure 2 image: weight matrices W = V ∘ M sampled under Bernoulli DropConnect, Gaussian DropConnect, Bernoulli Dropout, Gaussian Dropout, and Spike-and-Slab Dropout]

Figure 2: An illustration of sampling network weights using the different variational distributions.
"}, {"section_index": "4", "section_name": "2.2.1 BERNOULLI DROPCONNECT & DROPOUT", "section_text": "Bernoulli distributions are simple distributions which return 1 with probability p and 0 with probability (1 - p). In Bernoulli dropconnect, each element of the mask is sampled independently, so m_{i,j} ~ Bernoulli(p). This sets w_{i,j} to v_{i,j} with probability p and to 0 with probability (1 - p).

In dropout, however, the weights are not sampled independently. Instead, one Bernoulli variable is sampled for each row of the weight matrix, so m_{i,*} ~ Bernoulli(p).
"}, {"section_index": "5", "section_name": "2.2.2 GAUSSIAN DROPCONNECT & DROPOUT", "section_text": "In Gaussian dropconnect and dropout, the elements of the mask are sampled from normal distributions. This corresponds to sampling w_{i,j} from a Gaussian distribution centered at variational parameter v_{i,j}. Srivastava et al. (2014) proposed using a Gaussian distribution with a mean of 1 and a variance of sigma_dc^2 = (1 - p)/p, which matches the mean and variance of dropout when training-time scaling is used. In Gaussian dropconnect, each element of the mask is sampled independently, which results in m_{i,j} ~ N(1, sigma_dc^2). In Gaussian dropout, each element in a row shares the same random variable, so m_{i,*} ~ N(1, sigma_dc^2).
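The four mask distributions above differ only in the shape of the randomness, so they fit in one small sampler. Below is a minimal NumPy sketch (our illustration, not the paper's code) of sampling M in W = V * M; rows of M correspond to rows of the weight matrix, so the "dropout" variants share one mask value per row, while the "dropconnect" variants sample every entry independently.

# Sketch of mask sampling for the Bernoulli/Gaussian dropout/dropconnect variants.
import numpy as np

rng = np.random.RandomState(0)

def sample_mask(shape, kind, p=0.5):
    sigma = np.sqrt((1.0 - p) / p)   # matches dropout's mean and variance
    if kind == "bernoulli_dropconnect":
        return rng.binomial(1, p, size=shape).astype(float)
    if kind == "bernoulli_dropout":  # one Bernoulli per row (per unit)
        return np.repeat(rng.binomial(1, p, size=(shape[0], 1)),
                         shape[1], axis=1).astype(float)
    if kind == "gaussian_dropconnect":
        return rng.normal(1.0, sigma, size=shape)
    if kind == "gaussian_dropout":   # one Gaussian per row (per unit)
        return np.repeat(rng.normal(1.0, sigma, size=(shape[0], 1)),
                         shape[1], axis=1)
    raise ValueError(kind)

V = rng.randn(4, 3)                                    # variational parameters
W = V * sample_mask(V.shape, "gaussian_dropconnect")   # one sampled weight matrix
# MC inference then averages predictions over n such samples:
# p(D_test) ~= (1/n) * sum_i p(D_test | W_i), with W_i = V * M_i.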
"}, {"section_index": "6", "section_name": "2.2.3 SPIKE-AND-SLAB DROPOUT", "section_text": "A spike-and-slab distribution is the normalized linear combination of a "spike" of probability mass at zero and a "slab" consisting of a Gaussian distribution. This spike-and-slab returns a 0 with some probability and otherwise draws a value from the Gaussian slab. We propose concurrently using Bernoulli dropout and Gaussian dropconnect to approximate the use of a spike-and-slab variational distribution by optimizing a lower-bound of the objective function (see Appendix A). In this formulation, m_{i,j} ~ b_{i,*} N(1, sigma_dc^2), where b_{i,*} ~ Bern(p_do) for each mask row and sigma_dc^2 = p_dc/(1 - p_dc). As for Bernoulli dropout, each row of the mask M, m_{i,*}, is multiplied by 0 with probability (1 - p_do); otherwise each element in that row is multiplied by a value independently sampled from a Gaussian distribution, as in Gaussian dropconnect. During non-sampling inference, spike-and-slab dropout uses the mean weight values and, per Bernoulli dropout, multiplies unit outputs by p_do. This differs from the work done by Louizos (2015) and Gal (2016) in that they used additive Gaussian noise and learn separate means and variances for each weight. In contrast, we define the variance as a function of the learned weight mean v_{i,j}. Tying the variance of a weight to its magnitude makes it only beneficial to learn large weights if they are robust to variance (Wang & Manning, 2013). Although we treat p_do and p_dc as hyperparameters, thereby reducing the space of variational distributions we optimize over, similar methods could potentially learn these during training (Louizos, 2015; Kingma et al., 2015; Gal, 2016).

[Figure 3 image: an MNIST digit with added Gaussian noise at several standard deviations]

Figure 3: Examples of MNIST images with added Gaussian noise with varying standard deviations.
"}, {"section_index": "7", "section_name": "3 RESULTS", "section_text": "In this paper, we investigate how using MC sampling affects a DNN's ability to represent the uncertainty of its probabilistic predictions. To test this, we trained several networks differing only in whether no sampling was performed (baseline DNN and DNN with L2-regularization), sampling was only performed during training (dropout and dropconnect), or sampling was performed both during training and inference (MC dropout and MC dropconnect). We used the MNIST and CIFAR-10 datasets to train networks that sampled from different variational distributions using the above methods.

For these DNNs, we compared the test classification error, the uncertainty of the softmax output, and the calibration of the softmax output for each type of sampling and variational distribution.

[Figure 4 image: test error, entropy and calibration vs. noise standard deviation for each fully connected network]

Figure 4: The MNIST test classification error, entropy, and calibration of the predictions of the fully connected networks: NN, NN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.
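Given any of the mask samplers above, MC inference as used in these experiments reduces to a few lines. The sketch below is our illustration, not the paper's code; stochastic_forward is a placeholder for any network whose weights (or unit masks) are resampled on each call. It averages the softmax output over n stochastic passes and scores uncertainty by the entropy of the averaged prediction.

# Sketch of Monte-Carlo prediction and predictive entropy.
import numpy as np

def mc_predict(stochastic_forward, x, n=100):
    # p(y | x) ~= (1/n) * sum_i softmax outputs from n sampled networks
    probs = np.mean([stochastic_forward(x) for _ in range(n)], axis=0)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=-1)
    return probs, entropy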
The test classification error shows how well the probability distribution learned by each DNN models the data. The uncertainty shows how the probability distribution learned by each DNN is distributed across classes. A low entropy means that the probability mass is primarily located at a few labels, and a high entropy means that the probability mass is distributed across many labels. The calibration shows how well the probability distribution learned by the DNN models its own uncertainty. We evaluated how calibrated a prediction was by the following procedure: (1) we binned test set predictions by predicted probability; (2) we calculated the percentage of predictions in each predicted-probability bin that correctly predicted a target label (perfect calibration means that targets predicted with probability z are correct in z x 100% of the cases); and (3) we calculated the mean squared calibration error, i.e. the mean across bins of the squared deviations between the bin-mean predicted probability and the proportion of correct predictions in that bin. We evaluated these three measures for the trained networks on the MNIST and CIFAR test sets with noise sampled from Gaussian distributions with varying standard deviations (Figure 3). This tested how well each network's uncertainty was modelled for the test sets and for the regions of input space not seen in the training set. For dropout and dropconnect, p was set to 0.5, which corresponds to the best value for regularizing a linear layer (Baldi & Sadowski, 2013). However, in practice, different values for p have been used (Srivastava et al., 2014). We found that 0.5 was a robust choice for the different networks, measures and sampling methods we used.

[Figure 5 image: calibration curves (predicted probability vs. empirical frequency) at noise standard deviations 0, 1 and 3 for the fully connected networks]

Figure 5: The calibration curves for the MNIST test set with and without Gaussian noise of the softmax outputs of the fully connected networks: NN, NN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.

Table 1: MNIST test error for the trained fully connected neural networks with and without Monte-Carlo (MC) sampling using 100 samples

Method                       Mean Error (%)   Error Std. Dev.
NN                           1.68             -
NN+L2                        1.64             -
Bernoulli DropConnect        1.33             -
MC Bernoulli DropConnect     1.30             0.04
Gaussian DropConnect         1.24             -
MC Gaussian DropConnect      1.27             0.03
Bernoulli Dropout            1.45             -
MC Bernoulli Dropout         1.42             0.03
Gaussian Dropout             1.36             -
MC Gaussian Dropout          1.37             0.03
Spike-and-Slab Dropout       1.23             -
MC Spike-and-Slab Dropout    1.23             0.03
The one exception was the dropconnect parameter used for spike-and-slab distributions, where 0.5 made learning difficult due to the variance during training. Through validation, we found that using larger values for the spike-and-slab probabilities (0.75 for the fully connected and 0.9 for the convolutional networks) allowed the networks to fit the training data better while still maintaining good generalization.
"}, {"section_index": "8", "section_name": "3.1 MNIST", "section_text": "We trained two groups of DNNs, one with a fully connected (FC) architecture and one with a convolutional architecture, on digit classification using the MNIST dataset (LeCun et al., 1998). This set contains 60,000 training images and 10,000 testing images. No data augmentation was used.
"}, {"section_index": "9", "section_name": "3.1.1 FULLY CONNECTED NEURAL NETWORKS", "section_text": "First, we trained DNNs with two FC hidden layers, each with 800 units and ReLU non-linearities. For the L2-regularized network, an L2-coefficient of 1e-5 was used for all weights. For the dropout methods, unit sampling was performed after each FC layer. For the dropconnect methods, every weight was sampled. The classification errors of the FC networks on the MNIST test set are shown in Table 1. Sampling during learning significantly increased accuracy in comparison to the baseline NNs, with the dropconnect-based networks being the most accurate. MC sampling at inference did not significantly increase accuracy. We found that Gaussian dropconnect and spike-and-slab dropout had the best accuracy.

The classification error, uncertainty, and calibration of the learned probability distributions of each FC network for varying levels of noise are shown in Figure 4. While not improving accuracy, MC sampling led to networks that better represent their own uncertainty. As the noise in the test set was increased, the uncertainty of the networks with MC sampling highly increased, especially when compared to networks with no sampling at inference. This resulted in better calibrated FC networks for all levels of noise.

The calibration curves show that sampling only during training, especially when using only dropout, led to overconfidence through placing too much probability mass on the most predicted label (Figure 5). In particular, sampling only during training resulted in under-confidence for low predicted probabilities and over-confidence for high predicted probabilities.
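For concreteness, the mean squared calibration error described in the results setup above can be sketched as follows. This is our illustration, not the paper's code, and the bin count is an assumption since the text does not fix one.

# Sketch of the binned mean squared calibration error.
import numpy as np

def mean_squared_calibration_error(probs, correct, n_bins=10):
    # probs: predicted probability of each predicted label, shape (N,)
    # correct: 1 if that prediction matched the target, else 0, shape (N,)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    errors = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (probs >= lo) & ((probs < hi) if hi < 1.0 else (probs <= hi))
        if in_bin.any():
            # squared deviation between bin-mean confidence and bin accuracy
            errors.append((probs[in_bin].mean() - correct[in_bin].mean()) ** 2)
    return float(np.mean(errors))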
By distributing probability mass over several labels, the DNNs that sampled at inference better represented the uncertainty of their predictions.

[Figure 6 image: test error, entropy and calibration vs. noise standard deviation for each convolutional network]

Figure 6: The MNIST test classification error, entropy, and calibration of the predictions of the convolutional networks: CNN, CNN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.

[Figure 7 image: calibration curves (predicted probability vs. empirical frequency) at noise standard deviations 0, 1 and 3 for the convolutional networks]

Figure 7: The calibration curves for the MNIST test set with and without Gaussian noise of the softmax outputs of the convolutional networks: CNN, CNN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.
"}, {"section_index": "10", "section_name": "3.1.2 CONVOLUTIONAL NEURAL NETWORKS", "section_text": "We also trained CNNs on MNIST. Every network had two convolutional layers and a fully-connected layer (see Appendix B for details). For the L2-regularized network, an L2-coefficient of 1e-5 was used for all weights. For Bernoulli and Gaussian dropout, dropout was performed after each convolutional layer and after the FC layer. For Bernoulli and Gaussian dropconnect, every weight was sampled. The classification error of the CNNs on the MNIST test set is shown in Table 2. Sampling during training significantly increased the accuracy for all of the networks, but especially for the Gaussian dropout network. However, unlike for the FC networks, the dropout-based methods were more accurate than the dropconnect-based methods. Unlike for the FC networks, spike-and-slab had accuracies more similar to Bernoulli dropout, which classified more accurately than Gaussian dropconnect.
MC sampling during inference did not significantly increase the accuracy of the networks.

Table 2: MNIST test error for the trained convolutional neural networks (CNNs) with and without Monte-Carlo (MC) sampling using 100 samples

Method                       Mean Error (%)   Error Std. Dev.
CNN                          0.70             -
CNN+L2                       0.70             -
Bernoulli DropConnect        0.59             -
MC Bernoulli DropConnect     0.59             0.02
Gaussian DropConnect         0.49             -
MC Gaussian DropConnect      0.49             0.01
Bernoulli Dropout            0.45             -
MC Bernoulli Dropout         0.46             0.01
Gaussian Dropout             0.38             -
MC Gaussian Dropout          0.37             0.01
Spike-and-Slab Dropout       0.43             -
MC Spike-and-Slab Dropout    0.44             0.01

The classification error, uncertainty, and calibration of the learned probability distributions of each network for varying levels of noise are shown in Figure 6. As with the FC networks, MC sampling at inference greatly increased the CNNs' ability to estimate their own uncertainty, particularly for inputs that are different from the training set. MC sampling led to increased entropy as inputs became more noisy, which resulted in better calibration. In particular, this was true of both the Bernoulli and Gaussian dropconnect networks, which very accurately represented their uncertainty even for highly noisy inputs. The spike-and-slab CNN had similarly robust calibration. The calibration curves show that not using MC sampling at inference led to networks that were under-confident when making low probability predictions and over-confident when making high probability predictions (Figure 7).

We trained large CNNs on natural image classification using the CIFAR-10 dataset, which contains 50,000 training images and 10,000 testing images (Krizhevsky & Hinton, 2009). The CNNs had 13 convolutional layers followed by a fully connected layer (see Appendix B for details). For L2-regularization, an L2-coefficient of 5e-4 was used for all weights. For the dropout networks, dropout was used after each convolutional layer, but before the non-linearities. For the dropconnect networks, all weights were sampled. During training, random horizontal flipping was used. The classification error of the CNNs on the CIFAR-10 test set is shown in Table 3. For each variational distribution, MC sampling significantly increased test accuracy. Also, the networks that used dropout, including spike-and-slab, had significantly higher accuracies than the networks that only used dropconnect.

Table 3: CIFAR-10 test error for the trained convolutional neural networks (CNNs) with and without Monte-Carlo (MC) sampling using 100 samples

Method                       Mean Error (%)   Error Std. Dev.
CNN                          19.63            -
CNN+L2                       19.44            -
Bernoulli DropConnect        17.64            -
MC Bernoulli DropConnect     17.29            0.05
Gaussian DropConnect         16.00            -
MC Gaussian DropConnect      15.63            0.04
Bernoulli Dropout            37.47            -
MC Bernoulli Dropout         10.19            0.06
Gaussian Dropout             24.10            -
MC Gaussian Dropout          9.29             0.10
Spike-and-Slab Dropout       18.05            -
MC Spike-and-Slab Dropout    10.44            0.03

The classification error, uncertainty, and calibration of the learned probability distributions of each network for varying levels of noise are shown in Figure 8. One of the major differences between the CIFAR-10 and the MNIST results was that using the layer-wise expectation for dropout did not produce good models, regardless of what variational distribution was used. Instead, the standard test-time dropout methods led to relatively inaccurate networks with very high output entropy even when
no input noise was used. This agrees with the results reported by Gal & Ghahramani (2015), who also found that using dropout at every layer can reduce accuracy if MC sampling is not used. However, these results differ from those of Srivastava et al. (2014). In our experience, deeper networks with higher regularization (e.g. Bernoulli dropout probabilities closer to 0.5) result in traditional dropout inference performing significantly worse than MC dropout. As for the MNIST networks, MC sampling at inference overall greatly increased the CIFAR-10 trained CNNs' ability to estimate their own uncertainty when no or little noise was added to the test images.

[Figure 8 image: test error, entropy and calibration vs. noise standard deviation for each CIFAR-10 network]

Figure 8: The CIFAR-10 test classification error, entropy, and calibration of the predictions of the convolutional neural networks: CNN, CNN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.

[Figure 9 image: calibration curves (predicted probability vs. empirical frequency) at noise standard deviations 0, 1 and 3 for the CIFAR-10 networks]

Figure 9: The calibration curves for the CIFAR-10 test set with and without Gaussian noise of the softmax outputs of the convolutional neural networks: CNN, CNN+L2, Bernoulli DropConnect (BDC) with and without Monte-Carlo (MC) sampling, Gaussian DropConnect (GDC) with and without MC sampling, Bernoulli Dropout (BDO) with and without MC sampling, Gaussian Dropout with and without MC sampling, and spike-and-slab Dropout (SSD) with and without MC sampling.

The classification accuracies and the ability to model uncertainty of the networks with dropconnect sampling were far more robust to noise than the networks with only dropout. However, the MC dropconnect networks are significantly less accurate than the MC dropout networks for the CIFAR-10 test set when no noise was added. Networks that used traditional dropout inference instead of sampling were consistently uncertain, regardless of the noise. These networks have worse calibration than the MC dropout networks at low levels of noise but better calibration than the MC dropout networks at high levels of noise, because they always had high uncertainty.
For CIFAR-10, not using MC sampling resulted in networks that were generally over-confident when making predictions (Figure 9). However, this was not true for the non-sampling dropout networks when no input noise was used. In that case, the networks were highly under-confident.
"}, {"section_index": "11", "section_name": "4 DISCUSSION", "section_text": "In this paper, we investigated the ability of MC sampling to improve a DNN's representation of its own uncertainty. We did this by training Bayesian DNNs with either multiplicative masking of the weights (dropconnect) or units (dropout) using Bernoulli, Gaussian, or spike-and-slab sampling. Based on the results, we draw the following main conclusions:

1. Sampling during both learning and inference improved a network's ability to represent its own uncertainty. MC sampling at inference improved the calibration of a network's predictions. Overall, this improvement was particularly large for inputs from outside the training set, which traditional models classified with high confidence despite not being trained on similar inputs.

2. Sampling weights independently led to networks that best represented their own uncertainty. For all the network architectures and datasets tested, using dropconnect sampling at training and inference resulted in the best calibrated networks overall. This was true regardless of whether dropconnect sampling led to the most accurate network. This is in contrast to CNNs with Gaussian dropout sampling, which were significantly the most accurate and also the worst calibrated of the networks with sampling both during training and inference.

For the FC networks, using dropconnect, particularly with Gaussian sampling, resulted in the most accurate networks. However, using dropout led to the most accurate CNNs. A potential cause of this is the large correlation in the information contained by adjacent elements in an image, which are often covered by the same convolutional kernel. This could mean that sampling the weights of a kernel does not provide as much regularization as the dropout methods.

Using spike-and-slab dropout, which combines Bernoulli dropout and Gaussian dropconnect, resulted in networks that performed well for all architectures. Spike-and-slab networks had accuracies similar to Bernoulli dropout or Gaussian dropconnect depending on which performed better for a given architecture and task: Gaussian dropconnect for FC networks and Bernoulli dropout for CNNs. Spike-and-slab networks also were robustly well calibrated, similar to all of the other dropconnect methods.

These scalable methods for improving a network's representation of its own uncertainty are widely applicable, since most DNNs already use dropout and getting uncertainty estimates only requires using MC sampling at inference. We plan to further investigate the use of different variational distributions. We also plan to evaluate the use of dropout and dropconnect sampling on large recurrent neural networks. Our results suggest that sampling at inference allows DNNs to efficiently represent their own uncertainty, an essential part of real-world perception and decision making.
"}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Yarin Gal and Sergii Strelchuk for their helpful discussions regarding the manuscript. This research was funded by the Cambridge Commonwealth, European & International Trust, the UK Medical Research Council (Program MC-A060-5PR20), and a European Research Council Starting Grant (ERC-2010-StG 261352).
"}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Pierre Baldi and Peter J. Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pp. 2814-2822, 2013.

David Barber and Christopher M. Bishop. Ensemble learning in Bayesian neural networks. NATO ASI Series F: Computer and Systems Sciences, 168:215-238, 1998.

Yarin Gal. Uncertainty in Deep Learning. PhD thesis, University of Cambridge, 2016.

Yarin Gal and Zoubin Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational inference. In 4th International Conference on Learning Representations (ICLR) workshop track, 2016.

Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pp. 2348-2356, 2011.

Geoffrey E. Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5-13. ACM, 1993.

Hemant Ishwaran and J. Sunil Rao. Spike and slab variable selection: frequentist and Bayesian strategies. Annals of Statistics, pp. 730-773, 2005.

Diederik P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575-2583, 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Chunyuan Li, Andrew Stevens, Changyou Chen, Yunchen Pu, Zhe Gan, and Lawrence Carin. Learning weight uncertainty with stochastic gradient MCMC for shape classification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

David J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448-472, 1992.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search.
Nature, 529(7587):484-489, 2016.

For Bayesian inference:

p(D_{test} | D_{train}) = \int p(D_{test} | W) p(W | D_{train}) \, dW \approx \int p(D_{test} | W) q(W) \, dW

Where the variational distribution q(W) is learned by maximizing the log-evidence lower bound:

\log(p(D_{train})) \geq \int \log(p(D_{train} | W)) q(W) \, dW - KL(q(W) \,||\, p(W))

For spike-and-slab dropout, as when using Bernoulli dropout, W = B \circ V where b_{i,*} ~ Bern(p_do), so if we assume independence between the random variables B and V:

\log(p(D_{train})) \geq \int\int \log(p(D_{train} | B, V)) q(B) q(V) \, dV \, dB - KL(q(B) \,||\, p(B)) - KL(q(V) \,||\, p(V))

For a spike-and-slab distribution, each element of V is independently sampled from a Gaussian whose variance is tied to its mean, v_{i,j} ~ N(\tilde{v}_{i,j}, \sigma_{dc}^2 \tilde{v}_{i,j}^2). V can be sampled using the "reparameterization trick":

v_{i,j} = \tilde{v}_{i,j} + \sigma_{dc} \tilde{v}_{i,j} \epsilon_{i,j}, \quad \epsilon_{i,j} \sim N(0, 1),

which gives:

\log(p(D_{train})) \geq \int\int \log(p(D_{train} | B, V)) q(\epsilon) q(B) \, d\epsilon \, dB - KL(q(B) \,||\, p(B)) - KL(q(V) \,||\, p(V))

This results in the following minimization objective function:

L_V := -\int\int \log(p(D_{train} | B, V)) q(\epsilon) q(B) \, d\epsilon \, dB + KL(q(B) \,||\, p(B)) + KL(q(V) \,||\, p(V))

By using L2 regularization, we are optimizing a lower-bound of the KLD between q(V) and N(0, \lambda^{-1}) by only matching the first moment (i.e. the mean), KL(q(v_{i,j}) || p(v_{i,j})) \geq \frac{\lambda}{2} \tilde{v}_{i,j}^2, so that:

L_V := -\int\int \log(p(D_{train} | B, V)) q(\epsilon) q(B) \, d\epsilon \, dB + \frac{\lambda}{2} \sum_{i,j} \tilde{v}_{i,j}^2

In practice, the expectations are estimated with Monte-Carlo samples:

\int\int \log(p(D_{train} | B, V)) q(\epsilon) q(B) \, d\epsilon \, dB \approx \frac{1}{n} \sum_{(B,\epsilon)} \log(p(D_{train} | B, V)), \quad p(D_{test}) \approx \frac{1}{n} \sum_{(B,\epsilon)} p(D_{test} | B, V)

Table B.1: The convolutional neural network (CNN) architecture used for MNIST.

Layer      Kernel Size  # Features  Stride  Non-linearity
Conv-1     5x5          32          1       ReLU
MaxPool-1  2x2          32          2       Max
Conv-2     5x5          64          1       ReLU
MaxPool-2  2x2          64          2       Max
FC         1500         500         -       ReLU
Linear     500          10          -       -

Table B.2: The convolutional neural network (CNN) architecture used for CIFAR-10.

Layer      Kernel Size  # Features  Stride  Non-linearity
Conv-1     3x3          64          1       ReLU
Conv-2     3x3          64          1       ReLU
MaxPool-1  2x2          64          2       Max
Conv-3     3x3          128         1       ReLU
Conv-4     3x3          128         1       ReLU
MaxPool-2  2x2          128         2       Max
Conv-5     3x3          256         1       ReLU
Conv-6     3x3          256         1       ReLU
Conv-7     3x3          256         1       ReLU
MaxPool-3  2x2          256         2       Max
Conv-8     3x3          512         1       ReLU
Conv-9     3x3          512         1       ReLU
Conv-10    3x3          512         1       ReLU
MaxPool-4  2x2          512         2       Max
Conv-11    3x3          512         1       ReLU
Conv-12    3x3          512         1       ReLU
Conv-13    3x3          512         1       ReLU
MaxPool-5  2x2          512         2       Max
FC         512          512         -       ReLU
Linear     512          10          -       -
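The sampling step in this appendix is easy to express in code. Below is a minimal NumPy sketch under our reading of the derivation above (the slab variance tied to the squared weight mean, and a row-wise Bernoulli spike); names such as sample_slab and sample_spike_and_slab are our own, not the paper's.

# Sketch of the reparameterized spike-and-slab weight sampling.
import numpy as np

rng = np.random.RandomState(0)

def sample_slab(v_mean, sigma_dc):
    # v_ij = v_mean_ij + sigma_dc * v_mean_ij * eps_ij, eps_ij ~ N(0, 1);
    # this keeps the sample differentiable in the variational means v_mean.
    eps = rng.normal(0.0, 1.0, size=v_mean.shape)
    return v_mean + sigma_dc * v_mean * eps

def sample_spike_and_slab(v_mean, p_do, sigma_dc):
    b = rng.binomial(1, p_do, size=(v_mean.shape[0], 1))  # spike: one per row
    return b * sample_slab(v_mean, sigma_dc)              # W = B o V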
"}]

Sk8J83oee | [{"section_index": "0", "section_name": "GENERATIVE ADVERSARIAL PARALLELIZATION", "section_text": "Daniel Jiwoong Im
AIFounded Inc.
Toronto, ON
{daniel.im}@aifounded.com
{hma02, ckim07, gwtaylor}@uoguelph.ca
"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Generative Adversarial Networks (GAN) have become one of the most studied frameworks for unsupervised learning due to their intuitive formulation. They have also been shown to be capable of generating convincing examples in limited domains, such as low-resolution images. However, they still prove difficult to train in practice and tend to ignore modes of the data generating distribution. Quantitatively capturing effects such as mode coverage and, more generally, the quality of the generative model still remains elusive. We propose Generative Adversarial Parallelization (GAP), a framework in which many GANs or their variants are trained simultaneously, exchanging their discriminators. This eliminates the tight coupling between a generator and discriminator, leading to improved convergence and improved coverage of modes. We also propose an improved variant of the recently proposed Generative Adversarial Metric and show how it can score individual GANs or their collections under the GAP model.
"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Compared with other generative models, GANs have several appealing properties:

- The objective function is not restricted to distances in input (e.g. pixel) space, for example, reconstruction error. Moreover, there is no restriction to certain types of functional forms, such as having a Bernoulli or Gaussian output distribution.
- Compared to undirected probabilistic graphical models (Hinton et al., 2006; Salakhutdinov & Hinton, 2009), samples are generated in a single pass rather than iteratively. Moreover, the time to generate a sample is much less than for recurrent models like PixelRNN (Oord et al., 2016).
- Unlike inverse transformation sampling models, the latent variable size is not restricted (Hyvarinen & Pajunen, 1999; Dinh et al., 2014).

In contrast, GANs are known to be difficult to train, especially as the data generating distribution becomes more complex. There have been some attempts to address this issue. For example, Salimans et al. (2016) propose several tricks such as feature matching and minibatch discrimination. In this work, we attempt to address training difficulty in a different way: extending two player generative adversarial games into a multi-player game. This amounts to training many GAN-like variants in parallel, periodically swapping their discriminators such that generator-discriminator coupling is reduced. Figure 1 provides a graphical depiction of our method.

Besides the training dilemma, from the point of view of density estimation, GANs possess very different characteristics compared to other probabilistic generative models. Most probabilistic models distribute the probability mass over the entire domain, whereas a GAN by nature puts point-wise probability mass near the data. Whether this is a desirable property or not is still an open question^1. However, the primary concern with this property is that a GAN may fail to allocate mass to
We argue that our proposed model coulc alleviate this problem.\nThat our solution involves training many pairs of generators and discriminators together is a produc of the fact that deep learning algorithms and distributed systems have been co-evolving for some time. Hardware accelerators, specifically Graphics Processing Units, (GPUs) have played a funda mental role in advancing deep learning, in particular because deep architectures are so well suitec to parallelism (Coates et al.2013). Data-based parallelism distributes large datasets over disparate nodes. Model-based parallelism allows complex models to be split over nodes. In both cases, learn ing must account for the coordination and communication among processors. Our work leverages recent advances along these lines (Ma et al.]2016).\nThe concept of a two player zero-sum game is borrowed from game theory in order to train a gen erative adversarial network (Goodfellow et al.2014). A GAN consists of a generator G and dis criminator D, both parameterized as feed-forward neural networks. The goal of the generator is to generate samples that fool the discriminator from thinking that those samples are from the data distribution p(x), ad interim the discriminative network's goal is to not get tricked by the generator\nThis view is formalized into a minimax objective such that the discriminator maximizes the ex pectation of its predictions while the generator minimizes the expectation of the discriminator's predictions,\nwhere Og and 0p are the parameters (weights) of the neural networks, pp is the data distribution and pg is the prior distribution of the generative network\nThe reality of training GANs is quite different from the ideal case due to the following reasons\n1. The discriminative and generative networks are bounded by a finite number of parameters, whicl limits their modeling capacity.\nFigure 1: Depiction of GAN, Parallel GAN, and GAP. Not intended to be interpreted as a graphical. model. The difference between Figure (b) and (c) is that typical data-based parallelization is based on multiple models which share parameters. In contrast, GAP requires multiple models with their. own parameters which are structured in a bipartite formation..\nmin max V(D, G) = min max |Ex~pp[log D(x)] + Ez~pg [log (1 - D(G(z))) 0G0 p 0G 0 p\nnin max V(D, G) = min max Ex~pp[log D(x)] + Ez~pg[log (1- D(G(z))] 0G0D 0G 0D\nProposition 2 in (Goodfellow et al. 2014) illustrates the ideal concept of the solution. For two player. game, each network's gain of the utility (loss of the cost) ought to balance out the gain (loss) of the other network. In this scenario, the generator's distribution becomes the data distribution. Remark that when the objective function is convex, gradient-based training is guaranteed to converge to a saddle point.\n2. Practically speaking, the second term of the objective function in Equation[1is a bottleneck early on in training, where the discriminator can perfectly distinguish the noisy samples coming from. the generator. The argument of the log saturates and gradient will not flow to the generator. 3. The GAN objective function is known to be non-convex and it is defined over a high-dimensional. space. This often results in failure of gradient-based training to converge..\nminlog(1 - D(G(z))) -> maxlog(D(G(z))) 0G 0G\nThis provides better gradient flow in the earlier stages of training (Goodfellow et al.]2014)\nAlthough there have been cascades of success in image generation tasks using advanced GAN. Radford et al. 
Although there have been cascades of success in image generation tasks using advanced GANs (Radford et al., 2015; Im et al., 2016; Salimans et al., 2016), all of them mention the problem of difficulty in training. For example, Radford et al. (2015) state that the generator "... collapsing all samples to a single point ..." is a common failure mode observed in GANs. This scenario can occur when the generator allocates most of its probability mass to a single sample that the discriminator has difficulty learning. Empirically, convergence of the learning curve does not correspond to improved quality of samples coming from the GAN and vice-versa. This is primarily caused by the third issue mentioned above. Gradient-based optimization methods are only guaranteed to converge to a Nash Equilibrium for convex functions, whereas the loss surfaces of the neural networks used in GANs are highly non-convex and there is no guarantee that a Nash Equilibrium even exists.

The subject of generative modeling with GANs has undergone intensive study, and model evaluation between various types of GANs is a topic of increased interest and debate (Theis et al., 2015). Our work is inspired by the Generative Adversarial Metric (GAM) (Im et al., 2016). The GAM enables us to quantitatively evaluate any pair of GANs. The core concept of the GAM is to swap one discriminator (generator) with the other discriminator (generator) during the test phase (see the pictorial example in Figure 8). The GAM concept can easily be extended from evaluation to the training phase.

Note that although we train multiple GANs, unlike data parallelism we do not train them independently with shared parameters; rather, we try to produce synergy effects among different GANs during the training phase. This can be achieved simply by randomly swapping different discriminators (generators) every K updates. After training multiple GANs with our proposed method, we can select the best one based on the GAM.

We call our proposed method generative adversarial parallelization (GAP). Note that our method is not model-specific in the sense that GAP can be applied to any extension of GANs. For example, GAP can be applied to DCGAN or GRAN, or we can even apply GAP on several types of GANs simultaneously. Say we have four GPUs available on which to parallelize models: we can allocate two GPUs for DCGANs and the remaining two GPUs for GRANs. Therefore, we view GAP as an operator rather than a model topology/architecture.

[Figure 2 image: generator-discriminator pairings over time; pairings such as (Disc1, Gen1), ..., (Disc4, Gen4) are randomly re-assigned as training proceeds]

Figure 2: A cartoon illustration of Generative Adversarial Parallelization. Generators and discriminators are represented by different monks and sensei. The pairing between monks and sensei is randomly substituted over time.

The pseudocode is shown in Algorithm 1.

Algorithm 1 Training procedure of GAP
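The body of Algorithm 1 did not survive extraction above; the following is a hedged Python sketch of the swapping schedule the text describes: train K GAN pairs in parallel and randomly re-assign discriminators to generators every swap_every updates. gan_update(G, D, batch), denoting one ordinary GAN training step, is a placeholder of our own, not a function from the paper.

# Sketch of the GAP training loop with periodic discriminator swapping.
import random

def train_gap(generators, discriminators, batches, swap_every, gan_update):
    assignment = list(range(len(generators)))   # generator i uses D[assignment[i]]
    for t, batch in enumerate(batches):
        for i, G in enumerate(generators):
            gan_update(G, discriminators[assignment[i]], batch)
        if (t + 1) % swap_every == 0:
            random.shuffle(assignment)          # random re-pairing (cf. Figure 2)
    return generators, discriminators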
In a two player generative adversarial game, the concept of overfitting still exists. However, the realization of overfitting can be hard to notice. This is mainly due to not having a reconstructive error function. For models with a reconstruction-based objective, samples will simply become identical to the training data as the error approaches zero. On the other hand, with the GAN objective, even when the error approaches zero, it does not imply that the samples will look like the data. So, how can we characterize overfitting in a GAN?

We argue that overfitting in GANs manifests itself differently than in reconstructive models. Let us explain using two analogies to describe this phenomenon. Consider a generator as a judo fighter and a discriminator as a sparring partner. When a judo fighter is only trained with the same sparring partner, his/her fighting strategy will naturally adapt to the style of his/her sparring partner. Thus, when the fighter is exposed to a new fighter with a different style, this fighter may suffer. Similarly, if a student learns from a single teacher, his/her learning experience will not only be limited but even overfitted to the teacher's style of exams (see Figure 2). Equivalently, a paired generator and discriminator are likely to be adapted to their own strategy. Here, GAP intrinsically prevents this problem as the generator (discriminator) periodically gets paired with different discriminators (generators). Thus, GAP can be viewed as a regularizer.
"}, {"section_index": "3", "section_name": "3.2 MODE COVERAGE", "section_text": "The kind of overfitting problem mentioned above further relates to the problem of assigning probability mass to different modes of the data generating distribution - what we call mode coverage.

Let us re-consider the example introduced in Section 2.1. Say the generator was able to figure out a single mode from which samples are drawn that confuse the discriminator. As long as the discriminator does not learn to fix this problem, the generator is not motivated to consider any other modes. This kind of scenario allows the generator to cheat by staying within a single, or small set of, modes rather than exploring alternatives.

The story is not exactly the same when there are several different discriminators interacting with each generator. Since different discriminators may be good at distinguishing samples from different modes, each generator must put some effort into fooling all of the discriminators by generating samples from different modes. The situation where samples from a single mode fool all of the discriminators grows much less likely as the number and diversity of discriminators and generators increases (see Figures 3 and 4). Full details of this visualization are provided in Section 4.1.

[Figure 3 image: (a) R15 dataset, (b) GAN samples, (c) GAPGAN4 samples]

Figure 3: (a) The R15 dataset. Samples drawn from (b) GAN and (c) GAPGAN4. GAPGAN4 denotes four GANs trained in parallel with swapping at every epoch. The two models were trained using 100 out of 600 data points from the R15 dataset.

[Figure 4 image: (a) Mixture of Gaussians dataset, (b) GAN samples, (c) GAPGAN4 samples]

Figure 4: (a) The Mixture of Gaussians dataset. Samples drawn from (b) GAN and (c) GAPGAN4. GAPGAN4 denotes four GANs trained in parallel with swapping at every epoch. The two models were trained using 2500 examples.
"}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "As it is difficult to quantitatively assess mode coverage, first we aim to visualize samples from GAP vs. other GAN variants on low-dimensional (toy) datasets as well as low-dimensional projections on real data.
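On toy sets like those in Figures 3 and 4, where the true cluster centres are known, one naive way to quantify mode coverage is to count the distinct modes that receive generated samples. The sketch below is our own illustration of such a count, not a protocol from the paper; the min_hits threshold is an assumption.

# Sketch: count how many known modes a set of generated samples covers.
import numpy as np

def modes_covered(samples, centres, min_hits=5):
    # samples: (N, d) generated points; centres: (K, d) true cluster centres
    d = np.linalg.norm(samples[:, None, :] - centres[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)                        # closest centre per sample
    hits = np.bincount(nearest, minlength=len(centres))
    return int((hits >= min_hits).sum())              # modes with enough samples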
Then, to evaluate each model quantitatively, we apply the GAM-II metric, a re-formulation of GAM (Im et al., 2016) that can be used to compare different GAN architectures. Its motivation and use are described in Section 4.1. We consider, in total, five GAP variants, which are summarized in Table 1.

Table 1: GAP variants and their short-hand labels considered in our experiments

Name    Model                    Description
GAPD2   GAP(DCGANx2)             Two DCGANs trained with GAP.
GAPD4   GAP(DCGANx4)             Four DCGANs trained with GAP.
GAPG2   GAP(GRANx2)              Two GRANs trained with GAP.
GAPG4   GAP(GRANx4)              Four GRANs trained with GAP.
GAPC4   GAP(DCGANx2, GRANx2)     Two DCGANs and two GRANs trained with GAP.
"}, {"section_index": "5", "section_name": "4.1 EXPERIMENTAL SETUP", "section_text": "All of our models are implemented in Theano (Bergstra et al., 2010) - a Python library that facilitates deep learning research^2. Because every update of each model is implemented as a separate process during training, swapping their parameters among different GANs necessitates interprocess communication^3. Similar to the Theano-MPI framework, we chose to do inter-GPU memory transfer instead of passing through host memory in order to reduce communication overhead. Random swapping of the two discriminators' parameters is achieved with an in-place MPI_SendRecv operation, as DCGAN and GRAN share the same architecture and therefore the same parameterization.

^2 The Theano-based DCGAN and GRAN implementations were based on https://github.com/Newmu/dcga and https://github.com/jiwoongim/GRAN, respectively.

Throughout the experiments, all datasets were normalized between [0, 1]. We used the same hyper-parameters reported in (Radford et al., 2015) and (Im et al., 2016) for DCGAN and GRAN, respectively. The only additional hyper-parameter introduced by GAP is the frequency of swapping discriminators during training. We also made deliberate fine-grained distinctions among the GANs trained under GAP. These were: i) the generator's prior distribution was selected as either uniform or Gaussian; ii) the order of mini-batches was permuted during learning; and iii) noise was injected at the input during learning and the amount of noise was decayed over time. The point of introducing these distinctions was to avoid multiple GANs converging to the same or very similar solutions. Lastly, we used gradient clipping (Pascanu et al., 2013) on both discriminators and generators.

To measure the performance of GANs, our first attempt was to apply GAM to evaluate our model. Unfortunately, we realized that GAM is not applicable when comparing GAP vs. non-GAP models. This is because GAM requires the discriminators of the GANs under comparison to have similar error rates on a held-out test set. However, as shown in Figure 6, GAP boosts the generalization of the discriminators, which causes them to have different test error rates compared to the error rates of non-GAP models. Hence, we propose a new metric that omits the GAM's constraints, which we call GAM-II. It simply measures the average (or worst case) error rate among a collection of discriminators. A detailed description of GAM-II is provided in Appendix A.1.

Determining whether applying GAP achieves broader mode coverage is difficult to validate in high-dimensional spaces. Therefore, we initially verified GAP and non-GAP models on two low-dimensional synthetic datasets. The R15 dataset contains 600 two-dimensional data points with 15 clusters, as shown in Figure 3a. The Mixture of Gaussians dataset contains 2,500 two-dimensional data points with 25 clusters, as shown in Figure 4a.
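Such a Mixture of Gaussians toy set is simple to generate. Below is a minimal NumPy sketch matching the counts given above (25 cluster centres, 2,500 points); the grid spacing and the noise scale are our assumptions, since the text only specifies the counts.

# Sketch: 2,500 two-dimensional points around a 5x5 grid of 25 Gaussian modes.
import numpy as np

rng = np.random.RandomState(0)
centres = np.array([(x, y) for x in np.linspace(-1, 1, 5)
                            for y in np.linspace(-1, 1, 5)])   # 25 centres
data = np.concatenate([c + 0.02 * rng.randn(100, 2) for c in centres])
# data.shape == (2500, 2)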
The R15 dataset contains 600 two-dimensional data points grouped into 15 clusters, as shown in Figure 3a. The Mixture of Gaussians dataset contains 2,500 two-dimensional data points grouped into 25 clusters, as shown in Figure 4a.

Both the discriminator and generator had four fully-connected batch-normalized layers with ReLU activation units. We first optimized the hyper-parameters of a single GAN based on visually inspecting the samples that it generated (i.e. Figure 3 shows samples from the best performing single GAN that we trained). We then trained four parallelized GANs using the same hyper-parameters as the best single GAN.

The samples generated from both models are shown in Figures 3 and 4. We observe that GAPGAN4 produces samples that look more similar to the original dataset compared to a single GAN. The overlap of samples generated by four GANs is consistent with Figure 3c. Note that as we decrease the number of training points, the overlap of GAN samples deviates from the original dataset, while GAP seems not to suffer from this phenomenon. For example, when we used all 600 examples of R15, both GAN and GAP samples matched the distribution of the data in Figure 3a. However, as we used fewer training examples, GAN failed to accurately model the data distribution by dropping modes. The samples plotted in Figure 3c are based on training each model with a random subset of 100 examples drawn from the original 600. Based on the synthetic experiments, we confirm that GAP can improve mode coverage when a limited number of training samples is available.

Figure 5: CIFAR-10 samples from (a) GAP(DCGANx4) and (b) GAP(GRANx4). Best viewed in colour. More samples are provided in the Appendix.

In order to gain a qualitative sense of models trained using a high-dimensional dataset, we considered two experiments: i) we examined the class label predictions made on samples from each model to check how uniformly they were distributed; the histogram of the predicted classes is provided in Figure 14. ii) we created a t-SNE visualization of generated samples overlaid on top of the true data (see Appendix A.2). We find that the intersection of data points and samples generated by GAP is slightly better than that of samples generated by individual GANs. In addition to the synthetic data results, these visualizations suggest some favourable properties of GAP, but we hesitate to draw any strong conclusions from them."}, {"section_index": "6", "section_name": "Q: Does GAP enhance generalization?", "section_text": "To answer this question, we considered the MNIST, CIFAR-10, and LSUN church datasets, which are often used to evaluate GAN variants. MNIST and CIFAR-10 consist of 50,000 training and 10,000 test images of size 28x28 and 32x32x3 pixels, respectively. Each contains 10 different classes of objects. The LSUN church dataset contains various outdoor church images. These high-resolution images were downsampled to 64x64 pixels. The training set consists of 126,227 examples.

One implicit but imperfect way to measure the generalization of a GAN is to observe the generalization of the discriminator alone. This is because the generator is influenced by the discriminator and vice versa. If the discriminator is overfitting the training data, then the generator must be biased towards the training data as well. Here, we plot the learning curve of the discriminator during training for both GAP(DCGAN) and GAP(GRAN).
Figure 6 shows the learning curve for a single model versus groups of two and four models parallelized under GAP. We observe that more parallelization leads to less of a spread between the train and validation curves, indicating the ability of GAP to improve generalization. Note that in order to plot a single representative learning curve while training multiple models under GAP, we averaged the learning curves of the individual models. To demonstrate that our observations are not merely attributable to smoothing by averaging, we show individual learning curves of the parallelized GANs (see Figure 13 in Appendix A.3). From now on, we will work with GAPD4 and GAPG4.

Figure 6: Discriminator learning curves on CIFAR-10 as a proxy for generalization performance, for (a) DCGAN and (b) GRAN. As parallelization scales up, the spread between training and validation cost shrinks. Note that the curves corresponding to "GAP(DCGANx2)", "GAP(DCGANx4)", "GAP(GRANx2)" and "GAP(GRANx4)" are averages of the corresponding GAP models. See Figure 13 in Appendix A.3 for the individual curves before averaging.

Q: How does the rate at which discriminators are swapped affect training?

Figure 7: The standard deviations of the validation costs at various swapping frequencies, for (a) GAP(DCGAN) and (b) GAP(GRAN) trained on CIFAR-10. From top to bottom: 0.1, 0.3, 0.5, 0.7, and 1.0 per epoch.

As noted earlier, the swapping frequency is the only additional hyper-parameter introduced by GAP. We conduct a simple sensitivity analysis by plotting the validation cost of each GAN during training, along with its standard deviation, in Figure 7. We observe that GAP(DCGAN) varies the least at a swapping frequency of 0.5 - swapping twice per epoch. Meanwhile, GAP(GRANs) are not too sensitive to swapping frequencies above 0.1. Figure 12 in Appendix A.3 plots learning curves at different swapping frequencies. Across all rates, we still see that the spread between the training and validation costs decreases with the number of GANs trained in parallel.

We used GAM-II to evaluate GAP (see Appendix A.1). We first looked at the performance over four models: DCGAN, GRAN, GAPD4, and GAPG4. We also considered combining multiple GAN variants in a GAP model (hybrid GAP). We denote this model as GAPC4. GAPC4 consists of two DCGANs and two GRANs trained with GAP. Overall, we have ten generators and ten discriminators for DCGAN and GRAN: four discriminators from the individually-trained models, four discriminators from GAP, and two discriminators from the GAP combination, GAPC4. We used the collection of all ten discriminators to evaluate the generators. Tables 2 and 3 present the results. Note that we report the minimum and maximum of the average and worst error rates among the four GANs. Looking at the average errors, GAPD4 strongly outperforms DCGAN on all datasets. GAPG4 outperforms GRAN on CIFAR-10 and MNIST and strongly outperforms it on LSUN. For the case of the maximum worst-case error, GAP outperforms both DCGAN and GRAN across all datasets. However, we did not find an improvement for GAPC4 based on the GAM-II metric.

Additionally, we estimated the log-likelihood assigned by each model based on a recently proposed evaluation scheme that uses Annealed Importance Sampling (Wu et al., 2016). With the code provided by (Wu et al., 2016), we were able to evaluate DCGANs trained by GAPD4 and GAPC4.6 The results are shown in Table 4. Again, these results show that GAPD4 improves on DCGAN's performance, but there is no advantage to using the combined GAPC4.

6 Unfortunately, we did not get the code provided by (Wu et al., 2016) to work on GRAN.

Samples from each CIFAR-10 and LSUN model are reproduced for visual inspection in Figures 15, 16, 17, 18, and 19.

Table 2: DCGANs versus GAP(DCGAN) evaluation using GAM-II. Each cell reports min / max.

Dataset | Measure | DCGAN | GAPD4 | GAPC4
MNIST | Avg. | 0.352 / 0.395 | 0.430 / 0.476 | 0.398 / 0.423
MNIST | Worst | 0.312 / 0.351 | 0.355 / 0.405 | 0.326 / 0.343
CIFAR-10 | Avg. | 0.333 / 0.368 | 0.526 / 0.565 | 0.888 / 0.902
CIFAR-10 | Worst | 0.173 / 0.225 | 0.174 / 0.325 | 0.551 / 0.615
LSUN | Avg. | 0.592 / 0.628 | 0.619 / 0.652 | 0.108 / 0.180
LSUN | Worst | 0.039 / 0.078 | 0.285 / 0.360 | 0.0 / 0.0
Table 3: GRAN versus GAP(GRAN) evaluation using GAM-II. Each cell reports min / max.

Dataset | Measure | GRAN | GAPG4 | GAPC4
MNIST | Avg. | 0.433 / 0.465 | 0.510 / 0.533 | 0.459 / 0.474
MNIST | Worst | 0.004 / 0.020 | 0.008 / 0.020 | 0.010 / 0.012
CIFAR-10 | Avg. | 0.289 / 0.355 | 0.332 / 0.416 | 0.306 / 0.319
CIFAR-10 | Worst | 0.006 / 0.019 | 0.048 / 0.171 | 0.001 / 0.023
LSUN | Avg. | 0.477 / 0.590 | 0.568 / 0.649 | 0.574 / 0.636
LSUN | Worst | 0.013 / 0.043 | 0.022 / 0.055 | 0.015 / 0.021

"}, {"section_index": "7", "section_name": "5 DISCUSSION", "section_text": "We have proposed Generative Adversarial Parallelization, a framework in which several adversarially-trained models are trained together, exchanging discriminators. We argue that this reduces the tight coupling between generator and discriminator, and show empirically that this has a beneficial effect on mode coverage, convergence, and quality of the model under the GAM-II metric. Several directions of future investigation are possible. This includes applying GAP to the evolving variety of adversarial models, like ImprovedGAN (Salimans et al., 2016). We still view stability as an issue and partially address it by tricks such as clipping the gradient of the discriminator. In this work, we only explored synchronous training of GANs under GAP; however, asynchronous training may provide more stability. Recent work has explored the connection between GANs and actor-critic methods in reinforcement learning (Pfau and Vinyals, 2016). Under this view, we believe that GAP may have interesting implications for multi-agent RL. Although we have assessed mode coverage qualitatively, either directly or indirectly via projections, quantitatively assessing mode coverage for generative models is still an open research problem."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proc. SciPy, 2010.

Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Proceedings of the Neural Information Processing Systems (NIPS), 2015.

Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: non-linear independent components estimation. In arXiv preprint arXiv:1410.8516, 2014.

Aapo Hyvarinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12:429-439, 1999.

Daniel Jiwoong Im, Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. In arXiv preprint arXiv:1602.05110, 2016.

Radford M. Neal. Annealed importance sampling. In arXiv preprint arXiv:9803008, 1998.

Aaron van den Oord,
Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In Proceedings of the International Conference on Machine Learning (ICML), 2016.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the International Conference on Machine Learning (ICML), 2013.

David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. arXiv preprint, October 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In arXiv preprint arXiv:1511.06434, 2015.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2009.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In arXiv preprint arXiv:1606.03498, 2016.

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint, November 2015.

Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the quantitative analysis of decoder-based generative models. In arXiv preprint arXiv:1611.04273, 2016.

A.1 GENERATIVE ADVERSARIAL METRIC II

Figure 8: Illustration of the Generative Adversarial Metric.

Figure 9: Illustration of the Generative Adversarial Metric II, comparing DCGAN x 4, GAP(DCGAN x 4), and GAP(DCGAN x 2, GRAN x 2).

Although GAM is a valid metric, as it measures the likelihood ratio of two generative models, it is hard to apply in practice. This is due to its test-ratio constraint, which imposes the condition that the ratio between test error rates be approximately unity. However, because GAP improves the generalization of GANs, as shown in Figure 6, the test ratio often does not equal one (see Section 4). We introduce a new generative adversarial metric, and call it GAM-II.

GAM-II evaluates a model based on either the average error rate or the worst error rate of a collection of discriminators, given a set of samples from each model to be evaluated:

$$\operatorname*{argmax}_{\{G_j \mid S_j \sim p_{g_j}\}} \bar{e}(S_j) = \operatorname*{argmax}_{\{G_j \mid S_j \sim p_{g_j}\}} \frac{1}{N_j} \sum_{i=1}^{N_j} e(S_j \mid D_i)$$

$$\operatorname*{argmax}_{\{G_j \mid S_j \sim p_{g_j}\}} e(S_j) = \operatorname*{argmax}_{\{G_j \mid S_j \sim p_{g_j}\}} \min_{i = 1, \ldots, N_j} e(S_j \mid D_i)$$

where e outputs the classification error rate, and N_j denotes all discriminators except for the ones that generator j saw during training. For example, the comparison of DCGAN and GAP applied to four DCGANs is shown in Figure 9.

Definition. We say that GAP helps if at least one of the models trained with GAP performs better than a single model. Moreover, GAP strongly-helps if all models trained with GAP perform better than a single model.
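To make the metric concrete, the following is a minimal sketch (our illustration, not the authors' code) of GAM-II as defined above; `discriminators[i](samples)` is an assumed callable returning the error rate of discriminator i on a batch of generated samples.

```python
# A minimal sketch of GAM-II: each generator j is scored by the average (or
# worst-case) error rate its samples induce on the discriminators it was
# never trained against.
def gam2_scores(samples_per_gen, discriminators, seen):
    """seen[j] = set of discriminator indices generator j trained against."""
    avg, worst = {}, {}
    for j, samples in samples_per_gen.items():
        errs = [d(samples) for i, d in enumerate(discriminators)
                if i not in seen[j]]
        avg[j] = sum(errs) / len(errs)   # average error over unseen D_i
        worst[j] = min(errs)             # worst case: the hardest-to-fool D_i
    return avg, worst                    # higher is better for generator j
```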
In our experiments, we assess GAP based on the definition above.

In order to get a qualitative sense of models trained using a high-dimensional dataset, we consider a t-SNE map of generated samples overlaid on top of the true data. Normally, a t-SNE map is used to visualize clusters of embedded high-dimensional data. Here, we are more interested in the overlap between true data and generated samples, by visualizing clusters which we interpret as modes of the data generating distribution.

Figures 10 and 11 present the t-SNE map of data and samples from single models and from multiple GANs trained under GAP. We find that the intersection of data points and samples generated by GAP is slightly better than that of samples generated by individual GANs. This provides an incomplete view but is nevertheless a helpful visualization.

Figure 10: t-SNE mapping of data and sample points on CIFAR-10. The points are colour coded as: Data (Black), Single Model (Magenta), and GAP (Cyan). Note that, particularly for the figure on the right, there seems to be more overlap between the data and the GAP-generated samples compared to the GAN-generated samples.

Figure 11: t-SNE mapping of data and sample points on LSUN (legend: LSUN data, GRAN, GAP(GRAN)), with the same colour coding as Figure 10.

The supporting figures, analogous to Figure 6, are presented for the LSUN dataset in Figure 12. There are in total four plots with different swapping frequencies.

Figure 12: Averaged GAP(GRAN) learning curves trained on the LSUN Church dataset, at swapping frequencies of (a) every 0.1 epoch, (b) every 0.3 epoch, (c) every half epoch, and (d) every epoch. As parallelization scales up, the gap between training and validation cost narrows.

Figure 13 presents an instance of an individual learning curve in the case where multiple GANs are trained under GAP. The difference from Figure 6 and Figure 12 is that GAP curves are represented by the learning curve of a single GAN within GAP rather than an average. Fortunately, the behaviour remains the same: the spread between training and validation cost decreases as parallelization scales up (i.e., with more models in a GAP).
Figure 13: Each GAP model represented by the learning curve of a single DCGAN within GAP(DCGAN) trained on CIFAR-10, at swapping frequencies of 0.1, 0.3, 0.5, and 1.0 per epoch. This demonstrates that the observed behaviour of reducing the spread between training and validation cost is not simply an effect of averaging.

We observed the distribution of class predictions on samples from each model in order to check how closely they match the training set distribution (which is uniform for MNIST). We trained a simple logistic regression on MNIST that resulted in a ~99% test accuracy rate. The histogram of the predicted classes is provided in Figure 14. We looked at the exponentiated expected KL divergence between the predicted distribution and the (uniform) prior distribution, also known as the "Inception Score" (Salimans et al., 2016). The results are shown in Table 5.

Table 5: Inception Score of GAP.

Figure 14: The distribution of the predicted class labels of samples from (a) DCGAN, (b) GAP(DCGANx4), (c) GRAN, and (d) GAP(GRANx4), as made by a separately trained logistic regression.

Figure 15: CIFAR-10 samples generated by GAP[DCGANx4]. Best viewed in colour.

Figure 16: CIFAR-10 samples generated by GAP[GRANx4]. Best viewed in colour.

Figure 17: LSUN Church samples generated by GAP[DCGANx4] at 0.3 swapping frequency. Best viewed in colour.

Figure 18: LSUN Church samples generated by GAP[GRANx4] at 0.5 swapping frequency. Best viewed in colour.

Figure 19: CIFAR-10 samples trained by GAP(DCGANx2, GRANx2): (a) samples from one of the DCGANs trained using GAPC4; (b) samples from one of the GRANs trained using GAPC4.

We also tried fine-tuning individually-trained GANs using GAP, which we denote as GAPF4. GAPF4 consists of two trained DCGANs and two trained GRANs, which are then fine-tuned using GAP for five epochs. Samples from the fine-tuned models are shown in Figure 20.

Figure 20: CIFAR-10 samples after fine-tuning with GAP(DCGANx2, GRANx2): (a) samples from one of the fine-tuned DCGANs using GAPC4; (b) samples from one of the fine-tuned GRANs using GAPC4."}]
rJg_1L5gg

[{"section_index": "0", "section_name": "INCREMENTAL SEQUENCE LEARNING", "section_text": "Edwin D. de Jong

Department of Information and Computing Sciences
Utrecht University

https://edwin-de-jong.github.io/

Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. We study incremental learning in the context of sequence learning, using generative RNNs in the form of multi-layer recurrent Mixture Density Networks. While the potential of incremental or curriculum learning to enhance learning is known, indiscriminate application of the principle does not necessarily lead to improvement, and it is essential therefore to know which forms of incremental or curriculum learning have a positive effect. This research contributes to that aim by comparing three instantiations of incremental or curriculum learning. We introduce Incremental Sequence Learning, a simple incremental approach to sequence learning.

Incremental Sequence Learning starts out by using only the first few steps of each sequence as training data. Each time a performance criterion has been reached, the length of the parts of the sequences used for training is increased.

To evaluate Incremental Sequence Learning and comparison methods, we introduce and make available a novel sequence learning task and data set: predicting and classifying MNIST pen stroke sequences, where the familiar handwritten digit images have been transformed to pen stroke sequences representing the skeletons of the digits."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We find that Incremental Sequence Learning greatly speeds up sequence learning and reaches the best test performance level of regular sequence learning 20 times faster, reduces the test error by 74%, and in general performs more robustly; it displays lower variance and achieves sustained progress after all three comparison methods have stopped improving. The two other instantiations of curriculum learning do not result in any noticeable improvement. A trained sequence prediction model is also used in transfer learning to the task of sequence classification, where it is found that transfer learning realizes improved classification performance compared to methods that learn to classify from scratch.

Deep learning research over the past years has shown that by increasing the scope or difficulty of the learning problem over time, increasingly complex learning problems can be addressed. This principle has been described as Incremental learning by Elman (1991), and has a long history. Schlimmer and Granger (1986) described a pseudo-connectionist distributed concept learning approach involving incremental learning. Elman (1991) defined Incremental Learning as an approach where the training data is not presented all at once, but incrementally; see also Elman (1993). Giraud-Carrier (2000) defines Incremental Learning as follows: "A learning task is incremental if the training examples used to solve it become available over time, usually one at a time." Bengio et al. (2009) introduced the framework of Curriculum Learning. The central idea behind this approach is that a learning system is guided by presenting gradually more and/or more complex concepts. A formal definition is provided, specifying that the distribution over examples converges monotonically towards the target training distribution.
An extension of the notion of incremental learning is to also let the learning task vary over time. This approach, known as Transfer Learning or Inductive Transfer, was first described by Pratt (1993). Thrun (1996) reported improved generalization performance for lifelong learning and described representation learning, whereas Caruana (1997) considered a Multitask Learning setup where tasks are learned in parallel while using a shared representation. In coevolutionary algorithms, the coevolution of representations with solutions that employ them, see e.g. Moriarty (1997); de Jong and Oates (2002), provides another approach to representation learning. Representation learning can be seen as a special form of transfer learning, where one goal is to learn adequate representations and the other goal, addressed in parallel or sequentially, is to use these representations to address the learning problem.

Several of the recent successes of deep learning can be attributed to representation learning and incremental learning. Bengio et al. (2013) provide a review and insightful discussion of representation learning. Parisotto et al. (2015) report experiments with transfer learning across Atari 2600 arcade games where up to 5 million frames of training time in each game are saved. More recently, successful transfer of robot learning from the virtual to the real world was achieved using transfer learning; see Rusu et al. (2016). And at the annual ImageNet Large-Scale Visual Recognition Challenge (ILSVRC), the depth of networks has steadily increased over the years, so far leading up to a network of 152 layers for the winning entry in the ILSVRC 2015 classification task; see He et al. (2015).

We study incremental learning in the context of sequence learning. The aim in sequence learning is to predict, given a step of the sequence, what the next step will be. By iteratively feeding the predicted output back into the network as the next input, the network can be used to produce complete sequences of variable length. For a discussion of variants of sequence learning problems, see Sun and Giles (2001); a more recent treatment covering recurrent neural networks as used here is provided by Lipton (2015).

An interesting challenge in sequence learning is that for most sequence learning problems of interest, the next step in a sequence does not follow unambiguously from the previous step. If this were the case, i.e. if the underlying process generating the sequences satisfied the Markov property, the learning problem would be reduced to learning a mapping from each step to the next. Instead, steps in the sequence may depend on some or all of the preceding steps in the sequence. Therefore, a main challenge faced by a sequence learning model is to capture relevant information from the part of the sequence seen so far. This ability to capture relevant information about future sequences it may receive must be developed during training; the network must learn the ability to build up internal representations which encode relevant aspects of the sequence that is received."}, {"section_index": "2", "section_name": "1.3 INCREMENTAL SEQUENCE LEARNING", "section_text": "The dependency on the partial sequence received so far provides a special opportunity for incremental learning that is specific to sequence learning. Whereas the examples in a supervised learning problem
bear no known relation to each other, the steps in a sequence have a very specific relation: later steps in the sequence can only be learned well once the network has learned to develop the appropriate internal state summarizing the part of the sequence seen so far. This observation leads to the idea that sequence learning may be expedited by learning to predict the first few steps in each sequence first and, once reasonable performance has been achieved and (hence) a suitable internal representation of the initial part of the sequences has been developed, gradually increasing the length of the partial sequences used for training.

A prefix of a sequence is a consecutive subsequence (a substring) of the sequence starting from the first element; e.g. the prefix S3 of a sequence S consists of the first 3 steps of S. We define Incremental Sequence Learning as an approach to sequence learning whereby learning starts out by using only a short prefix of each sequence for training, and where the length of the prefixes used for training is gradually increased, up to the point where the complete sequences are used. The structure of sequence learning problems suggests that adequate modeling of the preceding part of the sequence is a requirement for learning later parts of the sequence; Incremental Sequence Learning draws the consequence of this by learning to predict the earlier parts of the sequences first.

In presenting the framework of Curriculum Learning, Bengio et al. (2009) provide an example within the domain of sequence learning, more specifically concerning language modeling. There, the vocabulary used for training on word sequences is gradually increased, i.e. the subset of sequences used for training is gradually increased; this is analogous to one of the comparison methods used here. Another specialization of Curriculum Learning to the context of sequence learning, described by Bengio et al. (2015), addresses the discrepancy between training, where the true previous step is presented as input, and inference, where the previous output from the network is used as input: with scheduled sampling, the probability of using the network output as input is adapted to gradually increase over time. Zaremba and Sutskever (2014) apply curriculum learning in a sequence-to-sequence learning context where a neural network learns to predict the outcome of Python programs. The generation of programs forming the training data is parameterized by two factors that control the complexity of the programs: the number of digits of the numbers used in the programs and the degree of nesting. While a number of different instantiations of incremental or curriculum learning have been described in the context of sequence learning, no clear guidance is available on which forms are effective. The particular form explored here, learning to predict the earlier parts of sequences first, is straightforward, makes use of the particular structure of sequence learning problems, and is easy to implement; yet it has received very limited attention so far.
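To make the prefix-based scheme concrete, the following minimal sketch (our own illustration, not the paper's code) shows how a length-k prefix of each sequence yields the input/target pairs used for training; the representation of a sequence as a (T, 4) numpy array of (dx, dy, eos, eod) steps is an assumption.

```python
# A minimal sketch of training data built from prefixes: for a sequence
# s_1..s_T, the prefix S_k yields the input/target pairs
# (s_1, s_2), ..., (s_{k-1}, s_k).
def prefix_pairs(sequences, k):
    pairs = []
    for seq in sequences:
        prefix = seq[:k]                     # S_k: the first k steps
        for t in range(len(prefix) - 1):
            pairs.append((prefix[t], prefix[t + 1]))
    return pairs
```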
The classification of MNIST digit images, see LeCun and Cortes (2010), is one example of a task on which the success of deep learning has been demonstrated convincingly; a test error rate of 0.23% was obtained by Ciresan et al. (2012) using Multi-column Deep Neural Networks. To obtain a sequence learning data set for evaluating Incremental Sequence Learning, we created a variant of the familiar MNIST handwritten digit data set provided by LeCun and Cortes (2010) where each digit image is transformed into a sequence of pen strokes that could have generated the digit.

One motivation for representing digits as strokes is the notion that when humans try to discern digits or letters that are difficult to read, it appears natural to trace the line so as to reconstruct what path the author's pen may have taken. Indeed, Hinton and Nair (2005) note that the idea that patterns can be recognized by figuring out how they were generated was already introduced in the 1950's, and describe a generative model for handwritten digits that uses two pairs of opposing springs whose stiffnesses are controlled by a motor program.

Pen stroke sequences also form a natural and efficient representation for digits; handwriting constitutes a canonical manifestation of the manifold hypothesis, according to which "real-world data presented in high dimensional spaces are expected to concentrate in the vicinity of a manifold M of much lower dimensionality dM, embedded in high dimensional input space Rd"; see Bengio et al. (2013). Specifically: (i) the vast majority of the pixels are white, (ii) almost all digit images consist of a single connected set of pixels, and (iii) the shapes mostly consist of smooth curved lines. This suggests that collections of pen strokes form a natural representation for the purpose of recognizing digits.

The relevance of the manifold hypothesis can also be appreciated by considering the space of all 28x28 binary pixel images; when sampling uniformly from this space, one is likely to only encounter images resembling TV noise, and the chances of observing any of the 70000 MNIST digit images is astronomically small. By contrast, a randomly generated pen stroke sequence is not unlikely to resemble a part of a digit, such as a short straight or curved line segment. This increased alignment of the digit data with its representation in the form of pen stroke sequences implies that the amount of computation required to address the learning problem can potentially be vastly reduced.

Figure 1: The original image (top left), thresholded image, thinned image, and actual extracted pen stroke image.

The MNIST handwritten digit data set consists of 60000 training images and 10000 test images, each forming 28 x 28 bit map images of written numerical digits from 0 to 9. The digits are transformed into one or more pen strokes, each consisting of a sequence of pen offset pairs (dx, dy). To extract the pen stroke sequences, the following steps are performed:

1. Incremental thresholding. Starting from the original MNIST grayscale image, the following characteristics are measured: the number of nonzero pixels, and the number of connected components, for both the 4-connected and 8-connected variants. Starting from a thresholding level of zero, the thresholding level is increased stepwise until either (A) the number of 4-connected or 8-connected components changes, (B) the number of remaining pixels drops below 50% of the original number, or (C) the thresholding level reaches a preselected maximum level (250). When any of these conditions occurs, the previous level (i.e. the highest thresholding level for which none of these conditions occurred) is selected.

2. A common method for image thinning, described by Zhang and Suen (1984), is applied.
3. After the thresholding and thinning steps, the result is a skeleton of the original digit image that mostly consists of single-pixel-width lines.

4. Finding a pen stroke sequence that could have produced the digit skeleton can be viewed as a Traveling Salesman Problem where, starting from the origin, all points of the digit skeleton are visited. Each point is represented by the pen offset (dx, dy) from the previous to the current point. For any transition to a non-neighboring pixel (based on 8-connected distance), an extra step is inserted with (dx, dy) = (0, 0) and with eos = 1 (end-of-stroke), to indicate that the current stroke has ended and the pen is to be lifted off the paper. At the end of each sequence, a final step with values (0, 0, 1, 1) is appended. The fourth value represents eod, end-of-digit. This final tuple of the sequence marks that both the current stroke and the current sequence have ended, and forms a signal that the next input presented to the network will belong to another digit. (A code sketch of one possible such conversion is given at the end of this section.)

Figure 2: Example of a pen stroke image.

Table 1: Corresponding sequence. The origin is at the top left, and the positive vertical direction is downward. From the origin to the first point, the first offset is 6 steps to the right and 4 down: (6, 4). Then to the second point: 1 to the right and 1 up, (1, -1); etc.

dx | dy | eos | eod
6 | 4 | 0 | 0
1 | -1 | 0 | 0
1 | 0 | 0 | 0
1 | 0 | 0 | 0
1 | 1 | 0 | 0
0 | 1 | 0 | 0
-1 | 1 | 0 | 0
1 | 1 | 0 | 0
1 | 1 | 0 | 0
0 | 1 | 0 | 0
0 | 1 | 0 | 0
-1 | 1 | 0 | 0
-1 | 1 | 0 | 0
-1 | 0 | 0 | 0
-1 | 0 | 0 | 0
-1 | -1 | 0 | 0
0 | 0 | 1 | 1

It is important to note that the thinning operation discards pixels and therefore information; this implies that the sequence learning problem constructed here should be viewed as a new learning problem, i.e. performance on this new task cannot be directly compared to results on the original MNIST classification task. While for many images the thinned skeleton is an adequate representation that retains the original shape, in other cases relevant information is lost as part of the thinning process.

Figure 3: Distribution of sequence lengths. The average sequence length is approximately 40 steps.
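The conversion in step 4 above admits many orderings; the following is a minimal sketch under the assumption of a simple greedy nearest-pixel walk (the paper only frames the problem as a Traveling Salesman Problem, so this is an illustration rather than the exact procedure used). The representation of the skeleton as a set of (x, y) pixel coordinates is also our assumption.

```python
# A minimal sketch of step 4: greedily walk the skeleton pixels from the
# origin, emitting pen offsets, and insert a (0, 0, 1, 0) end-of-stroke step
# before every jump to a pixel that is not an 8-connected neighbour.
def skeleton_to_strokes(skeleton):
    remaining, seq = set(skeleton), []
    cur = (0, 0)                                  # pen starts at the origin
    while remaining:
        # 8-connected (Chebyshev) distance to the nearest unvisited pixel.
        nxt = min(remaining, key=lambda p: max(abs(p[0] - cur[0]),
                                               abs(p[1] - cur[1])))
        dist = max(abs(nxt[0] - cur[0]), abs(nxt[1] - cur[1]))
        if seq and dist > 1:                      # pen lift between strokes
            seq.append((0, 0, 1, 0))
        seq.append((nxt[0] - cur[0], nxt[1] - cur[1], 0, 0))
        remaining.remove(nxt)
        cur = nxt
    seq.append((0, 0, 1, 1))                      # end of stroke and of digit
    return seq
```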
We adopt the approach to generative neural networks described by Graves (2013), which makes use of mixture density networks, introduced by Bishop (1994). One sequence corresponds to one complete image of a digit skeleton, represented as a sequence of (dx, dy, eos, eod) tuples, and may contain one or more strokes; see the previous section.

The network has four input units, corresponding to these four input variables. To produce the input for the network, the (dx, dy) pairs are scaled to yield two real-valued input variables dx and dy. The variables indicating the end-of-stroke (EOS) and end-of-digit (EOD) are binary inputs. Two hidden LSTM layers, see Hochreiter and Schmidhuber (1997), of 200 units each are used.

Figure 4: Network architecture; see text. The network has 4 inputs, two fully connected hidden layers of 200 units each, and 6*17+2+10 = 114 outputs (the mixture parameters, the eos and eod signals, and the 10 class outputs).

The input units receive one step of a sequence at a time, starting with the first step. The goal for the output units is to predict the immediate next step in the sequence, but rather than trying to directly predict dx and dy, the output units represent a mixture of bivariate Gaussians. The output layer consists of the end-of-stroke signal (EOS), and a set of means $\mu^j$, standard deviations $\sigma^j$, correlations $\rho^j$, and mixture weights $\pi^j$ for each of the M mixture components, where the number of mixture components M = 17 was found empirically to yield good results and is used in the experiments presented here. Additionally, a binary indicator signaling the end of digit (EOD) is used, to mark the end of each sequence. In addition to these output elements for predicting the pen stroke sequences, 10 binary class variable outputs are added, representing the 10 digit classes. This facilitates switching the task from sequence prediction to sequence classification, as will be discussed later; the output of these units is ignored in the sequence prediction experiments. The number of output units depends on the number of mixture components used, and equals 6M + 2 + 10 = 114.

For regularization, we found in early experiments that using the maximum weight as a regularization term produced better results than using the more common L-2 regularization. This approach can be viewed as L-$\infty$-norm regularization, and has been used previously in the context of regularization; see e.g. Schmidt et al. (2008).

The definition of the sequence prediction loss $L_P$ follows Graves (2013), with the difference that terms for the eod and for the L-$\infty$ loss are included:

$$L_P(x) = \sum_{t=1}^{T} \left( -\log \sum_{j} \pi_t^j \, \mathcal{N}\!\left(x_{t+1} \mid \mu_t^j, \sigma_t^j, \rho_t^j\right) - \begin{cases} \log eos_t & \text{if } (x_{t+1})_3 = 1 \\ \log(1 - eos_t) & \text{otherwise} \end{cases} - \begin{cases} \log eod_t & \text{if } (x_{t+1})_4 = 1 \\ \log(1 - eod_t) & \text{otherwise} \end{cases} \right) + \lambda_w \lVert w \rVert_\infty$$
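For concreteness, the following numpy sketch shows how the 114 network outputs could be split into the mixture parameters described above and how the per-step prediction loss is computed. The squashing choices (softmax, exp, tanh, sigmoid) follow Graves (2013); the slicing layout and names are our assumptions, not the paper's code.

```python
# A minimal numpy sketch of splitting the 6M + 2 + 10 = 114 outputs and
# computing the per-step sequence prediction loss for one target step.
import numpy as np

def mdn_step_loss(out, target, M=17):
    pi = np.exp(out[:M]) / np.exp(out[:M]).sum()           # mixture weights
    mu_x, mu_y = out[M:2*M], out[2*M:3*M]                   # component means
    sx, sy = np.exp(out[3*M:4*M]), np.exp(out[4*M:5*M])     # std devs > 0
    rho = np.tanh(out[5*M:6*M])                             # correlations
    eos = 1.0 / (1.0 + np.exp(-out[6*M]))                   # end-of-stroke
    eod = 1.0 / (1.0 + np.exp(-out[6*M + 1]))               # end-of-digit
    # The final 10 outputs (digit classes) are ignored for sequence prediction.
    dx, dy, t_eos, t_eod = target
    zx, zy = (dx - mu_x) / sx, (dy - mu_y) / sy
    z = zx**2 + zy**2 - 2.0 * rho * zx * zy                 # bivariate normal
    dens = np.exp(-z / (2.0 * (1.0 - rho**2))) / (
        2.0 * np.pi * sx * sy * np.sqrt(1.0 - rho**2))
    nll = -np.log((pi * dens).sum() + 1e-8)                 # mixture density
    nll -= np.log(eos if t_eos == 1 else 1.0 - eos)
    nll -= np.log(eod if t_eod == 1 else 1.0 - eod)
    return nll
```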
Below we describe Incremental Sequence Learning and three comparison methods, where two of the comparison methods are other instantiations of curriculum learning, and the third comparison is regular sequence learning without a curriculum learning aspect.

All three curriculum learning methods employ a threshold criterion based on the training RMSE: once a specified level of the RMSE has been reached, the set of training examples (determined by the number of sequence steps used, the number of sequences used, or the number of digits) is increased. We note that many possible variants of this simple adaptive scheme are possible, some of which may provide improvements of the results.

The configuration of the baseline method, regular sequence learning, is as follows. The number of mixture components M = 17, and two hidden layers of size 200 are used. A batch size of 50 sequences per batch is used in these first experiments. The learning rate is $\alpha$ = 0.0025, with a decay rate of 0.99995 per epoch. The order of training sequences (not of steps within the sequences) is randomized. The weight of the regularization component $\lambda$ = 0.25. In these first experiments, a subset of 10 000 training sequences and 5 000 test sequences is used. The error measure in these figures is the RMSE of the pen offsets (unscaled) predicted by the network given the previous pen movement.

The RMSE is calculated based on the difference between the predicted and actual (dx, dy) pairs, scaled back to their original range of pixel units, so as to obtain an interpretable error; the eos and eod components of the error, which do form part of the loss, are not used in this error measure. For the method where the sequence length is varied, the number of individual points (input-target pairs) that must be processed per sequence varies over the course of a run. The number of sequences processed (or collections thereof such as batches or epochs) is therefore no longer an adequate measure of computational expense; performance is therefore reported as a function of the number of points processed.

Details per method:

Regular sequence learning. The baseline method is regular sequence learning; here, all training data is used from the outset.

Incremental Sequence Learning: increasing sequence length. Predicting the second step of a sequence given the first step is a straightforward mapping problem that can be handled using regular supervised learning methods. The prediction of later steps in the sequence can potentially depend on all preceding steps, and in some cases may only be learned once an effective internal representation has been developed that summarizes relevant information present in the preceding part of the sequence. For predicting the 17th step, for example, the available input consists of the previous 16 steps, and the network must learn to construct a compact representation of the preceding steps that have been seen. More specifically, it must be able to distinguish between subspaces of the sequence space that correspond to different distributions for the next step in the sequence. The number of possible contexts grows exponentially with the position in the sequence, and the task of summarizing the preceding sequence therefore potentially becomes more difficult as a function of the position within the sequence. The problem of learning to predict steps later on in the sequence is therefore potentially much harder than learning to predict the earlier steps. In Incremental Sequence Learning, therefore, the length of the sequence prefixes presented to the network is increased as learning progresses.

Increasing training set size. Bengio et al. (2009) describe an application of curriculum learning to sequence learning, where the task is to predict the best word which can follow a given context of words in a correct English sentence. The curriculum strategy used there is to grow the vocabulary size. Transferring this to the context of pen stroke sequence generation, the most straightforward translation is to use subsets of the training data that grow in size, where the order of the examples that are added to the training set is random.

Increasing number of classes. The network is first presented with sequences from only one digit class, e.g. all zeros. The number of classes is then increased sequentially.
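The Incremental Sequence Learning schedule just described can be summarized in a few lines. The sketch below is our own illustration; `train_batches` and `train_rmse` are assumed stand-ins for the model's update and evaluation routines.

```python
# A minimal sketch of the Incremental Sequence Learning schedule: train on
# length-k prefixes and double k whenever the training RMSE drops below the
# threshold, until full sequences are used.
def incremental_sequence_learning(sequences, train_batches, train_rmse,
                                  n_epochs, threshold=4.0):
    max_len = max(len(s) for s in sequences)
    k = 2                                           # initial prefix length
    for _ in range(n_epochs):
        train_batches([s[:k] for s in sequences])   # train on prefixes S_k
        if train_rmse() < threshold and k < max_len:
            k = min(2 * k, max_len)                 # double the prefix length
    return k
```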
Figure 5 shows a comparison of the results of the four methods. The baseline method (in red) does not use curriculum learning, and is presented with the entire training set from the start. Incremental Sequence Learning (in green) performs markedly better than all comparison methods. It reaches the best test performance of the baseline methods twenty times faster; see the horizontal dotted black line. Moreover, Incremental Sequence Learning greatly improves generalization; on this subset of the data, the average test performance over 10 runs reaches 1.5 for Incremental Sequence Learning vs. 3.9 for regular sequence learning, representing a reduction of the error by 74%.

Figure 5: Comparison of the test error of the four methods (Experiment 1: RNN, sequence-based batch size), averaged over ten runs. The dotted lines indicate, at each point in time, which fraction of the training data has been made available at that point for the method of the corresponding color.

The parameter settings of the curriculum methods in Experiment 1 are as follows:

Incremental Sequence Learning. The initial sequence length is 2, meaning that the first two points of each sequence are used, i.e. after feeding the first point as input, the second point is to be predicted. Once the training RMSE drops below the threshold value of 4, the length is doubled, up to the point where it reaches the maximum sequence length.

Increasing training set size. The initial training set size is 10. Each time the RMSE threshold of 4 is reached, this amount is doubled, up to the point where the complete set of training sequences is used.

Increasing number of digit classes. The initial number of classes is 1, meaning that only sequences representing the first digit (zero) are used. Each time the RMSE threshold of 4 is reached, this amount is doubled, up to the point where all 10 digit classes are used.

Table 2: Best value for the average over 10 runs of the test set error obtained by each of the methods in Experiment 1. Incremental Sequence Learning achieves a reduction of 74% compared to regular sequence learning.

The two other curriculum methods do not provide any speedup or advantage compared to the baseline method, and in fact result in a higher test error; indiscriminate application of the curriculum learning principle apparently does not guarantee improved results, and it is important therefore to discover which forms of curriculum learning can confer an advantage.

We furthermore note that the variance of the test error is substantially lower than for each of the other methods, as seen in the performance graphs; and where the three comparison methods reach their best test error just before 4 x 10^6 processed sequence steps and then begin to deteriorate, the test error for Incremental Sequence Learning continues to steadily decrease over the course of the run.

Two possible hypotheses suggest themselves to explain these improvements:

H1: The number of sequences per batch is fixed (50), but the number of sequence steps or points varies, and is initially much smaller (2) for Incremental Sequence Learning. Thus, when measured in terms of the number of points that are being processed, the batch size for Incremental Sequence Learning is initially much smaller than for the remaining methods, and it increases adaptively over time. Hypothesis H1 therefore is that (A) the smaller batch size improves performance, see Keskar et al. (2016) for earlier findings in this direction, and/or (B) the adaptive batch size aspect has a positive effect on performance.

H2: Effectively learning later parts of the sequence requires an adequate internal representation of the preceding part of the sequence, which must be learned first; this formed the motivation for the Incremental Sequence Learning method.

To test the first hypothesis, H1, we design a second experiment where the batch size is no longer defined in terms of the number of sequences, but in terms of the number of points or sequence steps, where the number of points is chosen such that the expected total number of points for the baseline method remains the same. Thus, whereas a batch for regular sequence learning contains 50 sequences of length 40 on average, yielding 2000 points, Incremental Sequence Learning will start out with batches containing 1000 sequences of 2 points each, yielding the same total number of points.
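A point-based batch, as used in Experiment 2, can be assembled as in the following sketch (our illustration, not the paper's code): batches are filled until they contain a fixed number of individual points, so that a batch may hold 50 full sequences or 1000 two-step prefixes alike.

```python
# A minimal sketch of point-based batches: fill each batch until it contains
# `points_per_batch` individual sequence steps.
def point_batches(prefixes, points_per_batch=2000):
    batch, n_points = [], 0
    for seq in prefixes:
        batch.append(seq)
        n_points += len(seq)            # count individual points (steps)
        if n_points >= points_per_batch:
            yield batch
            batch, n_points = [], 0
    if batch:
        yield batch                     # emit the final, possibly smaller batch
```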
Figure 6: Comparison of the test error of the four methods (Experiment 2: RNN, point-based batch size), averaged over ten runs.

Figure 6 shows the results. This change reduces the speedup during the earlier part of the runs, and thus partially explains the improvements observed with Incremental Sequence Learning. However, part of the speedup is still present, and moreover the three other observed improvements remain:

Incremental Sequence Learning still features strongly improved generalization performance.
Incremental Sequence Learning still has a much lower variance of the test error.
Incremental Sequence Learning still continues improving at the point where the test performance of all other methods starts deteriorating.

In summary, the adaptive and initially smaller batch size of Incremental Sequence Learning explains part of the observed improvements, but not all. We therefore test to what extent hypothesis H2 plays a role. To see whether the ability to first learn a suitable representation based on the earlier parts of the sequences plays a role, we compare with a situation where this effect is ruled out. A straightforward way to achieve this is to use Feed-Forward Neural Networks (FFNNs); whereas Recurrent Neural Networks (RNNs) are able to learn such a representation by learning to build up relevant internal state, FFNNs lack this ability. Therefore, if any advantage of Incremental Sequence Learning is seen when using FFNNs, it cannot be due to hypothesis H2. Conversely, if using FFNNs removes the advantage, the advantage must have been due to the difference between FFNNs and RNNs, which exactly corresponds to the ability to build up an informative internal representation, i.e. H2. Since we want to explain the remaining part of the effect, we also use a batch size based on the number of points, as in Experiment 2.

Figure 7 shows the results. As the figure shows, when using FFNNs, the advantage of Incremental Sequence Learning is entirely lost. This provides a clear demonstration that both of the hypotheses H1 and H2 play a role. Together, the two hypotheses explain the total effect of the difference, suggesting that the proposed hypotheses are also the only explanatory factors that play a role.
Figure 7: Comparison of the test error of the four methods (Experiment 3: FFNN, point-based batch size), averaged over ten runs.

It is interesting to compare the performance of the RNN and FFNN variants, by comparing the results of Experiments 2 and 3. From this comparison, it is seen that for Incremental Sequence Learning, the RNN variant achieves improved performance compared to the FFNN variant, as would be expected, since a FFNN cannot make use of any knowledge of the preceding part of the sequence, and is thus limited to learning a general mapping between two subsequent pen offset pairs (dx_k, dy_k) and (dx_{k+1}, dy_{k+1}). However, it is the only method of the four to do so; for all three other methods, around the point where test performance for the RNN variants starts to deteriorate (after around 4 x 10^6 processed sequence steps), FFNN performance continues to improve and surpasses that of the RNN variants. This suggests that Incremental Sequence Learning is the only method that is able to utilize information about the preceding part of the sequence, and thereby surpass FFNN performance. In terms of absolute performance, a strong further improvement can be obtained by using the entire training set, as will be seen in the next section. These results suggest that learning the earlier parts of the sequence first can be instrumental in sequence learning.

To further analyze why variation of the sequence length has a particularly strong effect on sequence learning, we evaluate how the relative difficulty of learning a sequence step relates to the position within the sequence. To do so, we measure the average loss contribution of the points or steps within a sequence as a function of their position within the sequence, as obtained with a learning method that learns entire sequences (no incremental learning), averaged over the first hundred epochs of training. Figure 8 shows the results.

Figure 8: The figure shows the average loss contribution of the points or steps within a sequence as a function of their position within the sequence (see text). The first steps are fundamentally unpredictable. Once some context has been received, the loss for the next steps steeply drops. Later on in the sequence, however, the loss increases strongly. This effect may be explained by the fact that the number of possible preceding contexts increases exponentially, thus posing stronger requirements on the learning system for steps later on in the sequence, and/or by the point that later parts of the sequences can only be learned adequately once earlier parts have been learned first, as later steps can depend on any of the earlier steps.

The first steps are fundamentally unpredictable as the network cannot know which example it will receive next; accordingly, at the start of the sequence, the error is high, as the method cannot know in advance what the shape or digit class of the new sequence will be. Once the first steps of the sequence have been received and the context increasingly narrows down the possibilities, the loss for the prediction of the next steps steeply drops. Subsequently however, as the position in the sequence advances, the loss increases strongly, and exceeds the initial uncertainty of the first steps. This effect may be explained by the fact that the number of possible preceding contexts increases exponentially, thus posing stronger requirements on the learning system for steps later on in the sequence.
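The analysis behind Figure 8 amounts to averaging the per-step loss by position. A minimal sketch follows (our illustration); `step_losses` is an assumed helper returning one loss value per step of a sequence.

```python
# A minimal sketch of the per-position loss analysis of Figure 8: accumulate
# the loss of each step by its position in the sequence and average over all
# sequences (and, in the text, over the first hundred epochs of training).
from collections import defaultdict

def loss_by_position(sequences, step_losses):
    totals, counts = defaultdict(float), defaultdict(int)
    for seq in sequences:
        for t, loss in enumerate(step_losses(seq)):
            totals[t] += loss
            counts[t] += 1
    return {t: totals[t] / counts[t] for t in sorted(totals)}
```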
The results reported so far were based on a subset of 10000 training sequences and 5000 test sequences, in order to complete a sufficient number of runs for each of the experiments within a reasonable amount of time. Given the positive results obtained with Incremental Sequence Learning, we now train this method on the full MNIST Pen Stroke Sequence Data Set, consisting of 60000 training sequences and 10000 test sequences (Experiment 4). In these experiments, a batch size of 500 sequences instead of 50 is used.

Figure 9 shows the results. Compared to the performance of the above experiments, a strong improvement is obtained by training on this larger set of examples; whereas the best test error in the results above was slightly above 1.5, the test performance for this experiment drops below one: a test error of 0.972 on the full test data set is obtained. A striking finding is that while initially the test error is much larger than the training error, the test error continues to improve for a long time and approaches the training error very closely; in other words, no overtraining is observed, even for relatively long runs where the training performance appears to have nearly converged.

Figure 9: Performance on the full MNIST Pen Stroke Sequence Data Set (Experiment 4: RNN, sequence-based batch size), zoomed in to the first part of the run (top), and results for the full run of the same experiment (bottom)."}, {"section_index": "3", "section_name": "6.4 TRANSFER LEARNING", "section_text": "The first task considered here was to perform sequence learning: predicting step t+1 of a sequence given step t. To adequately perform this task, the network must learn to detect which digit it is being fed; the initial part of a sequence representing a 2 or a 3, for example, is very similar, but as evidence grows that the current sequence represents a 3, that information is vital in predicting how the stroke will continue.

Given that the network is expected to have built up some representation of what digit it is reading, an interesting test is to see whether it is able to switch to the task of sequence classification. The input presentation remains the same: at every time step, the recurrent neural network is fed one step of the sequence of pen movements representing the strokes of a digit. However, we now also read the output of the 10 binary class variable outputs. The target for these is a one-hot representation of the digit, i.e. the target value for the output corresponding to the digit is one, and all nine other target values are zero. To obtain the output, softmax is used, and the sequence classification loss $L_C$ for the classification outputs is the cross entropy, weighted by a factor $\gamma$ = 10:

$$L_C = -\frac{\gamma}{N} \sum_{n=1}^{N} \left[ y_n \log \hat{y}_n + (1 - y_n) \log(1 - \hat{y}_n) \right]$$

In the following experiments, the loss consists of the sequence classification loss $L_C$, to which optionally the earlier sequence prediction loss $L_P$ is added, regulated by a binary parameter $\beta$:

$$L = L_C + \beta L_P$$

The network is asked for a prediction of the digit class after each step it receives. Clearly, accurate classification is impossible during the first part of a sequence; before the first point is received, the sequence could represent any of the 10 digits with equal probability. As the sequence is received step by step, however, the network receives more information. The prediction produced after receiving the one-but-last step of the sequence, i.e. at the point where the network was previously asked to predict the last step, is used as its final answer for predicting the digit class.
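The combined objective above can be written directly. In the sketch below (our illustration, not the paper's code), `y_true` is the one-hot digit target, `y_pred` holds the ten class outputs, and `prediction_loss` is the value of L_P from the earlier loss definition.

```python
# A minimal sketch of the combined loss L = L_C + beta * L_P, with the
# classification cross-entropy weighted by gamma = 10 as in the text.
import numpy as np

def classification_loss(y_true, y_pred, gamma=10.0, eps=1e-8):
    ce = y_true * np.log(y_pred + eps) + (1 - y_true) * np.log(1 - y_pred + eps)
    return -gamma * ce.mean()           # mean over the N terms of the sum

def total_loss(y_true, y_pred, prediction_loss, beta=1):
    return classification_loss(y_true, y_pred) + beta * prediction_loss
```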
[Figure: Experiment 4, RNN on the full MNIST Pen Stroke Sequence Data Set (sequence-based batch size); test and training error (RMSE) vs. number of sequence steps processed, shown both zoomed to the first 10^7 steps and for the full run up to 3 x 10^8 steps.]

Figure 9: Performance on the full MNIST Pen Stroke Sequence Data Set, zoomed to the first part of the run, and the same experiment, results for the full run.

Four variants are compared:

Transfer learning: sequence classification and sequence prediction. Starting from a trained sequence prediction model as obtained in Experiment 4, the earlier loss function is augmented with the sequence classification loss: L = L_C + L_P

Transfer learning: sequence classification only. Starting from a trained sequence prediction model, the loss function is switched such that it only reflects the classification performance, and no longer tracks the sequence prediction performance: L = L_C

Learning from scratch, sequence classification and sequence prediction. In this variant, learning starts from scratch, and both classification loss and prediction loss are used, as in the first experiment: L = L_C + L_P

Learning from scratch, sequence classification only: L = L_C

Figure 10 shows the results; indeed the network is able to build further on its ability to predict pen stroke sequences, and learns the sequence classification task faster and more accurately than an identical network that learns the sequence classification task from scratch; in this first and straightforward transfer learning experiment based on the MNIST stroke sequence data set, a classification accuracy of 96.0% is reached.1 We note that performance on the MNIST sequence data cannot be compared to results obtained with the original MNIST data set, as the information in the input data is vastly reduced. This result sets a first baseline for the MNIST stroke sequence data set; we expect there is ample room for improvement. Simultaneously learning sequence prediction and sequence classification does not appear to provide an advantage, neither for transfer learning nor for learning from scratch.

1 This performance was reached after training for 7 x 10^7 sequence steps, i.e. roughly twice as long as the run shown in the chart.

[Figure: Experiment 5, transfer learning from sequence prediction to sequence classification; classification accuracy vs. number of sequence steps processed (0 to 8 x 10^7) for transfer learning (classification only), transfer learning (classification and prediction), learning from scratch (classification only), and learning from scratch (classification and prediction).]

Figure 10: Using the sequence prediction model as a starting point for sequence classification: starting from a trained sequence prediction network, the task is switched to predicting the class of the digit (red and black lines).
A comparison with learning a digit classification model from scratch (blue and green lines) shows that the internal state built up to predict sequence steps is helpful in predicting the class of the digit represented by the sequence."}, {"section_index": "4", "section_name": "7.1 DEVELOPMENT DURING TRAINING", "section_text": "[Figure: frames from the training movie, showing the network's output after 80, 140, 530, 570 and 650 batches.]

Figure 11: Movie showing what the network has learned over time. The movie shows the output for three sequences of the test data at different stages during training. To view, click the image or visit this link: https://edwin-de-jong.github.io/blog/isl/rnn-movies/generative-rnn-training-movie.gif

During training, the network receives each sequence step by step, and after each step, it outputs its expectation of the offset of the next point. In these figures and movies, we visualize the predictions of the network for a given sequence at different stages of the training process. All results have been obtained from a single run of Incremental Sequence Learning.

[Figure: example sequence with steps 1-3, x = (1, 2, 3) and y = (2, 4, 6); each step's target is the next input.]

Figure 12: Training: the target of a training step is used as the next input.

After training, the trained network can be used to generate output independently. The guidance that is present during training, in the form of receiving each next step of the sequence following a prediction, is not available here. Instead, the output produced by the network is fed back into the network as its next input; see Figures 12 and 13. Figure 14 shows example results.

[Figure: example sequence with steps 1-3, x = (1, 2.2, 3.1) and y = (2, 3.8, 5.9); each step's output is fed back as the next input.]

Figure 13: Generation: the output of the network is used as the next input.

[Figure: generated sequences; panels show output resembling a 2, output resembling a 3, and output resembling a 4.]

Figure 14: Unguided output of the network: after each step, the network's output is fed back as the next input. Clearly, the network has learned the ability to independently produce long sequences representing different digits that occurred in the training data.
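The generation loop of Figure 13 can be sketched in a few lines. This is a minimal illustration; step_fn is a hypothetical single-step wrapper around the trained RNN, not an interface from the paper's code:

import numpy as np

def generate_strokes(step_fn, initial_state, first_input, num_steps=60):
    # step_fn(x, state) -> (y, new_state); x and y are (dx, dy) pen offsets.
    # During training the true next step is supplied instead (Figure 12);
    # here the network's own prediction becomes the next input.
    x, state = first_input, initial_state
    strokes = []
    for _ in range(num_steps):
        y, state = step_fn(x, state)
        strokes.append(y)
        x = y  # the output is used as the next input
    return np.array(strokes)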
"}, {"section_index": "5", "section_name": "7.3 SEQUENCE CLASSIFICATION", "section_text": "The third analysis of the behavior of the trained network is to view what happens during sequence classification. At each step of the sequence, we monitor the ten class outputs and visualize their output. As more steps of the sequence are being received, the network receives more information and adjusts its expectation of what digit class the sequence represents.

[Figure: an MNIST stroke sequence test image and the corresponding classification output over the steps of the sequence.]

Classification output for a sequence representing a 0. Initially, as the downward part of the curved stroke is being received, the network believes the sequence represents a 4. After passing the lowest point of the figure, it assigns higher likelihood to a 6. Only at the very end, just in time before the sequence ends, the prediction of the network switches for the last time, and a high probability is assigned to the correct class.

Classification output for a sequence representing a 9. While receiving the sequence, the dominant prediction of the network is that the sequence represents a five; the open loop of the 9 and the straight top line may contribute to this. When the last points are received, the network considers a 9 to be more likely, but some ambiguity remains.

Classification output for a sequence representing a 3. Initially, the network estimates the sequence to represent a 7. Next, it expects a 2 is more likely. After 20 points have been received, it concludes (correctly) that the sequence represents a 3.

There are many possible ways to apply the principles of incremental or curriculum learning to sequence learning, but so far a general understanding of which forms of curriculum sequence learning have a positive effect is missing. We have investigated a particular approach to sequence learning where the training data is initially limited to the first few steps of each sequence. Gradually, as the network learns to predict the early parts of the sequences, the length of the part of the sequences used for training is increased. We name this approach Incremental Sequence Learning, and find that it strongly improves sequence learning performance. Two other forms of curriculum sequence learning used for comparison did not display improvements compared to regular sequence learning. The origins of this performance improvement are analyzed in comparison experiments, as detailed below.
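As a concrete illustration of the curriculum just summarized, here is a minimal sketch of prefix-limited batching (names and the fixed growth schedule are illustrative assumptions; the paper grows the prefix as the early steps are learned):

import numpy as np

def incremental_batches(sequences, batch_size, initial_len=2, grow_every=100):
    # Training initially uses only the first initial_len steps of each
    # sequence; the usable prefix length grows as training proceeds, here
    # by one step every grow_every batches.
    rng = np.random.default_rng(0)
    max_len, batch_idx = initial_len, 0
    while True:
        batch = rng.choice(len(sequences), size=batch_size, replace=False)
        yield [sequences[i][:max_len] for i in batch]  # truncated prefixes
        batch_idx += 1
        if batch_idx % grow_every == 0:
            max_len += 1  # expose one more step of every sequence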
A first observation was that with Incremental Sequence Learning, the time required to attain the best test performance level of regular sequence learning was much lower; on average, the method reached this level twenty times faster, thus achieving a significant speedup and reduction of the computational cost of sequence learning. More importantly, Incremental Sequence Learning was found to reduce the test error of regular sequence learning by 74%.

To analyze the cause of the observed speedup and performance improvements, we first increased the number of sequences per batch for Incremental Sequence Learning, so that all methods use the same number of sequence steps per batch. This reduced the speedup, but the improvement of the generalization performance was maintained. We then replaced the RNN layers with feedforward network layers, so that the networks can no longer maintain information about the earlier part of the sequences. This completely removed the remaining advantage. This provides clear evidence that the improvement in generalization performance is due to the specific ability of an RNN to build up internal representations of the sequences it receives, and that the ability to develop these representations is aided by training on the early parts of sequences first.

Next, we trained Incremental Sequence Learning on the full MNIST stroke sequence data set, and found that the use of this larger training set further improves sequence prediction performance. The trained model was then used as a starting point for transfer learning, where the task was switched from sequence prediction to sequence classification.

We conclude that Incremental Sequence Learning provides a simple and easily applicable approach to sequence learning that was found to produce large improvements in both computation time and generalization performance. The dependency of later steps in a sequence on the preceding steps is characteristic of virtually all sequence learning problems. We therefore expect that this approach can yield improvements for sequence learning applications in general, and recommend its usage, given that exclusively positive results were obtained with the approach so far.

The Tensorflow implementation that was used to perform these experiments is available here: https

The MNIST stroke sequence data set is available for download here: https://github.com/edwin-de-jong/mnist-digits-stroke-sequence-data/wiki/MNIST-digits-stroke-sequence-data

The code for transforming the MNIST digit data set to a pen stroke sequence data set has also been made available: https://github.com/edwin-de-jong/mnist-digits-as-stroke-sequences/wiki/MNIST-digits-as-stroke-sequences (code)

The author would like to thank Max Welling, Dick de Ridder and Michiel de Jong for valuable comments and suggestions on earlier versions."}]
SkC_7v5gx [{"section_index": "0", "section_name": "THE POWER OF SPARSITY IN CONVOLUTIONAL NEURAL NETWORKS", "section_text": "Soravit Changpinyo*

Department of Computer Science, University of Southern California, Los Angeles, CA 90020, USA

Deep convolutional networks are well-known for their high computational and memory demands. Given limited resources, how does one design a network that balances its size, training time, and prediction accuracy? A surprisingly effective approach to trade accuracy for size and speed is to simply reduce the number of channels in each convolutional layer by a fixed fraction and retrain the network. In many cases this leads to significantly smaller networks with only minimal changes to accuracy. In this paper, we take a step further by empirically examining a strategy for deactivating connections between filters in convolutional layers in a way that allows us to harvest savings both in run-time and memory for many network architectures. More specifically, we generalize 2D convolution to use a channel-wise sparse connection structure and show that this leads to significantly better results than the baseline approach for large networks including VGG and Inception V3."}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "Deep neural networks combined with large-scale labeled data have become a standard recipe for achieving state-of-the-art performance on supervised learning tasks in recent years. Despite their success, the capability of deep neural networks to model highly nonlinear functions comes with high computational and memory demands, both during model training and inference. In particular, the number of parameters of neural network models is often designed to be huge to account for the scale, diversity, and complexity of data that they learn from. While advances in hardware have somewhat alleviated the issue, network size, speed, and power consumption are all limiting factors when it comes to production deployment on mobile and embedded devices. On the other hand, it is well known that there is significant redundancy among the weights of neural networks. For example, Denil et al. (2013) show that it is possible to learn less than 5% of the network parameters and predict the rest without losing predictive accuracy. This evidence suggests that neural networks are often over-parameterized.

These motivate the research on neural network compression. However, several immediate questions arise: Are these parameters easy to identify? Could we just make the network 5% of its size and retrain? Or are more advanced methods required? There is an extensive literature in the last few years that explores the question of network compression using advanced techniques, including network pruning, loss-based compression, quantization, and matrix decomposition. We overview many of these directions in the next section. However, there is surprisingly little research on whether this over-parameterization can simply be re-captured by more efficient architectures that could be obtained from original architectures via simple transformations.

Our approach is inspired by a very simple yet successful method called depth multiplier (Howard, 2017). In this method the depth (the number of channels) of each convolutional layer in a given network is simply reduced by a fixed fraction and the network is retrained. We generalize this approach by removing the constraint that every input filter (or channel) must be fully connected to every output filter.
Instead, we use a sparse connection matrix, where each output convolution channel is connected only to a small random fraction of the input channels. Note that, for convolutional networks, this still allows for efficient computation since the one-channel spatial convolution across the entire plane remains unchanged.

* The work was done while the author was doing an internship at Google Research.

{sandler,azhmogin}@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We empirically demonstrate the effectiveness of our approach on four networks (MNIST, CIFAR Net, Inception-V3 and VGG-16) of different sizes. Our results suggest that our approach outperforms dense convolutions with depth multiplier at high compression rates.

For Inception V3 (Szegedy et al., 2016), we show that we can train a network with only about 300K of convolutional parameters and about 100M multiply-adds that achieves above 52% accuracy after it is fully trained. The corresponding depth-multiplier network has only about 41% accuracy. Another network that we consider is VGG-16n, a slightly modified version of VGG-16 (Simonyan & Zisserman, 2015), with 7x fewer parameters and similar accuracy.2 We found VGG-16n to start training much faster than the original VGG-16, which was trained incrementally in the original literature. We explore the impact of sparsification and the number of parameters on the quality of the network by building networks up to 30x smaller than VGG-16n (200x smaller than the original VGG-16).

In terms of model flexibility, sparse connections allow for an incremental training approach, where the connection structure between layers can be densified as training progresses. More importantly, the incremental training approach can potentially speed up the training significantly due to savings in the early stages of training.

The rest of the paper is organized as follows. Section 2 summarizes relevant work. We describe our approach in Section 3 and then present some intuition in Section 4. Finally, we show our experimental results in Section 5.

Our work is closely related to a compression technique based on network pruning. However, the important difference is that we do not try to select the connections which are redundant. Instead, we just fix a random connectivity pattern and let the network train around it. We also give a brief overview of two other popular techniques, quantization and decomposition, though these directions are not the main focus and could be complementary to our work.

Network pruning. Much initial work on neural network compression focuses on removing unimportant connections using weight decay. Hanson & Pratt (1989) introduce hyperbolic and exponential biases to the objective. Optimal Brain Damage (LeCun et al., 1989) and Optimal Brain Surgeon (Hassibi & Stork, 1993) prune the networks based on second-order derivatives of the objectives. Recent work by Han et al. (2015; 2016a) alternates between pruning near-zero weights, which are encouraged by l1 or l2 regularization, and retraining the pruned networks.

More complex regularizers have also been considered. Wen et al. (2016) and Li et al. (2016) put structured sparsity regularizers on the weights, while Murray & Chiang (2015) put them on the hidden units. Feng & Darrell (2015) explore a nonparametric prior based on the Indian buffet process (Griffiths & Ghahramani, 2011) on layers. Hu et al. (2016) prune neurons based on the analysis of their outputs on a large dataset. Anwar et al. (2015b) consider special sparsity patterns: channel-wise (removing a feature map/channel from a layer), kernel-wise (removing all connections between two feature maps in consecutive layers), and intra-kernel-strided (removing connections between two features with particular stride and offset). They also propose to use particle filters to decide the importance of connections and paths during training.

Another line of work explores fixed network architectures with some subsets of connections removed.
For example, LeCun et al. (1998) remove connections between the first two convolutional feature maps in a completely uniform manner. This is similar to our approach but they only consider a pre-defined pattern in which the same number of input feature maps are assigned to each output feature map (Random Connection Table in Torch's SpatialConvolutionMap function). Further, they do not explore how sparse connections affect performance compared to dense networks. Along a similar vein, Ciresan et al. (2011) remove random connections in their MNIST experiments. However, they do not try to preserve the spatial convolutional density and it might be a challenge to harvest the savings on existing hardware. Ioannou et al. (2016a) explore three types of hierarchical arrangements of filter groups for CNNs, which depend on different assumptions about the co-dependency of filters within each layer. These arrangements include columnar topologies inspired by AlexNet (Krizhevsky et al., 2012), tree-like topologies previously used by Ioannou et al. (2016b), and root-like topologies. Finally, Howard (2017) proposes the depth multiplier method to scale down the number of filters in each convolutional layer by a factor. In this case, depth multiplier can be thought of as the channel-wise pruning mentioned in (Anwar et al., 2015b). However, depth multiplier modifies the network architectures before training and removes each layer's feature maps in a uniform manner.

With the exception of (Anwar et al., 2015b; Li et al., 2016; Ioannou et al., 2016a) and depth multiplier (Howard, 2017), the above previous work performs connection pruning that leads to irregular network architectures. Thus, those techniques require additional efforts to represent network connections and might or might not allow for direct computational savings.

Quantization. Reducing the degree of redundancy of model parameters can be done in the form of quantization of network parameters. Hwang & Sung (2014); Arora et al. (2014) and Courbariaux et al. (2015; 2016); Rastegari et al. (2016) propose to train CNNs with ternary weights and binary weights, respectively. Gong et al. (2014) use vector quantization for parameters in fully connected layers. Anwar et al. (2015a) quantize a network with the squared error minimization. Chen et al. (2015) randomly group network parameters using a hash function. We note that this technique could be complementary to network pruning. For example, Han et al. (2016a) combine connection pruning in (Han et al., 2015) with quantization and Huffman coding.

Decomposition. Another approach is based on low-rank decomposition of the parameters. Decomposition methods include truncated SVD (Denton et al., 2014), decomposition to rank-1 bases (Jaderberg et al., 2014), CP decomposition (PARAFAC or CANDECOMP) (Lebedev et al., 2015), Tensor-Train decomposition of Oseledets (2011) (Novikov et al., 2015), sparse dictionary learning
of Mairal et al. (2009) and PCA (Liu et al., 2015), asymmetric (3D) decomposition using reconstruction loss of non-linear responses combined with a rank selection method based on PCA accumulated energy (Zhang et al., 2015b;a), and Tucker decomposition using the kernel tensor reconstruction loss combined with a rank selection method based on global analytic variational Bayesian matrix factorization (Kim et al., 2016).

Hinton et al. (2012); Srivastava et al. (2014) propose Dropout for regularizing fully connected layers within neural networks by randomly setting a subset of activations to zero during training. Wan et al. (2013) later propose DropConnect, a generalization of Dropout that instead randomly sets a subset of weights or connections to zero. Our approach could be thought of as related to DropConnect, but (1) we remove connections before training; (2) we focus on connections between convolutional layers; and (3) we kill connections in a more regular manner by restricting connection patterns to be the same along spatial dimensions.

Recently, Han et al. (2016b) and Jin et al. (2016) propose a form of regularization where dropped connections are unfrozen and the network is retrained. This idea is similar to our incremental training approach. However, (1) we do not start with a full network; (2) we do not unfreeze connections all at once; and (3) we preserve the regularity of the convolution operation.

Network compression and architectures are closely related. The goal of compression is to remove redundancy in network parameters; therefore, knowledge about the traits that determine an architecture's success would be desirable. Other than the discovery that depth is an important factor (Ba & Caruana, 2014), little is known about such traits.

Some previous work performs architecture search but without the main goal of doing compression (Murray & Chiang, 2015; De Brabandere et al., 2016). Recent work proposes shortcut/skip connections to convolutional networks. See, among others, highway networks (Srivastava et al., 2015), residual networks (He et al., 2016a;b), networks with stochastic depth (Huang et al., 2016b), and densely connected convolutional networks (Huang et al., 2016a).

A CNN architecture consists of (1) convolutional layers, (2) pooling layers, (3) fully connected layers, and (4) a topology that governs how these layers are organized. Given an architecture, our general goal is to transform it into another architecture with a smaller number of parameters. In this paper, we limit ourselves to transformation functions that keep the general topology of the input architecture intact. Moreover, the main focus will be on the convolutional layers and convolution operations, as they impose the highest computational and memory burden for most if not all large networks."}, {"section_index": "3", "section_name": "3.1 DEPTH MULTIPLIER", "section_text": "We first give a description of the depth multiplier method used in Howard (2017). Given a hyperparameter α ∈ (0, 1], the depth multiplier approach scales down the number of filters in each convolutional layer by α. Note that depth here refers to the third dimension of the activation volume of a single layer, not the number of layers in the whole network.

Let n_{l-1} and n_l be the number of input and output filters at layer l, respectively. After the operation, n_{l-1} and n_l become αn_{l-1} and αn_l, and the number of parameters (and the number of multiplications) becomes approximately α^2 of the original number.

The result of this operation is a network that is both 1/α^2 smaller and faster. Many large networks can be significantly reduced in size using this method with only a small loss of precision (Howard, 2017). It is our belief that this method establishes a strong baseline to which any other advanced techniques should compare themselves. To the best of our knowledge, we are not aware of such comparisons in the literature.

Instead of looking at depth multiplier as deactivating channels in the convolutional layers, we can look at it from the perspective of deactivating connections. From this point of view, depth multiplier kills the connections between two convolutional layers such that (a) the connection patterns are still the same across spatial dimensions and (b) all \"alive\" input channels are fully connected to all \"alive\" output channels.

We generalize this approach by relaxing (b) while maintaining (a). That is, for every output channel, we connect it to a small subset of input channels.
In other words, dense connections between a small number of channels become sparse connections between a larger number of channels. This can be summarized in Fig. 1. The advantage of this is that the actual convolution can still be computed efficiently, because sparsity is introduced only at the outer loop of the convolution operation and we can still take advantage of the continuous memory layout. For more details regarding implementations of the two approaches, please refer to the Appendix.

More concretely, let n_{l-1} and n_l be the number of channels of layer l - 1 and layer l, respectively. For a sparsity coefficient α, each output filter j only connects to an α fraction of the filters of the previous layer. Thus, instead of having a connectivity matrix W_{:,i,j} of dimension k^2 x n_{l-1} x n_l, we have a sparse matrix with non-zero entries at W_{:,a_{ij},j}, where a_{ij} is an index matrix of dimension k^2 x αn_{l-1} x n_l and k is the kernel size.

[Figure: two connection tensors over input channels n_{l-1}, output channels n_l, and spatial dimensions H x W.]

Figure 1: Connection tensors of the depth multiplier (left) and sparse random (right) approaches for n_{l-1} = 5 and n_l = 10. Yellow denotes active connections. For both approaches, the connection pattern is the same across spatial dimensions and fixed before training. However, in the sparse random approach, each output channel is connected to a (possibly) different subset of input channels, and vice versa."}, {"section_index": "4", "section_name": "3.2.1 INCREMENTAL TRAINING", "section_text": "In contrast to depth multiplier, a sparse convolutional network defines a connection pattern on a much bigger network. Therefore, an interesting extension is to consider incremental training: we start with a network that only contains a small fraction of connections (in our experiments we use 1% and 0.1%) and add connections over time. This is motivated by an intuition that the network can use learned channels in new contexts by introducing additional connections. The potential practical advantage of this approach is that since we start training with very small networks and grow them over time, this approach has the potential to speed up the whole training process significantly. We note that depth multiplier will not benefit from this approach, as any newly activated connections would require learning new filters from scratch."}, {"section_index": "5", "section_name": "4 ANALYSIS", "section_text": "In this section, we approach the question of why sparse convolutions are frequently more efficient than dense convolutions with the same number of parameters. Our main intuition is that sparse convolutional networks promote diversity: it is much harder to learn an equivalent set of channels since, at high sparsity, channels have distinct connection structures or even overlapping connections. This can be formalized with a simple observation that any dense network is in fact part of an exponentially large equivalence class, which is guaranteed to produce the same output for every input.

Lemma 1. Any dense convolutional neural network with no cross-channel nonlinearities, distinct weights and biases, and with l hidden layers of sizes n_1, n_2, ..., n_l, has at least ∏_{i=1}^{l} n_i! distinct equivalent networks which produce the same output.

Proof. Let I denote the input to the network, C_i be the convolutional operator, σ_i denote the nonlinearity operator applied to the i-th convolution layer, and S be a final transformation (e.g. a softmax classifier). We assume that σ_i is a function that operates on each of the channels independently. We note that this is the case for almost any modern network. The output of the network can then be written as:

N(I) = S ∘ σ_l ∘ C_l ∘ σ_{l-1} ∘ ... ∘ σ_1 ∘ C_1(I)

where we use ∘ to denote function composition to avoid numerous parentheses. The convolution operator C_i operates on input with n_{i-1} channels and produces an output with n_i channels.
Now, fix an arbitrary set of permutation functions π_1, ..., π_l, where π_i can permute a depth of size n_i. Since π_i is a linear function, it follows that C'_i = π_i ∘ C_i ∘ π_{i-1}^{-1} is a valid convolutional operator, which can be obtained from C_i by permuting its bias according to π_i and its weight matrix along the input and output dimensions according to π_{i-1} and π_i respectively. For a new network defined as:

N'(I) = S' ∘ σ_l ∘ C'_l ∘ σ_{l-1} ∘ ... ∘ σ_1 ∘ C'_1(I)

where π_0 is an identity operator and S' = S ∘ π_l^{-1}, we claim that N'(I) = N(I). Indeed, since nonlinearities do not apply cross-depth we have π_i ∘ σ_i = σ_i ∘ π_i, and thus:

N'(I) = S' ∘ σ_l ∘ [π_l ∘ C_l ∘ π_{l-1}^{-1}] ∘ σ_{l-1} ∘ ... ∘ σ_1 ∘ [π_1 ∘ C_1 ∘ π_0^{-1}](I) = S ∘ σ_l ∘ C_l ∘ σ_{l-1} ∘ ... ∘ σ_1 ∘ C_1(I) = N(I)

Thus, any set of permutations on hidden units defines an equivalent network.

It is obvious that sparse networks are much more immune to parameter permutation: indeed, at high sparsity, every channel at layer l is likely to have a unique tree describing its connection matrix all the way down. Exploring this direction is an interesting open question."}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "MNIST and CIFAR-10. We use standard networks provided by TensorFlow. The MNIST network has 3 convolutional layers and achieves 99.5% accuracy when fully trained. The CIFAR-10 network has 2 convolutional layers and achieves 87% accuracy.

ImageNet. We use the open source Inception-V3 (Szegedy et al., 2016) network and a slightly modified version of VGG-16 (Simonyan & Zisserman, 2015) called VGG-16n, on ImageNet ILSVRC 2012 (Deng et al., 2009; Russakovsky et al., 2015).

Random connections. Connections are activated according to their likelihood from the uniform distribution. In addition, they are activated in such a way that there are no connections going in or coming out of dead filters (i.e., any connection must have a path to the input image and a path to the final prediction). All connections in fully connected layers are retained.

Implementation details. All code is implemented in TensorFlow (Abadi et al., 2015). Deactivating connections is done by applying masks to parameter tensors.
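The masking mentioned in the preceding sentence can be sketched as follows; this is a minimal illustration in TensorFlow 1.x style (shapes, names, and hyperparameters are illustrative, not taken from the paper's code):

import numpy as np
import tensorflow as tf

k, n_in, n_out, alpha = 3, 16, 32, 0.25

# Fixed binary mask over (input channel, output channel) pairs, broadcast
# across the k x k spatial taps so spatial convolution stays dense.
channel_mask = (np.random.rand(n_in, n_out) < alpha).astype(np.float32)
mask = tf.constant(np.broadcast_to(channel_mask, (k, k, n_in, n_out)).copy())

w = tf.Variable(tf.random_normal([k, k, n_in, n_out], stddev=0.1))
x = tf.placeholder(tf.float32, [None, 28, 28, n_in])

# Multiplying by the mask zeroes both the forward contribution and the
# gradient of deactivated connections, so they stay inactive during training.
y = tf.nn.conv2d(x, w * mask, strides=[1, 1, 1, 1], padding='SAME')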
The Inception-V3 and VGG-16 networks are trained on 8 Tesla K80 GPUs, each with a batch size of 256 (32 per GPU); batch normalization was used for all networks.

In this section, we demonstrate the effectiveness of the sparse random approach by comparing it to the depth multiplier approach at different compression rates. Moreover, we examine several settings of incremental training, where connections gradually become active during the training process.

We first compare depth multiplier and sparse random for the two small networks on MNIST and CIFAR-10. We compare the accuracy of the two approaches when the numbers of connections are roughly the same, based on a hyperparameter α. For dense convolutions, we pick a multiplier and each filter depth is scaled down by √α and then rounded up. In sparse convolutions, a fraction α of connections are randomly deactivated if those parameters connect at least two filters on each layer; otherwise, a fraction √α is used instead if the parameters connect layers with only one filter left. The accuracy numbers are averaged over 5 rounds for MNIST and 2 rounds on CIFAR-10.

We show in Fig. 2 and Fig. 3 that the sparse networks have comparable or higher accuracy for the same number of parameters, with comparable accuracy at higher density. We note however that these networks are so small that at high compression rates most of the operations are concentrated at the first layer, which is negligible for large networks. Moreover, in the MNIST example, the size of the network changes most dramatically from 2000 to 2 million parameters, while affecting accuracy only by 1%. This observation suggests that there might be benefits to maintaining a high number of filters and/or breaking the symmetry of connections. We explore this in the next section."}, {"section_index": "7", "section_name": "5.2.2 INCEPTION-V3 ON IMAGENET", "section_text": "We consider different values of sparsity ranging from 0.003 to 1, and depth multiplier from 0.05 to 1. Our experiments show (see Table 1 and Fig. 4) a significant advantage of sparse networks over equivalently sized dense networks. We note that due to time constraints the reported quantitative numbers are preliminary, as the networks have not finished converging. We expect the final numbers to match the reported number for Inception V3 (Szegedy et al., 2016), and the smaller networks to have comparable improvement.

[Figure: precision@1 vs. number of parameters and vs. number of multiply-adds for MNIST (epoch = 10), sparse vs. dense convolutions.]

Figure 2: Comparison of accuracy (averaged over 5 rounds) vs. number of parameters/number of multiply-adds between dense and sparse convolutions on the MNIST dataset. Note that though sparse convolutions result in a better parameter trade-off curve, the multiply-add curve shows the opposite pattern.
[Figure: precision@1 vs. number of parameters and vs. number of multiply-adds for CIFAR-10 (epoch = 1200), sparse vs. dense convolutions.]

Figure 3: Comparison of accuracy (averaged over 2 rounds) vs. number of parameters/number of multiply-adds between dense and sparse convolutions on the CIFAR-10 dataset.

Table 1: Inception V3: preliminary quantitative results after 100 epochs. Note the smallest sparse network is actually a hybrid network: we used both depth multiplier (0.5) and sparsity (0.01). The number of parameters is the number of parameters excluding the softmax layer.

Accuracy for sparse convolutions:
Sparsity    MAdds   Params  P@1
0.50/0.01   43.0 M  90 k    40.3
0.003       82.0 M  158 k   46.1
0.01        104 M   287 k   52.3
0.03        208 M   724 k   59.5
0.10        628 M   2.3 M   67.2
0.30        1.80 B  6.6 M   73
0.60        3.50 B  13 M    75
1.00        5.70 B  22 M    77

Accuracy for depth multiplier:
Multiplier  MAdds   Params  P@1
0.05        55.0 M  56 k    24.6
0.10        75.0 M  170 k   38.6
0.20        183 M   718 k   54.2
0.30        439 M   1.8 M   64.0
0.50        1.40 B  5.4 M   72.3
0.80        3.40 B  13 M    75.6

In our experiments with the VGG-16 network (Simonyan & Zisserman, 2015), we modify the model architecture (calling it VGG-16n) by removing the two fully-connected layers with depth 4096 and replacing them with a 2 x 2 maxpool layer followed by a 3 x 3 convolutional layer with a depth of 1024. This alone sped up our training significantly. The comparison between the depth multiplier and sparse connection approaches is shown in Fig. 5. The modified VGG-16n network has about 7 times fewer parameters, but appears to have comparable precision.

[Figure: precision@1 vs. number of parameters and vs. number of multiply-adds for Inception V3 (step = 1,150,000), sparse vs. dense convolutions.]

Figure 4: Inception V3: comparison of precision@1 vs. number of parameters/number of multiply-adds between dense and sparse convolutions on ImageNet/Inception-V3. The full network corresponds to the right-most point of the curve.

[Figure: precision@1 vs. number of parameters and vs. number of multiply-adds for VGG (epoch = 100), sparse vs. dense convolutions.]

Figure 5: VGG-16: preliminary quantitative results. Comparison of precision@1 vs. number of parameters/number of multiply-adds between dense and sparse convolutions on ImageNet/VGG-16n. The full network corresponds to the right-most point of the curve. The original VGG-16 as described in Simonyan & Zisserman (2015) (blue star) and the same model trained by us from scratch (red cross) are also shown.

[Figure: precision@1 during incremental training of Inception V3 for 10k doubling (α = 0.001), 25k doubling (α = 0.01), and 50k doubling (α = 0.01), with the full Inception V3 curve and the saturation points marked.]

Figure 6: Incremental training of Inception V3: we show precision@1 during the training process, where the networks densify over time. The saturation points show where the networks actually reach their full density.

In our experiments, we initially start training Inception-V3 with only 1% or 0.1% of connections enabled. Then, we double the number of connections every T steps. We use T = 10,000, T = 25,000 and T = 50,000.
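The doubling schedule just described can be written down in a few lines; the following is a minimal sketch (names and the exact cap are illustrative):

def connection_schedule(total_steps, t_double, start_fraction=0.01):
    # Fraction of active connections: starts at start_fraction and doubles
    # every t_double steps, capped at 1.0 (full density).
    fraction = start_fraction
    schedule = {}
    for step in range(0, total_steps, t_double):
        schedule[step] = min(fraction, 1.0)
        fraction *= 2.0
    return schedule

# Example: starting from 1% with T = 25,000, full density is reached after
# roughly 7 doublings (about 175k steps).
print(connection_schedule(200_000, 25_000))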
The results are presented in Fig. 6. We show that the networks trained with the incremental approach, regardless of the doubling period, can catch up with the full Inception-V3 network (in some cases with small gains). Moreover, they recover very quickly from adding more (untrained) connections. In fact, the recovery is so fast that it is shorter than our saving interval for all the networks except the network with the 10K doubling period (resulting in the sharp drop). We believe that incremental training is a promising direction for speeding up the training of large convolutional neural networks, since the early stages of training require much less computation."}, {"section_index": "8", "section_name": "6 CONCLUSION", "section_text": "We have proposed a new compression technique that uses a sparse random connection structure between input-output filters in convolutional layers of CNNs. We fix this structure before training and use the same structure across spatial dimensions to harvest savings from modern hardware. We show that this approach is especially useful at very high compression rates for large networks. For example, this simple method when applied to Inception V3 (Fig. 4) achieves AlexNet-level accuracy (Krizhevsky et al., 2012) with fewer than 400K parameters and VGG-level accuracy (Fig. 5) with roughly 3.5M parameters. The simplicity of our approach is instructive in that it establishes a strong baseline to compare against when developing more advanced techniques. On the other hand, the uncanny match in performance of dense and equivalently-sized sparse networks with sparsity > 0.1 suggests that there might be some fundamental property of network architectures that is controlled by the number of parameters, regardless of how they are organized. Exploring this further might yield additional insights on understanding neural networks.

In addition, we show that our method leads to an interesting novel incremental training technique, where we take advantage of sparse (and smaller) models to build a dense network. One interesting open direction is to enable incremental training not to simply densify the network over time, but also to increase the number of channels. This would allow us to grow the network without having to fix its original shape in place.

Finally, we show that incremental training is a promising direction. We start with a very sparse model and increase its density over time, using the approach described in Sect. 3.2.1. We note that a naive approach where we simply add filters results in a training process basically equivalent to one started from scratch at every step. On the other hand, when the network densifies over time, all channels already possess some discriminative power, and that information is utilized."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Fixed point optimization of deep convolutional neural networks for object recognition. In ICASSP, 2015a.

Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured pruning of deep convolutional neural networks. arXiv preprint arXiv:1512.08571, 2015b.

Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In ICML, 2014.
Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In NIPS, 2014.

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In ICML, 2015.

Dan C. Ciresan, Ueli Meier, Jonathan Masci, Luca M. Gambardella, and Jurgen Schmidhuber. High-performance neural networks for visual object classification. arXiv preprint arXiv:1102.0183, 2011.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. In NIPS, 2015.

Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.

Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. In NIPS, 2016.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In NIPS, 2013.

Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014.

Jiashi Feng and Trevor Darrell. Learning the structure of deep convolutional networks. In ICCV, 2015.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Thomas L. Griffiths and Zoubin Ghahramani. The Indian buffet process: An introduction and review. Journal of Machine Learning Research, 12(Apr):1185-1224, 2011.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, 2016a.

Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Shijian Tang, Erich Elsen, Bryan Catanzaro, John Tran, and William J. Dally. DSD: Regularizing deep neural networks with dense-sparse-dense training flow. arXiv preprint arXiv:1607.04381, 2016b.

Stephen Jose Hanson and Lorien Y. Pratt. Comparing biases for minimal network construction with back-propagation. In NIPS, 1989.

Babak Hassibi and David G. Stork. Second order derivatives for network pruning: Optimal Brain Surgeon. In NIPS, 1993.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In ECCV, 2016b.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
Andrew G. Howard. MobileNets: Efficient convolutional neural networks for mobile vision applications. Forthcoming, 2017.

Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016.

Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016a.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016b.

Kyuyeon Hwang and Wonyong Sung. Fixed-point feedforward deep neural network design using weights +1, 0, and -1. In IEEE Workshop on Signal Processing Systems (SiPS), 2014.

Yani Ioannou, Duncan Robertson, Roberto Cipolla, and Antonio Criminisi. Deep roots: Improving CNN efficiency with hierarchical filter groups. arXiv preprint arXiv:1605.06489, 2016a.

Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with low-rank filters for efficient image classification. In ICLR, 2016b.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.

Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, and Shuicheng Yan. Training skinny deep neural networks with iterative hard thresholding methods. arXiv preprint arXiv:1607.05423, 2016.

Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In ICLR, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. In ICLR, 2015.

Yann LeCun, John S. Denker, Sara A. Solla, Richard E. Howard, and Lawrence D. Jackel. Optimal brain damage. In NIPS, 1989.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.

Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In CVPR, 2015.

Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online dictionary learning for sparse coding. In ICML, 2009.

Kenton Murray and David Chiang. Auto-sizing neural networks: With applications to n-gram language models. In EMNLP, 2015.

Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P. Vetrov. Tensorizing neural networks. In NIPS, 2015.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

Rupesh K. Srivastava, Klaus Greff, and Jurgen Schmidhuber. Training very deep networks. In NIPS, 2015.

Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015a.

Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015b.
"}, {"section_index": "10", "section_name": "A ADDITIONAL DETAILS ON DENSE VS. SPARSE CONVOLUTIONS", "section_text": "We contrast naive implementations of dense and sparse convolutions (cf. Sect. 3) in Algorithm 1 and Algorithm 2. We emphasize that we do not use sparse matrices and only introduce sparsity from channel to channel. Thus, walltime is governed mostly by the number of multiply-adds; the basic operation (convolving the entire image plane in Line 8 of both algorithms) is unchanged.

Algorithm 1 Naive implementation of dense convolution
1: Inputs:
2: - input: Data tensor
3: - W: Parameter tensor
4: - input_channels: Array of input channel IDs
5: - output_channels: Array of output channel IDs
6: for i in input_channels do
7:   for o in output_channels do
8:     output[o] <- output[o] + convolve(input[i], W[i, o, ...])
9:   end for
10: end for
11: return output

Algorithm 2 Naive implementation of sparse convolution
1: Inputs:
2: - input: Data tensor
3: - W: Parameter tensor
4: - input_channels: Array of input channel IDs
5: - output_channels_connected_to_i: Array of arrays of output channel IDs, specifying the connections of each input channel
6: for i in input_channels do
7:   for index, o in enumerate(output_channels_connected_to_i[i]) do
8:     output[o] <- output[o] + convolve(input[i], W[i, index, ...])
9:   end for
10: end for
11: return output"}]
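For readers who want to experiment with Algorithm 2, here is a runnable NumPy sketch of the channel-wise sparse convolution ('same' padding, stride 1, bias omitted; the weight layout is one assumption among several possible):

import numpy as np

def sparse_conv2d(x, w, out_channels, connections, k=3):
    # x: input of shape (H, W, n_in). connections[i] lists the output
    # channels that input channel i feeds; w[i] holds one k x k kernel per
    # such connection, i.e. w[i] has shape (len(connections[i]), k, k).
    h, wd, n_in = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros((h, wd, out_channels))
    for i in range(n_in):                       # outer loop over channels
        for idx, o in enumerate(connections[i]):
            ker = w[i][idx]
            for r in range(h):                  # dense 2-D plane convolution
                for c in range(wd):
                    out[r, c, o] += np.sum(xp[r:r + k, c:c + k, i] * ker)
    return out

# Tiny usage example with a random sparse pattern (about 50% sparsity).
rng = np.random.default_rng(0)
conns = [sorted(rng.choice(4, size=2, replace=False)) for _ in range(3)]
w = [rng.standard_normal((2, 3, 3)) for _ in range(3)]
y = sparse_conv2d(rng.standard_normal((8, 8, 3)), w, 4, conns)
print(y.shape)  # (8, 8, 4)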
SyK00v5xx [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Word embeddings computed using diverse methods are basic building blocks for Natural Language Processing (NLP) and Information Retrieval (IR). They capture the similarities between words (e.g., (Bengio et al., 2003; Collobert & Weston, 2008; Mikolov et al., 2013a; Pennington et al., 2014)). Recent work has tried to compute embeddings that capture the semantics of word sequences (phrases, sentences, and paragraphs), with methods ranging from simple additive composition of the word vectors to sophisticated architectures such as convolutional neural networks and recurrent neural networks (e.g., (Iyyer et al., 2015; Le & Mikolov, 2014; Kiros et al., 2015; Socher et al., 2011; Blunsom et al., 2014; Tai et al., 2015; Wang et al., 2016)). Recently, Wieting et al. (2016) learned general-purpose, paraphrastic sentence embeddings by starting with standard word embeddings and modifying them based on supervision from the Paraphrase Database (PPDB), and constructing sentence embeddings by training a simple word averaging model. This simple method leads to better performance on textual similarity tasks than a wide variety of methods and serves as a good initialization for textual classification tasks. However, supervision from the paraphrase dataset seems crucial, since they report that a simple average of the initial word embeddings does not work very well.

Here we give a new sentence embedding method that is embarrassingly simple: just compute the weighted average of the word vectors in the sentence and then remove the projections of the average vectors on their first singular vector (\"common component removal\"). Here the weight of a word w is a/(a + p(w)), with a being a parameter and p(w) the (estimated) word frequency; we call this"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "The current paper goes further, showing that the following completely unsupervised sentence embedding is a formidable baseline: Use word embeddings computed using one of the popular methods on an unlabeled corpus like Wikipedia, represent the sentence by a weighted average of the word vectors, and then modify them a bit using PCA/SVD. This weighting improves performance by about 10% to 30% in textual similarity tasks, and beats sophisticated supervised methods including RNNs and LSTMs. It even improves Wieting et al.'s embeddings. This simple method should be used as the baseline to beat in future, especially when labeled training data is scarce or nonexistent.

smooth inverse frequency (SIF).1 This method achieves significantly better performance than the unweighted average on a variety of textual similarity tasks, and on most of these tasks even beats some sophisticated supervised methods tested in Wieting et al. (2016), including some RNN and LSTM models. The method is well-suited for domain adaptation settings, i.e., word vectors trained on various kinds of corpora are used for computing the sentence embeddings in different testbeds.
It is also fairly robust to the weighting scheme: using the word frequencies estimated from different corpora does not harm the performance; a wide range of the parameter a can achieve close-to-best results, and an even wider range can achieve significant improvement over the unweighted average.

Of course, this SIF reweighting is highly reminiscent of TF-IDF reweighting from information retrieval (Sparck Jones, 1972; Robertson, 2004) if one treats a \"sentence\" as a \"document\" and makes the reasonable assumption that the sentence doesn't typically contain repeated words. Such reweightings (or related ideas like removing frequent words from the vocabulary) are a good rule of thumb but have not had theoretical justification in a word embedding setting.

The current paper provides a theoretical justification for the reweighting using a generative model for sentences, which is a simple modification of the Random Walk on Discourses model for generating text in (Arora et al., 2016). In that paper, it was noted that the model theoretically implies a sentence embedding, namely, the simple average of the embeddings of all the words in it.

We modify this theoretical model, motivated by the empirical observation that most word embedding methods, since they seek to capture word cooccurrence probabilities using vector inner products, end up giving large vectors to frequent words, as well as giving unnecessarily large inner products to word pairs, simply to fit the empirical observation that words sometimes occur out of context in documents. These anomalies cause the average of word vectors to have huge components along semantically meaningless directions. Our modification to the generative model of (Arora et al., 2016) allows \"smoothing\" terms, and then a max likelihood calculation leads to our SIF reweighting.

Interestingly, this theoretically derived SIF does better (by a few percent points) than traditional TF-IDF in our setting. The method also improves the sentence embeddings of Wieting et al., as seen in Table 1. Finally, we discovered that, contrary to widespread belief, word2vec (CBOW) also does not use a simple average of word vectors in the model, as misleadingly suggested by the usual expression Pr[w | w_1, w_2, ..., w_5] ∝ exp(⟨v_w, (1/5) Σ_{i=1}^{5} v_{w_i}⟩). A dig into the implementation shows it implicitly uses a weighted average of word vectors, again different from TF-IDF, and this weighting turns out to be quite similar in effect to ours. (See Section 3.1.)

1 The code is available on https://github.com/PrincetonML/SIF"}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Our work is most directly related to (Arora et al., 2016), which proposed a random walk model for generating words in documents. Our sentence vector can be seen as approximate inference of the latent variables in their generative model.

Word embeddings. Word embedding methods represent words as continuous vectors in a low-dimensional space which capture the lexical and semantic properties of words. They can be obtained from the internal representations of neural network models of text (Bengio et al., 2003; Collobert & Weston, 2008; Mikolov et al., 2013a) or by low-rank approximation of co-occurrence statistics (Deerwester et al., 1990; Pennington et al., 2014). The two approaches are known to be closely related (Levy & Goldberg, 2014; Hashimoto et al., 2016; Arora et al., 2016).

Phrase/Sentence/Paragraph embeddings. Previous works have computed phrase or sentence embeddings by composing word embeddings using operations on vectors and matrices, e.g., (Mitchell & Lapata, 2008; 2010; Blacoe & Lapata, 2012). They found that coordinate-wise multiplication of the vectors performed very well among the binary operations studied. Unweighted averaging is also found to do well in representing short phrases (Mikolov et al., 2013a). Another approach is recursive neural networks (RNNs) defined on the parse tree, trained with supervision (Socher et al., 2011) or without (Socher et al., 2014). Simple RNNs can be viewed as a special case where the parse tree is replaced by a simple linear chain. For example, the skip-gram model (Mikolov et al.,
2013b) is extended to incorporate a latent vector for the sequence, or to treat the sequences rather than the words as basic units. In (Le & Mikolov, 2014) each paragraph was assumed to have a latent paragraph vector, which influences the distribution of the words in the paragraph. Skip-thought (Kiros et al., 2015) tries to reconstruct the surrounding sentences from the surrounded one and treats the hidden parameters as their vector representations. RNNs using long short-term memory (LSTM) capture long-distance dependency and have also been used for modeling sentences (Tai et al., 2015). Other neural network structures include convolutional neural networks, such as (Blunsom et al., 2014), which uses dynamic pooling to handle input sentences of varying length and does well in sentiment prediction and classification tasks.

¹The code is available on https://github.com/PrincetonML/SIF

Word embeddings. Word embedding methods represent words as continuous vectors in a low-dimensional space which capture lexical and semantic properties of words. They can be obtained from the internal representations of neural network models of text (Bengio et al., 2003; Collobert & Weston, 2008; Mikolov et al., 2013a) or by low-rank approximation of co-occurrence statistics (Deerwester et al., 1990; Pennington et al., 2014). The two approaches are known to be closely related (Levy & Goldberg, 2014; Hashimoto et al., 2016; Arora et al., 2016).

The direct inspiration for our work is (Wieting et al., 2016), which learned paraphrastic sentence embeddings by using simple word averaging and also updating standard word embeddings based on supervision from paraphrase pairs; the supervision being used for both initialization and training.

We briefly recall the latent variable generative model for text in (Arora et al., 2016). The model treats corpus generation as a dynamic process, where the t-th word is produced at step t. The process is driven by the random walk of a discourse vector $c_t \in \mathbb{R}^d$. Each word w in the vocabulary has a vector in $\mathbb{R}^d$ as well; these are latent variables of the model. The discourse vector represents "what is being talked about." The inner product between the discourse vector $c_t$ and the (time-invariant) word vector $v_w$ for word w captures the correlations between the discourse and the word. The probability of observing a word w at time t is given by a log-linear word production model from Mnih and Hinton:

$$\Pr[w \text{ emitted at time } t \mid c_t] \propto \exp(\langle c_t, v_w \rangle). \qquad (1)$$

The discourse vector $c_t$ does a slow random walk (meaning that $c_{t+1}$ is obtained from $c_t$ by adding a small random displacement vector), so that nearby words are generated under similar discourses. It was shown in (Arora et al., 2016) that under some reasonable assumptions this model generates behavior, in terms of word-word cooccurrence probabilities, that fits empirical works like word2vec and GloVe. The random walk model can be relaxed to allow occasional big jumps in $c_t$, since a simple calculation shows that they have a negligible effect on cooccurrence probabilities of words. The word vectors computed using this model are reported to be similar to those from GloVe and word2vec (CBOW).
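To make the emission model (1) concrete, here is a minimal numpy sketch that samples one word from the log-linear production model; the vocabulary size, dimension, and random vectors are illustrative stand-ins, not quantities from the paper.

```python
# Sample one word from Pr[w | c_t] ∝ exp(<c_t, v_w>), Eq. (1).
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 50, 1000                           # embedding dim, vocabulary size
V = rng.normal(size=(vocab, d)) / np.sqrt(d)  # word vectors v_w (latent)
c_t = rng.normal(size=d) / np.sqrt(d)         # current discourse vector

logits = V @ c_t                              # <c_t, v_w> for every word w
p = np.exp(logits - logits.max())             # softmax with the usual max-shift
p /= p.sum()                                  # normalize by the partition function Z_{c_t}
w_t = rng.choice(vocab, p=p)                  # emit one word from the model
```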
Our improved Random Walk model. Clearly, it is tempting to define the sentence embedding as follows: given a sentence s, do a MAP estimate of the discourse vectors that govern this sentence. We note that we assume the discourse vector $c_t$ doesn't change much while the words in the sentence were emitted, and thus we replace, for simplicity, all the $c_t$'s in the sentence s by a single discourse vector $c_s$. In the paper (Arora et al., 2016), it was shown that the MAP estimate of $c_s$ is, up to multiplication by a scalar, the average of the embeddings of the words in the sentence.

In this paper, towards more realistic modeling, we change the model (1) as follows. This model has two types of "smoothing terms", which are meant to account for the fact that some words occur out of context, and that some frequent words (presumably "the", "and", etc.) appear often regardless of the discourse. We first introduce an additive term $\alpha p(w)$ in the log-linear model, where p(w) is the unigram probability (in the entire corpus) of the word and $\alpha$ is a scalar. This allows words to occur even if their vectors have very low inner products with $c_s$. Secondly, we introduce a common discourse vector $c_0 \in \mathbb{R}^d$ which serves as a correction term for the most frequent discourse, which is often related to syntax. (Other possible corrections are left to future work.) It boosts the co-occurrence probability of words that have a high component along $c_0$.

Concretely, given the discourse vector $c_s$, the probability of a word w being emitted in the sentence s is modeled by

$$\Pr[w \text{ emitted in sentence } s \mid c_s] = \alpha\, p(w) + (1-\alpha)\, \frac{\exp(\langle \tilde{c}_s, v_w \rangle)}{Z_{\tilde{c}_s}}, \quad \text{where } \tilde{c}_s = \beta c_0 + (1-\beta) c_s,\ c_0 \perp c_s, \qquad (2)$$

where $\alpha$ and $\beta$ are scalar hyperparameters, and $Z_{\tilde{c}_s} = \sum_{w \in V} \exp(\langle \tilde{c}_s, v_w \rangle)$ is the normalizing constant (the partition function). We see that the model allows a word w unrelated to the discourse $c_s$ to be emitted for two reasons: a) by chance from the term $\alpha p(w)$; b) if w is correlated with the common discourse vector $c_0$.

Computing the sentence embedding. The word embeddings yielded by our model are actually the same.² The sentence embedding will be defined as the maximum likelihood estimate for the vector $c_s$ that generated it. (In this case the MLE is the same as the MAP since the prior is uniform.) We borrow the key modeling assumption of (Arora et al., 2016), namely that the word vectors $v_w$ are roughly uniformly dispersed, which implies that the partition function $Z_c$ is roughly the same in all directions. So assume that $Z_{\tilde{c}_s}$ is roughly the same, say Z, for all $\tilde{c}_s$. By the model (2) the likelihood for the sentence is

$$p[s \mid c_s] = \prod_{w \in s} p(w \mid c_s) = \prod_{w \in s} \left[ \alpha\, p(w) + (1-\alpha)\, \frac{\exp(\langle v_w, \tilde{c}_s \rangle)}{Z} \right].$$

Let

$$f_w(\tilde{c}_s) = \log\left[ \alpha\, p(w) + (1-\alpha)\, \frac{\exp(\langle v_w, \tilde{c}_s \rangle)}{Z} \right]$$

denote the log likelihood contributed by word w. Then

$$\nabla f_w(\tilde{c}_s) = \frac{1}{\alpha\, p(w) + (1-\alpha) \exp(\langle v_w, \tilde{c}_s \rangle)/Z} \cdot \frac{1-\alpha}{Z}\, \exp(\langle v_w, \tilde{c}_s \rangle)\, v_w.$$

Then by Taylor expansion,

$$f_w(\tilde{c}_s) \approx f_w(0) + \nabla f_w(0)^\top \tilde{c}_s = \text{constant} + \frac{(1-\alpha)/(\alpha Z)}{p(w) + (1-\alpha)/(\alpha Z)}\, \langle v_w, \tilde{c}_s \rangle.$$

Therefore, the maximum likelihood estimator for $\tilde{c}_s$ on the unit sphere (ignoring normalization) is approximately³

$$\arg\max \sum_{w \in s} f_w(\tilde{c}_s) \propto \sum_{w \in s} \frac{a}{p(w) + a}\, v_w, \quad \text{where } a = \frac{1-\alpha}{\alpha Z}. \qquad (3)$$

That is, the MLE is approximately a weighted average of the vectors of the words in the sentence. Note that for more frequent words w, the weight $a/(p(w) + a)$ is smaller, so this naturally leads to a down-weighting of the frequent words.

To estimate $c_s$, we estimate the direction $c_0$ by computing the first principal component of the $\tilde{c}_s$'s for a set of sentences.⁴ In other words, the final sentence embedding is obtained by subtracting the projection of the $\tilde{c}_s$'s onto their first principal component. This is summarized in Algorithm 1 (a sketch follows below).

²We empirically discovered the significant common component $c_0$ in word vectors built by existing methods, which inspired us to propose the theoretical model of this paper.
³Note that $\arg\max_{c:\|c\|=1} C + \langle c, g \rangle = g/\|g\|$ for any constant C.
⁴Here the first principal component is computed without centralizing the $\tilde{c}_s$'s.
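The following is a minimal Python sketch of the two-step procedure summarized in Algorithm 1 (which is not reproduced in this extraction). `word_vec` and `word_prob` are assumed lookups (e.g., GloVe vectors and corpus-estimated frequencies); per footnote 4, the first principal component is computed without centering.

```python
# SIF sentence embeddings: weighted average, then common component removal.
import numpy as np

def sif_embeddings(sentences, word_vec, word_prob, a=1e-3):
    # Step 1: smooth inverse frequency average with weights a / (a + p(w)).
    X = np.stack([
        np.mean([a / (a + word_prob[w]) * word_vec[w] for w in s], axis=0)
        for s in sentences
    ])
    # Step 2: common component removal; SVD without centering, as in
    # footnote 4, gives the first singular vector c0.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    c0 = vt[0]
    return X - np.outer(X @ c0, c0)   # subtract each row's projection onto c0
```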
Figure 1: The subsampling probabilities in word2vec are similar to our weighting scheme (both weightings are plotted against word frequency; our weighting uses a = 0.0001).

Word2vec (Mikolov et al., 2013b) uses a sub-sampling technique which downsamples word w with probability proportional to $1/\sqrt{p(w)}$, where p(w) is the marginal probability of the word w. This heuristic not only speeds up the training but also learns more regular word representations. Here we explain that this corresponds to an implicit reweighting of the word vectors in the model, and therefore the statistical benefit should be of no surprise.

Recall the vanilla CBOW model of word2vec:

$$\Pr[w_t \mid w_{t-1}, \ldots, w_{t-5}] \propto \exp(\langle u_t, v_{w_t} \rangle), \quad \text{where } u_t = \frac{1}{5}\sum_{i=1}^{5} v_{w_{t-i}}.$$

It can be shown that the gradient of the loss (MLE) for the single word vector $v_w$ (from this occurrence) can be abstractly written in the form

$$\nabla g(v_w) = \gamma(\langle u_t, v_w \rangle)\, u_t = \alpha\left(v_{w_{t-5}} + v_{w_{t-4}} + v_{w_{t-3}} + v_{w_{t-2}} + v_{w_{t-1}}\right),$$

and, once the sub-sampling is taken into account (each context word $w_{t-i}$ is kept with probability $q(w_{t-i})$), the expected gradient takes the form

$$\nabla g(v_w) = \alpha\left(q(w_{t-5})\, v_{w_{t-5}} + q(w_{t-4})\, v_{w_{t-4}} + q(w_{t-3})\, v_{w_{t-3}} + q(w_{t-2})\, v_{w_{t-2}} + q(w_{t-1})\, v_{w_{t-1}}\right).$$

In fact, this expected gradient corresponds to the gradient of a modified word2vec model with a weighted average, which shares the same form as what we derive from our random walk model in equation (3). Moreover, the weighting q(w) closely tracks our weighting scheme a/(a + p(w)) when using parameter $a = 10^{-4}$; see Figure 1 for an illustration. Therefore, the expected gradient here is approximately the estimated discourse vector in our model! Thus, word2vec with the sub-sampling gradient heuristic corresponds to a stochastic gradient update method for our weighting scheme.
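For illustration, the comparison in Figure 1 can be sketched numerically as below. Note that the word2vec keep-probability here uses the commonly cited simplification $q(p) = \min(1, \sqrt{t/p})$ with threshold $t = 10^{-5}$; the exact expression in the word2vec code differs slightly, so this is an approximation rather than the paper's formula.

```python
# Compare the SIF weight a/(a+p) with an approximate word2vec keep-probability.
import numpy as np

p = np.logspace(-10, 0, 11)                 # word frequencies
sif_weight = 1e-4 / (1e-4 + p)              # our weighting with a = 1e-4
w2v_keep = np.minimum(1.0, np.sqrt(1e-5 / p))  # assumed simplification
for pi, s, q in zip(p, sif_weight, w2v_keep):
    print(f"p={pi:.0e}  sif={s:.3f}  w2v={q:.3f}")
```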
"}, {"section_index": "3", "section_name": "4.1 TEXTUAL SIMILARITY TASKS", "section_text": "Datasets. We test our methods on 22 textual similarity datasets, including all the datasets from the SemEval semantic textual similarity (STS) tasks (2012-2015) (Agirre et al., 2012; 2013; 2014; 2015), the SemEval 2015 Twitter task (Xu et al., 2015), and the SemEval 2014 Semantic Relatedness task (Marelli et al., 2014). The objective of these tasks is to predict the similarity between two given sentences. The evaluation criterion is the Pearson's coefficient between the predicted scores and the ground-truth scores.

Table 1: Experimental results (Pearson's r x 100) on textual similarity tasks. The highest score in each row is in boldface. The methods can be supervised (denoted as Su.), semi-supervised (Se.) or unsupervised (Un.). "GloVe+WR" stands for the sentence embeddings obtained by applying our method to the GloVe word vectors; "PSL+WR" is for PSL word vectors. See the main text for the description of the methods. Results in the columns PP through avg-PSL are collected from (Wieting et al., 2016), except tfidf-GloVe; GloVe+WR and PSL+WR are our approach. PP through LSTM are Su.; ST, avg-GloVe, tfidf-GloVe and GloVe+WR are Un.; avg-PSL and PSL+WR are Se.

Tasks       PP    PP-proj. DAN   RNN   iRNN  LSTM(no) LSTM(o.g.) ST    avg-GloVe tfidf-GloVe avg-PSL GloVe+WR PSL+WR
STS'12      58.7  60.0     56.0  48.1  58.4  51.0     46.4       30.8  52.5      58.7        52.8    56.2     59.5
STS'13      55.8  56.8     54.2  44.7  56.7  45.2     41.5       24.8  42.3      52.1        46.4    56.6     61.8
STS'14      70.9  71.3     69.5  57.7  70.9  59.8     51.5       31.4  54.2      63.8        59.5    68.5     73.5
STS'15      75.8  74.8     72.7  57.2  75.6  63.9     56.0       31.0  52.7      60.6        60.0    71.7     76.3
SICK'14     71.6  71.6     70.7  61.2  71.2  63.9     59.0       49.8  65.9      69.4        66.4    72.2     72.9
Twitter'15  52.9  52.8     53.7  45.1  52.9  47.6     36.1       24.7  30.3      33.8        36.3    48.0     49.0

1. Unsupervised: ST, avg-GloVe, tfidf-GloVe. ST denotes the skip-thought vectors (Kiros et al., 2015), avg-GloVe denotes the unweighted average of the GloVe vectors (Pennington et al., 2014), and tfidf-GloVe denotes the weighted average of GloVe vectors using TF-IDF weights.
2. Semi-supervised: avg-PSL. This method uses the unweighted average of the PARAGRAM-SL999 (PSL) word vectors from (Wieting et al., 2015). The word vectors are trained using labeled data, but the sentence embeddings are computed by unweighted average without training.
3. Supervised: PP, PP-proj., DAN, RNN, iRNN, LSTM (o.g.), LSTM (no). All these methods are initialized with PSL word vectors and then trained on the PPDB dataset. PP and PP-proj. are proposed in (Wieting et al., 2016). The first is an average of the word vectors, and the second additionally adds a linear projection. The word vectors are updated during the training. DAN denotes the deep averaging network of (Iyyer et al., 2015). RNN denotes the classical recurrent neural network, and iRNN denotes a variant with the activation being the identity, and the weight matrices initialized to identity. The LSTM is the version from (Gers et al., 2002), either with output gates (denoted as LSTM (o.g.)) or without (denoted as LSTM (no)).

Our method can be applied to any type of word embeddings. So we denote the sentence embeddings obtained by applying our method to word embedding method "XXX" as "XXX+WR".⁶ To get a completely unsupervised method, we apply it to the GloVe vectors, denoted as GloVe+WR. The weighting parameter a is fixed to $10^{-3}$, and the word frequencies p(w) are estimated from the commoncrawl dataset.⁷ This is denoted by GloVe+WR in Table 1. We also apply our method on the PSL vectors, denoted as PSL+WR, which is a semi-supervised method.

Figure 2: Effect of the weighting scheme in our method on the average performance on the STS 2012 tasks. Best viewed in color. (a) Performance vs. weighting parameter a. Three types of word vectors (PSL, GloVe, SN) are tested using p(w) estimated on the enwiki dataset. The best performance is usually achieved at $a = 10^{-3}$ to $a = 10^{-4}$. (b) Performance vs. datasets used for estimating p(w). Four datasets (enwiki, poliblogs, commoncrawl, text8) are used to estimate p(w), which is then used in our method. The parameter a is fixed to $10^{-3}$. The performance is almost the same for the different settings.

Results. The results are reported in Table 1. Each year there are 4 to 6 STS tasks. For clarity, we only report the average result for the STS tasks each year; the detailed results are in the appendix.

The unsupervised method GloVe+WR improves upon avg-GloVe significantly by 10% to 30%, and beats the baselines by large margins. It achieves better performance than LSTM and RNN and is comparable to DAN, even though the latter three use supervision.
This demonstrates the power of this simple method: it can be even stronger than highly tuned, supervisedly trained sophisticated models. Using the TF-IDF weighting scheme also improves over the unweighted average, but not as much as our method.

The semi-supervised method PSL+WR achieves the best results on four out of the six tasks and is comparable to the best on the remaining two. Overall, it outperforms the avg-PSL baseline and all the supervised models initialized with the same PSL vectors. This demonstrates the advantage of our method over the training used for those models.

We study the sensitivity of our method to the weighting parameter a, the method for computing word vectors, and the estimated word probabilities p(w). First, we test the performance of three types of word vectors (PSL, GloVe, and SN) on the STS 2012 tasks. SN vectors are trained on the enwiki dataset (Wikimedia, 2012) using the method in (Arora et al., 2016), while the PSL and GloVe vectors are those used in Table 1. We enumerate $a \in \{10^{-i}, 3 \times 10^{-i} : 1 \le i \le 5\}$ and use the p(w) estimated on the enwiki dataset. Figure 2a shows that for all three kinds of word vectors, a wide range of a leads to significantly improved performance over the unweighted average. Best performance occurs from $a = 10^{-3}$ to $a = 10^{-4}$.

Next, we fix $a = 10^{-3}$ and use four very different datasets to estimate p(w): enwiki (Wikipedia, 3 billion tokens), poliblogs (Yano et al., 2009) (political blogs, 5 million), commoncrawl (Buck et al., 2014) (Internet crawl, 800 billion), and text8 (Mahoney, 2008) (wiki subset, 1 million). Figure 2b shows that performance is almost the same for all four settings.

The fact that our method can be applied to different types of word vectors trained on different corpora also suggests it should be useful across different domains. This is especially important for unsupervised methods, since the available unlabeled data may be collected in a different domain from the target application.

We also note that the top singular vectors $c_0$ of the datasets seem to roughly correspond to syntactic information or common words. For example, the closest words (by cosine similarity) to $c_0$ in the SICK dataset are "just", "when", "even", "one", "up", "little", "way", "there", "while", and "but" (a sketch of this check follows below).

Finally, in the appendix, we show that our two ideas both contribute to the improvement: for GloVe vectors, using smooth inverse frequency weighting alone improves over the unweighted average by about 5%, using common component removal alone improves by 10%, and using both improves by 13%.

⁷It is possible to tune the parameter a to get better results. The effect of a and the corpus for estimating word frequencies are studied in Section 4.1.
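A sketch of the check mentioned above: find the words closest (by cosine similarity) to the common direction $c_0$ of a dataset. Here `X` is the matrix of weighted-average sentence embeddings for the dataset and `word_vec` an assumed word-to-vector lookup; both names are illustrative.

```python
# Nearest words to the top singular vector c0 of the sentence embeddings.
import numpy as np

def closest_words_to_c0(X, word_vec, k=10):
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    c0 = vt[0] / np.linalg.norm(vt[0])
    words = list(word_vec)
    V = np.stack([word_vec[w] for w in words])
    sims = V @ c0 / (np.linalg.norm(V, axis=1) + 1e-12)  # cosine similarity
    return [words[i] for i in np.argsort(-sims)[:k]]
```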
"}, {"section_index": "4", "section_name": "4.2 SUPERVISED TASKS", "section_text": "The sentence embeddings obtained by our method can be used as features for downstream supervised tasks. We consider three tasks: the SICK similarity task, the SICK entailment task, and the Stanford Sentiment Treebank (SST) binary classification task (Socher et al., 2013). To highlight the representation power of the sentence embeddings learned unsupervisedly, we fix the embeddings and only learn the classifier. The setup of the supervised tasks mostly follows (Wieting et al., 2016) to allow fair comparison, i.e., the classifier is a linear projection followed by the classifier in (Kiros et al., 2015). The linear projection maps the sentence embeddings into 2400 dimensions (the same as the skip-thought vectors), and is learned during the training. We compare our method to PP, DAN, RNN, and LSTM, which are the methods used in Section 4.1. We also compare to the skip-thought vectors (with improved training in (Lei Ba et al., 2016)).

Table 2: Results on similarity, entailment, and sentiment tasks. The sentence embeddings are computed unsupervisedly, and then used as features in downstream supervised tasks. The row for similarity (SICK) shows Pearson's r x 100 and the last two rows show accuracy. The highest score in each row is in boldface. Results in Columns 2 to 6 are collected from (Wieting et al., 2016), and those in Column 7 for skip-thought are from (Lei Ba et al., 2016).

Results. Our method gets better or comparable performance compared to the competitors. It gets the best results for two of the tasks. This demonstrates the power of our simple method. We emphasize that our embeddings are learned unsupervisedly, while DAN, RNN, and LSTM are trained with supervision. Furthermore, the skip-thought vectors are of much higher dimension than ours (though projected into a higher dimension, the original 300-dimensional embeddings contain all the information).

The advantage is not as significant as in the textual similarity tasks. This is possibly because similarity tasks rely directly upon cosine similarity, which favors our method's approach of removing the common components (which can be viewed as a form of denoising), while in supervised tasks, with the cost of some label information, the classifier can pick out the useful components and ignore the common ones.

Finally, we speculate that our method doesn't outperform RNNs and LSTMs on sentiment tasks because (a) the word vectors, and more generally the distributional hypothesis of meaning, have known limitations for capturing sentiment due to the "antonym problem", and (b) in our weighted average scheme, words like "not" that may be important for sentiment analysis are downweighted a lot. To address (a), there is existing work on learning better word embeddings for sentiment analysis (e.g., (Maas et al., 2011)). To address (b), it is possible to design a weighting scheme (or learn weights) for this specific task.

An interesting feature of our method is that it ignores word order. This is in contrast to RNNs and LSTMs, which can potentially take advantage of word order. The fact that our method achieves better or comparable performance on these benchmarks raises the following question: is word order not important in these benchmarks? We conducted an experiment suggesting that word order does play some role (a sketch of the shuffling control follows below).

⁸On the random-order datasets, the hyperparameters are enumerated as in the other experiments, and the best results are reported.
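A sketch of the word-order control used for Table 3: shuffle the words of every sentence independently while keeping labels fixed. The (tokens, label) dataset format is an assumption for illustration.

```python
# Destroy word order within each sentence, leaving labels untouched.
import random

def shuffle_word_order(dataset, seed=0):
    rng = random.Random(seed)
    shuffled = []
    for tokens, label in dataset:
        tokens = list(tokens)
        rng.shuffle(tokens)
        shuffled.append((tokens, label))
    return shuffled
```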
Table 3: Comparison of results on the original datasets and on datasets with words randomly shuffled within sentences. The rows labeled "original" are the results on the original datasets, and those labeled "random" are the results on the randomly shuffled datasets. The row for similarity (SICK) shows Pearson's r x 100 and the other rows show accuracy.

Dataset                      RNN    LSTM (no)  LSTM (o.g.)
similarity (SICK)  original  73.13  85.45      83.41
                   random    54.50  77.24      79.39
entailment (SICK)  original  76.4   83.2       82.0
                   random    61.7   78.2       81.0
sentiment (SST)    original  86.5   86.6       89.2
                   random    84.2   82.9       84.1

We trained and tested RNN/LSTM on the supervised tasks where the words in each sentence are randomly shuffled; the results are reported in Table 3.⁸ It can be observed that the performance drops noticeably. Thus our method, which ignores word order, must be much better at exploiting the semantics than RNNs and LSTMs. An interesting future direction is to explore whether some ensemble idea can combine the advantages of both approaches.

This work provided a simple approach to sentence embedding, based on the discourse vectors in the random walk model for generating text (Arora et al., 2016). It is simple and unsupervised, but achieves significantly better performance than baselines on various textual similarity tasks, and can even beat sophisticated supervised methods such as some RNN and LSTM models. The sentence embeddings obtained can be used as features in downstream supervised tasks, which also leads to better or comparable results compared to the sophisticated methods.

This work was supported in part by NSF grants CCF-1527371 and DMS-1317308, a Simons Investigator Award, a Simons Collaboration Grant, and ONR N00014-16-1-2329. Tengyu Ma was supported in addition by the Simons Award in Theoretical Computer Science and an IBM PhD Fellowship."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pp. 385-393. Association for Computational Linguistics, 2012.

Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics, 2013.

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pp. 81-91, 2014.

Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, et al. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pp. 252-263, 2015.

Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. A latent variable model approach to PMI-based word embeddings. Transactions of the Association for Computational Linguistics, 2016.

Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 2003.

Tatsunori B. Hashimoto, David Alvarez-Melis, and Tommi S. Jaakkola. Word embeddings as metric recovery in semantic spaces. Transactions of the Association for Computational Linguistics, 2016.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

J. Lei Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. ArXiv e-prints, 2016.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pp. 142-150. Association for Computational Linguistics, 2011.
Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, 2013a.

Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2013b.

Jeff Mitchell and Mirella Lapata. Composition in distributional models of semantics. Cognitive Science, 2010.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Proceedings of the Empirical Methods in Natural Language Processing, 2014.

Stephen Robertson. Understanding inverse document frequency: on theoretical arguments for IDF. Journal of Documentation, 2004.

Richard Socher, Eric H. Huang, Jeffrey Pennington, Christopher D. Manning, and Andrew Y. Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, 2011.

Karen Sparck Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 1972.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.

Sida Wang and Christopher D. Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, pp. 90-94. Association for Computational Linguistics, 2012.

Yashen Wang, Heyan Huang, Chong Feng, Qiang Zhou, Jiahui Gu, and Xiong Gao. CSE: Conceptual sentence embeddings based on attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2016.

John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu, and Dan Roth. From paraphrase database to compositional paraphrase model and back. Transactions of the Association for Computational Linguistics, 2015.

John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Towards universal paraphrastic sentence embeddings. In International Conference on Learning Representations, 2016.

Wikimedia. English Wikipedia dump. http://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2, 2012. Accessed Mar-2015.

Tae Yano, William W. Cohen, and Noah A. Smith. Predicting response to political blog posts with topic models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2009.

The competitors. We give a brief overview of the competitors. RNN is the classical recurrent neural network:

$$h_t = f(W_x x_t + W_h h_{t-1} + b),$$

where f is the activation, $W_x$, $W_h$ and b are parameters, and $x_t$ is the t-th token in the sentence. The sentence embedding of the RNN is just the hidden vector of the last token. iRNN is a special RNN with the activation being the identity, the weight matrices initialized to identity, and b initialized to zero. LSTM (Hochreiter & Schmidhuber, 1997) is a recurrent neural network architecture designed to capture long-distance dependencies. Here, the version from (Gers et al., 2002) is used.
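A minimal numpy sketch of the RNN sentence encoder defined above; the dimensions, tanh activation, and random initialization are illustrative. For iRNN, one would take f to be the identity, $W_h$ the identity matrix, and b zero.

```python
# h_t = f(W_x x_t + W_h h_{t-1} + b); the last hidden state is the embedding.
import numpy as np

def rnn_embed(tokens, W_x, W_h, b, f=np.tanh):
    h = np.zeros(W_h.shape[0])
    for x_t in tokens:                 # x_t: word vector of the t-th token
        h = f(W_x @ x_t + W_h @ h + b)
    return h                           # sentence embedding

rng = np.random.default_rng(0)
d, dh = 300, 100
sent = [rng.normal(size=d) for _ in range(7)]
emb = rnn_embed(sent, rng.normal(size=(dh, d)) * 0.01, np.eye(dh), np.zeros(dh))
```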
The supervised methods are initialized with PARAGRAM-SL999 (PSL) vectors, and trained using the approach of (Wieting et al., 2016) on the XL section of the PPDB data (Pavlick et al., 2015), which contains about 3 million unique phrase pairs. After training, the final models can be used to generate sentence embeddings on the test data. For hyperparameter tuning they used 100k examples sampled from PPDB XXL and trained for 5 epochs. Then, after finding the hyperparameters that maximize Spearman's coefficients on the Pavlick et al. PPDB task, they are trained on the entire XL section of PPDB for 10 epochs. See (Wieting et al., 2016) and related papers for more details about these methods.

The tfidf-GloVe method is a weighted average of the GloVe vectors, where the weights are defined by the TF-IDF scheme. More precisely, the embedding of a sentence s is

$$v_s = \frac{1}{|s|} \sum_{w \in s} \mathrm{IDF}_w \, v_w,$$

where $\mathrm{IDF}_w$ is the inverse document frequency of w, and |s| denotes the number of words in the sentence. Here, the TF part of the TF-IDF scheme is taken into account by the sum over $w \in s$. Furthermore, when computing $\mathrm{IDF}_w$, each sentence is viewed as a "document":

$$\mathrm{IDF}_w := \log \frac{1 + N}{1 + N_w},$$

where N is the total number of sentences and $N_w$ is the number of sentences containing w; 1 is added to avoid division by 0. In the experiments, we use all the textual similarity datasets to compute $\mathrm{IDF}_w$. (A sketch of this baseline follows after Table 4.)

Detailed experimental results. In the main body we present the average results for the STS tasks by year. Each year there are actually 4 to 6 STS tasks, as shown in Table 4. Note that tasks with the same name in different years are actually different tasks. Here we provide the results for each task in Table 5. PSL+WR achieves the best results on 12 out of 22 tasks; PP and PP-proj. achieve the best on 3; tfidf-GloVe achieves the best on 2; and DAN, iRNN, and GloVe+WR each achieve the best on 1. In general, our method improves the performance significantly compared to the unweighted average, though in rare cases such as MSRpar it can decrease the performance.

Table 4: The STS tasks by year. Note that tasks with the same name in different years are actually different tasks.

STS'12    STS'13    STS'14      STS'15
MSRpar    headline  deft forum  answers-forums
MSRvid    OnWN      deft news   answers-students
SMT-eur   FNWN      headline    belief
OnWN      SMT       images      headline
SMT-news            OnWN        images
                    tweet news
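A sketch of the tfidf-GloVe baseline described above, treating each sentence as a "document" when computing IDF. `word_vec` is an assumed word-to-vector lookup.

```python
# tfidf-GloVe: IDF-weighted average of word vectors per sentence.
import math
import numpy as np

def tfidf_glove(sentences, word_vec):
    n = len(sentences)
    n_w = {}                                   # number of sentences containing w
    for s in sentences:
        for w in set(s):
            n_w[w] = n_w.get(w, 0) + 1
    idf = {w: math.log((1 + n) / (1 + c)) for w, c in n_w.items()}
    return np.stack([
        sum(idf[w] * word_vec[w] for w in s) / len(s) for s in sentences
    ])
```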
Table 5: Experimental results (Pearson's r x 100) on textual similarity tasks. The highest score in each row is in boldface. The methods can be supervised (denoted as Su.), semi-supervised (Se.) or unsupervised (Un.). "GloVe+WR" stands for the sentence embeddings obtained by applying our method to the GloVe word vectors; "PSL+WR" is for PSL word vectors. See the main text for the description of the methods. Results in the columns PP through avg-PSL are collected from (Wieting et al., 2016), except tfidf-GloVe; GloVe+WR and PSL+WR are our approach.

Tasks            PP    PP-proj. DAN   RNN   iRNN  LSTM(no) LSTM(o.g.) ST    avg-GloVe tfidf-GloVe avg-PSL GloVe+WR PSL+WR
MSRpar           42.6  43.7     40.3  18.6  43.4  16.1     9.3        16.8  47.7      50.3        41.6    35.6     43.3
MSRvid           74.5  74.0     70.0  66.5  73.4  71.3     71.3       41.7  63.9      77.9        60.0    83.8     84.1
SMT-eur          47.3  49.4     43.8  40.9  47.1  41.8     44.3       35.2  46.0      54.7        42.4    49.9     44.8
OnWN             70.6  70.1     65.9  63.1  70.1  65.2     56.4       29.7  55.1      64.7        63.0    66.2     71.8
SMT-news         58.4  62.8     60.0  51.3  58.1  60.8     51.0       30.8  49.6      45.7        57.0    45.6     53.6
STS'12           58.7  60.0     56.0  48.1  58.4  51.0     46.4       30.8  52.5      58.7        52.8    56.2     59.5
headline         72.4  72.6     71.2  59.5  72.8  57.4     48.5       34.6  63.8      69.2        68.8    69.2     74.1
OnWN             67.7  68.0     64.1  54.6  69.4  68.5     50.4       10.0  49.0      72.9        48.0    82.8     82.0
FNWN             43.9  46.8     43.1  30.9  45.3  24.7     38.4       30.4  34.2      36.6        37.9    39.4     52.4
SMT              39.2  39.8     38.3  33.8  39.4  30.1     28.8       24.3  22.3      29.6        31.0    37.9     38.5
STS'13           55.8  56.8     54.2  44.7  56.7  45.2     41.5       24.8  42.3      52.1        46.4    56.6     61.8
deft forum       48.7  51.1     49.0  41.5  49.0  44.2     46.1       12.9  27.1      37.5        37.2    41.2     51.4
deft news        73.1  72.2     71.7  53.7  72.4  52.8     39.1       23.5  68.0      68.7        67.0    69.4     72.6
headline         69.7  70.8     69.2  57.5  70.2  57.5     50.9       37.8  59.5      63.7        65.3    64.7     70.1
images           78.5  78.1     76.9  67.6  78.2  68.5     62.9       51.2  61.0      72.5        62.0    82.6     84.8
OnWN             78.8  79.5     75.7  67.7  78.8  76.9     61.7       23.3  58.4      75.2        61.1    82.8     84.5
tweet news       76.4  75.8     74.2  58.0  76.9  58.7     48.2       39.9  51.2      65.1        64.7    70.1     77.5
STS'14           70.9  71.3     69.5  57.7  70.9  59.8     51.5       31.4  54.2      63.8        59.5    68.5     73.5
answers-forum    68.3  65.1     62.6  32.8  67.4  51.9     50.7       36.1  30.5      45.6        38.8    63.9     70.1
answers-student  78.2  77.8     78.1  64.7  78.2  71.5     55.7       33.0  63.0      63.9        69.2    70.4     75.9
belief           76.2  75.4     72.0  51.9  75.9  61.7     52.6       24.6  40.5      49.5        53.2    71.8     75.3
headline         74.8  75.2     73.5  65.3  75.1  64.0     56.6       43.6  61.8      70.9        69.0    70.7     75.9
images           81.4  80.3     77.5  71.4  81.1  70.4     64.2       17.7  67.5      72.9        69.9    81.5     84.1
STS'15           75.8  74.8     72.7  57.2  75.6  63.9     56.0       31.0  52.7      60.6        60.0    71.7     76.3
SICK'14          71.6  71.6     70.7  61.2  71.2  63.9     59.0       49.8  65.9      69.4        66.4    72.2     72.9
Twitter'15       52.9  52.8     53.7  45.1  52.9  47.6     36.1       24.7  30.3      33.8        36.3    48.0     49.0

Effects of smooth inverse frequency and common component removal. There are two key ideas in our method: smooth inverse frequency weighting (W) and common component removal (R). It is instructive to see their effects separately. Let GloVe+W denote the embeddings using only smooth inverse frequency, and GloVe+R denote those using only common component removal. Similarly define PSL+W and PSL+R. The results for these methods are shown in Table 6. When using GloVe vectors, W alone improves the performance of the unweighted average baseline by about 5%, R alone improves by 10%, and W and R together improve by 13%. When using PSL vectors, W gets 10%, R gets 10%, and W and R together get 13%. In summary, both techniques are important for obtaining a significant advantage over the unweighted average."}, {"section_index": "6", "section_name": "A.2 SUPERVISED TASKS", "section_text": "Setup of the supervised tasks mostly follows (Wieting et al., 2016) to allow fair comparison: the sentence embeddings are fixed and fed into some classifier which is trained. For the SICK similarity task, given a pair of sentences with embeddings $v_L$ and $v_R$, we first do a linear projection:

$$h_L = W_p v_L, \qquad h_R = W_p v_R,$$

where $W_p$ is of size $300 \times d_h$ and is learned during training. $d_h = 2400$ matches the dimension of the skip-thought vectors. Then $h_L$ and $h_R$ are used in the objective function from (Tai et al., 2015). Almost the same approach is used for the entailment task. For the sentiment task, the classifier has a fully-connected layer with a sigmoid activation followed by a softmax layer.

Recall that our method has two steps: smooth inverse frequency weighting and common component removal. For the experiments here, we do not perform the common component removal, since it can be absorbed into the projection step. For the weighted average, the hyperparameter a is enumerated in $\{10^{-i}, 3 \times 10^{-i} : 2 \le i \le 3\}$.
The other hyperparameters are enumerated as in (Wieting et al., 2016), and the same validation approach is used to select the final values.
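A minimal PyTorch sketch of the evaluation pipeline described above for the SICK similarity task: fixed sentence embeddings, a learned linear projection to $d_h = 2400$ (matching skip-thought), and a classifier on top. The pair-feature construction and the classifier head are simplified placeholders rather than the exact objective of Tai et al. (2015).

```python
# Fixed sentence embeddings -> learned projection -> simple classifier head.
import torch
import torch.nn as nn

d, d_h = 300, 2400
proj = nn.Linear(d, d_h, bias=False)        # W_p, learned during training

def pair_features(v_l, v_r):
    h_l, h_r = proj(v_l), proj(v_r)         # h_L = W_p v_L, h_R = W_p v_R
    return torch.cat([h_l * h_r, (h_l - h_r).abs()], dim=-1)

head = nn.Linear(2 * d_h, 5)                # e.g., 5 similarity buckets (assumed)
scores = head(pair_features(torch.randn(d), torch.randn(d)))
```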
Table 6: Experimental results (Pearson's r x 100) on textual similarity tasks using only smooth inverse frequency weighting (W), using only common component removal (R), or using both (WR). The first four columns are unsupervised; the last four are semi-supervised.

Tasks            avg-GloVe GloVe+W GloVe+R GloVe+WR  avg-PSL PSL+W PSL+R PSL+WR
MSRpar           47.7      43.6    36.4    35.6      41.6    40.9  42.5  43.3
MSRvid           63.9      78.7    79.4    83.8      60.0    80.4  76.4  84.1
SMT-eur          46.0      51.1    48.5    49.9      42.4    45.0  45.1  44.8
OnWN             55.1      54.3    68.3    66.2      63.0    67.8  71.0  71.8
SMT-news         49.6      42.2    45.6    45.6      57.0    56.2  50.7  53.6
STS'12           52.5      54.0    55.6    56.2      52.8    58.1  57.2  59.5
headline         63.8      63.8    68.9    69.2      68.8    72.6  72.7  74.1
OnWN             49.0      68.0    75.4    82.8      48.0    69.8  73.5  82.0
FNWN             34.2      23.0    34.9    39.4      37.9    49.3  40.7  52.4
SMT              22.3      29.5    36.4    37.9      31.0    39.2  37.3  38.5
STS'13           42.3      44.0    53.9    56.6      46.4    57.7  56.0  61.8
deft forum       27.1      29.1    39.8    41.2      37.2    45.8  45.3  51.4
deft news        68.0      68.5    66.6    69.4      67.0    75.1  67.4  72.6
headline         59.5      59.3    64.6    64.7      65.3    68.9  68.5  70.1
images           61.0      74.1    78.4    82.6      62.0    82.9  80.2  84.8
OnWN             58.4      68.0    77.6    82.8      61.1    77.6  77.7  84.5
tweet news       51.2      57.3    73.2    70.1      64.7    73.6  77.9  77.5
STS'14           54.2      59.4    66.7    68.5      59.5    70.7  69.5  73.5
answers-forum    30.5      41.4    58.4    63.9      38.8    56.0  61.0  70.1
answers-student  63.0      61.5    73.2    70.4      69.2    73.3  76.8  75.9
belief           40.5      47.7    69.5    71.8      53.2    64.3  71.3  75.3
headline         61.8      64.0    70.1    70.7      69.0    74.5  74.6  75.9
images           67.5      75.4    77.9    81.5      69.9    83.4  79.9  84.1
STS'15           52.7      58.0    69.8    71.7      60.0    70.3  72.7  76.3
SICK'14          65.9      70.5    70.6    72.2      66.4    73.1  70.3  72.9
Twitter'15       30.3      33.8    50.6    48.0      36.3    45.7  51.9  49.0

Here we report the experimental results on two more datasets, comparing to known results on them.

SNLI. The first experiment is the 3-class classification task on the SNLI dataset (Bowman et al., 2015). To compare to the results in (Bowman et al., 2015), we used their experimental setup. In particular, we applied our method to 300-dimensional GloVe vectors and used an additional tanh neural network layer to map these 300d embeddings into a 100-dimensional space, then used the code provided by the authors of (Bowman et al., 2015) and trained the classifier on our 100-dimensional sentence embeddings for 120 passes over the data, using their default hyperparameters. The results are shown in Table 7. Our method indeed gets slightly better performance.

Our test accuracy is worse than those using more sophisticated models (e.g., using attention mechanisms), which are typically 83%-88%; see the website of the SNLI project for a summary. An interesting direction is to study whether our idea can be combined with these sophisticated models to get improved performance.

Table 7: Accuracy in 3-class classification on the SNLI dataset for each model. The results in the first three rows are collected from (Bowman et al., 2015). All methods used 100-dimensional sentence embeddings.

Sentence model      Train  Test
100d Sum of words   79.3   75.3
100d RNN            73.1   72.2
100d LSTM RNN       84.8   77.6
Our method          83.9   78.2

IMDB. The second experiment is the sentiment analysis task on the IMDB dataset, studied in (Wang & Manning, 2012). Since the intended application is semi-supervised or transfer learning, we also compared performance with fewer labeled examples.

Table 8: Accuracy in sentiment analysis on the IMDB dataset for NB-SVM (Wang & Manning, 2012) and our method.

# labeled examples  NB-SVM  Our method
50k                 0.91    0.85
1k                  0.84    0.82
200                 0.73    0.77

Our method gets worse performance on the full dataset, but its decrease in performance is smaller with fewer labeled examples, showing the benefit of using word embeddings. Note that our sentence embeddings are unsupervised, while the NB-SVM method takes advantage of the labels. Another comment is that sentiment analysis appears to be the best case for bag-of-words methods, whereas it may be the worst case for word embedding methods (see Table 2), due to the well-known antonymy problem: the distributional hypothesis fails for distinguishing "good" from "bad.""}]
Bk2TqVcxe | [{"section_index": "0", "section_name": "DISCOVERING OBJECTS AND THEIR RELATIONS FROM ENTANGLED SCENE REPRESENTATIONS", "section_text": "Our world can be succinctly and compactly described as structured scenes of ob-. jects and relations. A typical room, for example, contains salient objects such. as tables, chairs and books, and these objects typically relate to each other by. virtue of their correlated features, such as position, function and shape. Humans. exploit knowledge of objects and their relations for learning a wide spectrum of. tasks, and more generally when learning the structure underlying observed data. In this work, we introduce relation networks (RNs) - a general purpose neural. network architecture for object-relation reasoning. We show that RNs are capable. of learning object relations from scene description data. Furthermore, we show. that RNs can act as a bottleneck that induces the factorization of objects from entangled scene description inputs, and from distributed deep representations of. scene images provided by a variational autoencoder. The model can also be used. in conjunction with differentiable memory mechanisms for implicit relation dis-. covery in one-shot learning tasks. Our results suggest that relation networks are a. powerful architecture for solving a variety of problems that require object relation. reasoning."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The ability to reason about objects and relations is important for solving a wide variety of tasks (Spelke et al.][1992] Lake et al.]2016). For example, object relations enable the transfer of learned knowledge across superficial (dis)similarities (Tenenbaum et al.[ 2011): the predator-prey relation ship between a lion and a zebra is knowledge that is similarly useful when applied to a bear and a salmon, even though many features of these animals are very different.\nIn this work, we introduce a neural network architecture for learning to reason about - or model. - objects and their relations, which we call relation networks (RNs). RNs accomplish their goal. by adhering to a few fundamental design principles. Firstly, RNs are designed to be invariant to permutations of object descriptions in their input. For example, RN representations of the object sel {table, chair, book} will be identical for arbitrary re-orderings of the elements of the set. Secondly. RNs are designed to learn relations across multiple objects rather than within a single object - a. basic defining property of object relations. This design principle manifests itself in the use of shared. computations across groups of objects. In designing the RN architecture, we took inspiration from. the recently developed Interaction Network (IN) (Battaglia et al.]2016) which is more generally. concerned with modelling temporal interactions..\nIn principle, a deep network with a sufficiently large number of parameters and a large enougl training set should be capable of matching the performance of a RN. However, such networks woul have to learn both the permutation invariance of objects and the relational structure of the objects ir the execution of a desired computation. This quickly becomes unfeasible as the number of objects and relations increase.\nDenotes equal contribution\nT. Lillicrap, P. 
Battaglia

{raposo, adamsantoro, barrettdavid, razp, countzero, peterbattaglia}@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "To test the ability of RNs to discover relations between objects, we turned to the classification of scenes, wherein classification boundaries were defined by the relational structure of the objects in the scenes. There are various ways of encoding scenes as observable data; for example, scene description data can consist of sets of co-occurring objects and their respective features (location, size, color, shape, etc.). A typical room scene, then, might consist of the object set {table, chair, lamp} and their respective descriptions (e.g., the table is large and red). Similar datasets have been used for many decades in cognitive psychology to study human relation learning (Palmer, 1975). Here we consider synthetic scene description data generated by hierarchical probabilistic generative models of objects and object features.

We begin by motivating RNs as a general purpose architecture for reasoning about object relations. Critically, we describe how RNs implement a permutation invariant computation on implicit groups of factored "objects." We then demonstrate the utility of RNs for classification of static scenes, where classification boundaries are defined by the relations between objects in the scenes. Next, we exploit RNs' implicit use of factored object representations to demonstrate that RNs can induce the factorization of objects from entangled scene inputs. Finally, we combine RNs with memory-augmented neural networks (MANNs) (Santoro et al., 2016) to solve a difficult one-shot learning task, demonstrating the ability of RNs to act in conjunction with other neural network architectures to rapidly discover new object relations from entirely new scenes."}, {"section_index": "3", "section_name": "2 MODEL", "section_text": "RNs are inspired by Interaction Networks (INs) (Battaglia et al., 2016), and therefore share similar functional insights. Both operate under the assumption that permutation invariance is a necessary requirement for problems with inputs that can be described with a graphical structure. However, INs use relations between objects as input to determine object interactions, mainly for the purpose of reasoning about dynamics. RNs compute object relations, and hence aim to determine objects' relational structure from static inputs.

Suppose we have an object, $o = (o^1, o^2, \ldots, o^n)$, represented as a vector of n features encoding properties such as the object's type, color, size, position, etc. A collection of m objects can be gathered into an $m \times n$ matrix D, called the scene description. Although the term scene description alludes to visual information, this need not be the case; scenes can be entirely abstract, as can the objects that constitute the scene, and the features that define the objects.

We can imagine tasks (see section 3) that depend on the relations, r, between objects. One such task is the discovery of the relations themselves. For example, returning to the predator and prey analogy, the predator-prey relation can be determined from relative features between two animals, such as their relative sizes, perhaps. Observation of the size of a single animal, then, does not inform whether this animal is a predator or prey to any other given animal, since its size necessarily needs to be compared to the sizes of other animals.

There are a number of possible functions that could allow for the discovery of object relations (figure 1). Consider a function $g_\psi$, with parameters $\psi$. The function $g_\psi$ can be defined to operate on a particular factorization of D; for example, $g_\psi(D)$, or $g_\psi(D_{1,2}, \ldots, D_{i,j}, \ldots, D_{m,n})$. We are interested in models defined by the composite function $f \circ g$, where f is a function that returns a prediction r.
One implementation of $g_\psi$ would process the entire contents of D without exploiting knowledge that the features in a particular row are related through their description of a common object, and would instead have to learn the appropriate parsing of inputs and any necessary sub-functions: $r = f_\phi(g_\psi(o_1, o_2, \ldots, o_m))$. An alternative approach would be to impose a prior on the parsing of the input space, such that g operates on objects directly: $r = f_\phi(g_\psi(o_1), g_\psi(o_2), \ldots, g_\psi(o_m))$. A third, middle-ground approach, which is the approach taken by RNs, recognizes that relations necessarily exist in the context of a set of objects that have some capacity to be related. Thus, the computation of relations should entail some common function across these sets. For example, $g_\psi$ may compute relations on pairs of objects.

For RNs, g is implemented as a multi-layered perceptron (MLP) that operates on pairs of objects. The same MLP operates on all possible pairings of objects from D, to produce summaries $s_{i,j}$. Depending on the complexity of the problem, any aggregation function a can be chosen to operate on the $s_{i,j}$. However, to ensure that the input to f is order invariant, the aggregation function should be commutative and associative, with summation being a natural choice. Thus,

$$r = f_\phi\big(a(s_{1,2}, s_{1,3}, \ldots, s_{m-1,m})\big) = f_\phi\Big(\sum_{i,j} s_{i,j}\Big) = f_\phi\Big(\sum_{i,j} g_\psi(o_i, o_j)\Big).$$

Training therefore entails the optimization of two MLPs: $f_\phi$ and $g_\psi$.

The order invariance of the aggregation function is a critical feature of the model, since without this invariance, the model would be unable to take advantage of the shared $g_\psi$ function that operates on all permuted pairs of objects.

Figure 1: Model types: (a) no prior, (b) object prior, (c) object pair prior. RNs are constructed to operate with an explicit prior on the input space (c): features from all pairwise combinations of objects act as input to the same MLP, $g_\psi$.
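A minimal PyTorch sketch of the RN computation above, with $g_\psi$ shared across all object pairs and summation as the order-invariant aggregation; the layer sizes and the use of unordered pairs are illustrative choices, not the paper's exact configuration.

```python
# r = f_phi( sum_{i,j} g_psi(o_i, o_j) ) over all object pairs of a scene D.
import itertools
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, n_feat, hidden=200, summary=200, n_out=16):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * n_feat, hidden), nn.ReLU(),
                               nn.Linear(hidden, summary))   # g_psi, shared
        self.f = nn.Sequential(nn.Linear(summary, hidden), nn.ReLU(),
                               nn.Linear(hidden, n_out))     # f_phi

    def forward(self, D):                      # D: (m objects, n features)
        pairs = torch.stack([torch.cat([D[i], D[j]]) for i, j in
                             itertools.combinations(range(D.size(0)), 2)])
        s = self.g(pairs).sum(dim=0)           # commutative aggregation
        return self.f(s)

rn = RelationNetwork(n_feat=10)
print(rn(torch.randn(16, 10)).shape)           # prediction r for one scene
```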
"}, {"section_index": "4", "section_name": "3.1 DATASETS", "section_text": "To probe a model's ability to both infer relations from scene descriptions and implicitly use relations to solve more difficult tasks, such as one-shot learning, we first developed datasets of scene descriptions and their associated images. To generate a scene description, we first defined a graph of object relations (see figure 2). For example, suppose there are four types of squares, with each type being identified by its color. A graph description of the relations between each colored square could identify the blue square as being a parent of the orange square. If the type of relation is "position," then this particular relation would manifest as blue squares being independently positioned in the scene, and orange squares positioned in close proximity to blue squares. Similarly, suppose we have triangles and circles as the object types, with color as the relation. If triangles are parents to circles, then the color of triangles in the scene will be randomly sampled, while the color of a circle will be derived from the color of its parent triangle. Datasets generated from graphs, then, impose solutions that depend on relative object features. That is, no information can be used from within object features alone, such as a particular coordinate position, or RGB color value.

Graphs define generative models that can be used to produce scenes. For scenes with position relations, root node coordinates $(o_x^p, o_y^p)$ were first randomly chosen in a bounded space. Children were then randomly assigned to a particular parent object, and their position was determined as a function of the parent's position: $(o_x, o_y) = (o_x^p + d\cos(\theta_c),\ o_y^p + d\sin(\theta_c))$. Here, $\theta_c \sim U(\theta^p - \pi/3,\ \theta^p + \pi/3)$, where $\theta^p$ is the angle computed for the parent; for root nodes, $\theta^p \sim U(0, 2\pi)$. d is a computed distance: $d = d_0 + d_1$, where $d_0$ is a minimum distance to prevent significant object overlap, and $d_1$ is sampled from a half-normal distribution. This same pattern, inherit features from parents and apply noise, is used to generate scenes from graphs that define other relations, such as color. For the case of color, the inherited features are RGB values. Ultimately, scene descriptions consisted of matrices with 16 rows, with each row describing the object type (four rows for each of four types), position, color, and size. (A sketch of this sampling process follows below.)

Custom datasets were required to both explicitly test solutions to the task of inferring object relations, and to actively control for solutions that do not depend on object relations. For example, consider the common scenario of child objects positioned close to a parent object, analogous to chairs positioned around a table. In this scenario, the count information of objects (i.e., that there are more child objects than parent objects) is non-relational information that can nonetheless be used to constrain the solution space; the prediction of the relation between objects doesn't entirely depend on explicitly computed relations, such as the relative distance between the child objects and the parent objects. To return to the kitchen scene analogy, one wouldn't need to know the relative distance of the chairs to the table; instead, knowing the number of chairs and the number of tables could inform the relation, if it is known that less frequently occurring objects tend to be parents to more frequently occurring objects. Although information like object count can be important for solving many tasks, here we explicitly sought to test models' abilities to compute and operate on object relations.

The datasets will be made freely available.

Figure 2: Objects and relations. Relation types (column one: position, color, size, or a mix) between object types (column two) can be described with directed graphs (column three). Shown in the fourth column are cropped clusters from example scenes generated by a model based on the directed graph shown in column three. In the last column are examples of relations that can be used to inform class membership; for example, the distances between pairs of objects, or the differences in color between pairs of objects, may inform the particular graphical structure, and hence generative model, used to generate the scene.
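A sketch of the position-relation sampling just described: a child inherits its position from an assumed parent, displaced by $d = d_0 + d_1$ ($d_1$ half-normal) at an angle within $\pi/3$ of the parent's angle. The value of $d_0$ and the half-normal scale are illustrative choices.

```python
# Sample a child's position from its parent's position and angle.
import numpy as np

rng = np.random.default_rng(0)

def sample_child(parent_xy, parent_theta, d0=0.1):
    theta = rng.uniform(parent_theta - np.pi / 3, parent_theta + np.pi / 3)
    d = d0 + abs(rng.normal(scale=0.2))        # half-normal displacement d1
    x = parent_xy[0] + d * np.cos(theta)
    y = parent_xy[1] + d * np.sin(theta)
    return (x, y), theta

root_xy = rng.uniform(0, 1, size=2)            # root position in bounded space
root_theta = rng.uniform(0, 2 * np.pi)
child_xy, _ = sample_child(root_xy, root_theta)
```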
"}, {"section_index": "5", "section_name": "3.2 TASK DESCRIPTIONS", "section_text": "The tasks on which we assessed the RN's performance fell into three categories. The first category involved the classification of scenes. In this task, the network was given a scene description as input, with the target being a binary vector description of the edges between object types (which constitute the nodes of the graph; see figure 2). Training data consisted of 5000 samples derived from 5, 10, or 20 unique classes (i.e., graphs), with testing data comprising withheld within-class samples. Although the target was a vector description of the generating graph, this task was fundamentally one of classification, and the use of one-hot vector labels could have constituted equivalent training. Nonetheless, since class membership can only be determined from the relational structure of the objects within a particular sample, the ability to classify subsumes the ability to infer the relations.

The second category of tasks tested the ability of the RN to classify scenes, and hence reason about object relations, from unstructured input domains (see figure 3). Since RNs operate on factored object representations, this task specifically probed the ability of RNs to induce the learning of object factorizations from entangled scene descriptions. In the first set of experiments we broke the highly structured scene description input by passing it through a fixed permutation matrix. To decode the entangled state we used a linear layer whose output provided the input to the RN. We then asked the network to classify scenes, as in the previous tasks. In the second set of experiments we pushed this idea further, and tested the ability of the RN to operate on distributed representations of image depictions of scenes. The images were passed through a variational autoencoder (VAE), and the latent variables were provided as input to a RN.

Figure 3: Scene entangling. To test the ability of the RN to operate on entangled scene representations, we multiplied a flattened vector representation of the scene description by a fixed permutation matrix B (panel a, entangled scene description), or passed image depictions of the scenes through a VAE and used the latent code as input to a RN with an additional linear layer (panel b, distributed latent representation).

The final category of tasks tested the implicit use of discovered relations to solve a difficult overarching problem: one-shot learning (Vinyals et al., 2016; Santoro et al., 2016; Lake et al., 2015). In the one-shot learning task, sequences of samples were fed to a memory-augmented neural network (MANN) with a relation network pre-processor. Sequences, or episodes, consisted of 50 random samples generated from five unique graphs, from a pool of 1900 total classes, presented jointly with time-offset label identifiers, as per Hochreiter et al. (2001) and Santoro et al. (2016). Critically, the labels associated with particular classes change from episode to episode. So, the network must depend on within-episode knowledge to perform the task; it must learn the particular label assigned to a class within an episode, and learn to assign this label to future samples of the same class, within the same episode. Labels are presented as input in a time-offset manner (that is, the correct label for the sample presented at time t is given as input to the network at time t + 1) to enable learning of an arbitrary binding procedure. Unique information from a sample, which is necessarily something pertaining to the relations of the contained objects, must be extracted, bound to its associated label, and stored in memory. Upon subsequent presentations of samples from this same class, the network must query its memory, and use stored information to infer class membership. There is a critical difference in this task compared to the first. In this task, the identifying class labels constantly change from episode to episode.
So, the network cannot simply encode mappings from certain learned relational structures to class labels. Instead, the only way the network can solve the task is to develop an ability to compare and contrast extracted relational structures between samples as they occur within an episode. Please see the appendix for more details on the task setup.

For inferring relations from latent representations of pixels we used a variational autoencoder (VAE) (Kingma & Welling, 2013) with a convolutional neural network (CNN) as the feature encoder and a deconvolutional network as the decoder (see figure 3b). The CNN consisted of two processing blocks. The input to each block was sent through four dimension-preserving parallel convolution streams using 8 kernels of size 1x1, 3x3, 5x5, and 7x7, respectively. The outputs from these convolutions were passed through a batch normalization layer and a rectified linear layer, and concatenated. This was then convolved again with a down-sampling kernel of size 3x3, halving the dimension size of the input, and, except for the final output layer, again passed through batch normalization and rectified linear layers. The entire CNN consisted of two of these blocks positioned serially. Therefore, input images of size 32x32 were convolved to feature maps of size 8x8. The feature decoder consisted of these same blocks, except convolution operations were replaced with deconvolution operations.

The final feature maps provided by the CNN feature encoder were then passed to a linear layer whose outputs constituted the observed variables x for the VAE. x, which was decomposed into $\mu$ and $\sigma$, was then used with an auxiliary Gaussian noise variable $\epsilon$ to infer the latent variables: $z = \mu + \epsilon\sigma$. These latent variables were then decoded to generate the reconstruction $\hat{x}$, as per the conventional implementation of VAEs. However, z was also fed as input to a linear layer, which projected it to a higher-dimensional space: the scene description D (see figure 3b). Importantly, this connection, from z to D, did not permit the backward flow of gradients to the VAE. This prevented the VAE architecture from contributing to the RN's disentangling solution.
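A minimal PyTorch sketch of the connection from the VAE latent z to the scene description D described above, with the gradient stopped so that the RN's disentangling pressure cannot shape the VAE; dimensions are illustrative.

```python
# Reparameterized latent z, then a gradient-stopped projection z -> D.
import torch
import torch.nn as nn

mu, log_sigma = torch.randn(1, 32), torch.randn(1, 32)   # from the encoder
eps = torch.randn_like(mu)
z = mu + eps * log_sigma.exp()               # z = mu + eps * sigma

to_scene = nn.Linear(32, 16 * 10)            # linear layer z -> D (assumed sizes)
D = to_scene(z.detach()).view(16, 10)        # detach(): no gradient to the VAE
```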
"}, {"section_index": "7", "section_name": "4.3 TRAINING DETAILS", "section_text": "The sizes of the RN - in terms of number of layers and number of units for both f_φ and g_ψ - were {200, 200}, {500, 500}, {1000, 1000}, or {200, 200, 200}. The MLP baseline models used equivalent sizes. We experimented with different sizes for the summary vector, which is the output from the RN. Performance is generally robust to the choice of size, with similar results emerging for 100, 200, or 500. The MANN used an LSTM controller size of 200, 128 memory slots, a memory word size of 40, and 4 read and write heads.

The Adam optimizer was used for optimization (Kingma & Ba, 2014), with a learning rate of 1e-4 for the scene description tasks, and a learning rate of 1e-5 for the one-shot learning task. The number of iterations varied for each experiment, and is indicated in the relevant figures. All figures show performance on a withheld test set, constituting 2-5% of the size of the training set. The number of training samples was 5000 per class for the scene description tasks, and 200 per class (for 100 classes) for the pixel disentangling experiment. We used minibatch training, with batch sizes of 100 for the scene description experiments, and 16 (with sequences of length 50) for the one-shot learning task."}, {"section_index": "8", "section_name": "5 RESULTS", "section_text": "Here, we trained RNs on variations of the classification task described in section 3.2 and contrasted their performance with that of MLPs of different sizes and depths. First, we compared the performance of these models on scenes where the relational structure was defined by position (figure 4). After 200,000 iterations the RNs reached a cross entropy loss of 0.01 on a withheld test set, with MLPs of similar size failing to match (figure 4a, top). In fact, the smallest RN performed significantly better than the largest MLP. The performance of the MLPs remained poor even after 1 million iterations (cross entropy loss above 0.2 - not shown). This result was consistent for datasets with 5, 10, and 20 scene classes (figure 4a, bottom).

These results were obtained using targets that explicitly described the relations between object types (a binary vector indicating the edges of the graph). Using one-hot vectors as output targets resulted in very similar results, with the RNs' accuracy reaching 97% (see figure 7 in the appendix). Our reasoning for using graph descriptions as targets relates to the potential of the RN model to represent relations between individual pairs of object types independent of other relations that might be present in a scene. Explicitly targeting the individual relations could in principle allow the model to learn the particular components that form the overall scene structure. Indeed, when trained to classify hundreds of scene classes, the RN was then able to generalize to unobserved classes (see figure 8 in the appendix). (Note: this generalization to unseen classes would be impossible with the use of one-hot labels, since there is no training information that may inform the vector label of a particular unseen class.) Additionally, the ability to generalize to unobserved classes suggests that the RN is able to generalize in a combinatorially complex object-relation space because of its ability to learn compositional structure. It is able to use pieces (i.e., specific relations) of learned information and combine them in unique, never-before-seen ways, which is a hallmark feature of compositional learning.
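For concreteness, the following is a minimal sketch of the RN computation assumed throughout these experiments: a shared MLP g_ψ is applied to every pair of objects, the results are summed into a permutation-invariant summary vector, and an MLP f_φ maps that summary to the output (layer sizes as listed in section 4.3). The plain-numpy helpers are illustrative, and whether pairs are ordered or self-pairs are included is a design choice not pinned down here.

import itertools
import numpy as np

def mlp(x, weights):
    """Apply a stack of (W, b) layers with ReLU nonlinearities."""
    for W, b in weights:
        x = np.maximum(x @ W + b, 0.0)
    return x

def relation_network(objects, g_weights, f_weights):
    """RN forward pass: g over every ordered pair of objects, summed into a
    permutation-invariant summary vector, then processed by f."""
    pairs = [np.concatenate([objects[i], objects[j]])
             for i, j in itertools.permutations(range(len(objects)), 2)]
    summary = sum(mlp(p, g_weights) for p in pairs)
    return mlp(summary, f_weights)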
Figure 4: Scene classification tasks. Differently sized RNs performed well when trained to classify 10 scene classes based on position relations (a), reaching a cross entropy loss below 0.01 (a, top), and on tasks that contained 5, 10 or 20 classes (a, bottom). The MLPs performed poorly regardless of network size and the number of classes. When relational structure depended on the color of the objects (color task, b), all RN configurations performed well classifying 5, 10 or 20 classes, similar to what we observed on the position task. MLPs with a similar number of parameters performed poorly.

We repeated this experiment using relational structure defined by the color of the objects, with position being randomly sampled for each object (figure 4b). Again, RNs reached a low classification error on withheld data (below 0.01), whereas the MLP models did not (error above 0.2; figure 4b, top). This was observed across datasets with 5, 10 and 20 classes (figure 4b, bottom).

There are a number of difficulties that present themselves when inferring object relations. Firstly, one cannot naively apply a deep neural network to infer relations; instead, a particular type of architecture - possibly one that is permutation invariant to some component of its input space - may be necessary. Secondly, the input may not arrive as a set of cleanly factored objects, in which case relations must be inferred from entangled representations.

We showed the ability of a RN to solve the first problem, and here, we gain insight on the second problem. A particular feature - or constraint - of RNs is their computation on implicit groups of factored objects. We tested, then, whether this constraint could act as a learning signal to induce the factorization of objects from entangled scenes. Specifically, we tested whether RNs can learn with two types of inputs that do not have nicely factored object representations: entangled scene descriptions and pixel images."}, {"section_index": "9", "section_name": "5.2.1 INFERRING RELATIONS FROM ENTANGLED SCENE DESCRIPTIONS", "section_text": "In this task we probed the network's ability to classify scenes from entangled scene descriptions. Intuitively, the RN should behave as an architectural bottleneck that can aid the disentangling of objects by a perceptual model to produce factored object representations on which it can operate. To entangle data we started from the scene description D, and reshaped it into a vector of size mn (i.e., a concatenated vector of objects [o_1; o_2; o_3; ...; o_m]). This vector was then projected using a random permutation matrix B of size mn × mn. We chose a permutation matrix for two reasons. First, because it preserved all the information of the input without adding any additional scaling between the factors. Second, because it was invertible using a matrix multiplication, and hence produced an interpretable solution found by the model. We used a learnable matrix U of size mn × mn - implemented as a linear layer without biases, positioned after the permutation and before the RN (see figure 3). Thus, this layer provided the m objects to the RN. Figure 5a shows the performance of the model on this task.
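As a concrete illustration of the entangling setup just described, the sketch below applies a fixed permutation B to a flattened scene and decodes it with a learnable bias-free linear layer U. The shapes and initialization are hypothetical, and in practice U would be trained end-to-end through the RN's classification loss while B stays fixed.

import numpy as np

m, n = 8, 16                      # illustrative: m objects, n features each
rng = np.random.RandomState(0)

# Fixed entangling permutation B (mn x mn), applied to the flattened scene.
B = np.eye(m * n)[rng.permutation(m * n)]

# Learnable decoding matrix U (mn x mn), a linear layer without biases.
U = rng.normal(scale=0.01, size=(m * n, m * n))

def entangle_and_decode(scene):
    """scene: (m, n) array of objects. Returns the m 'objects' fed to the RN."""
    entangled = scene.reshape(-1) @ B          # permuted scene description
    decoded = entangled @ U                    # trained to undo the entangling
    return decoded.reshape(m, n)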
During learning we visualized the absolute value of BU (figure 5a, inset). We noticed a block structure emerging, illustrating the identification of different objects. Note, white pixels denote a value of 0, and black denote a value of 1. BU was of size mn × mn, making the multiplication of a flattened D with BU produce a vector of size mn. This vector was then reshaped into a matrix of size m × n, and provided as input to the RN. The block structure of BU suggests that the object k, as perceived by the network, was a linear transformation (given by the block) of a different object from the ground truth input in D. Of particular interest is the gradual discovery of new objects over time. Because the model is order invariant, there is no pressure to recover the ground truth order of the objects. Similarly, the particular order of the features that define an object was not disentangled, since there is no real pressure for this to happen. Therefore, RNs managed to successfully force the identification of different objects without imposing a particular form, or order of the features within, objects.

Figure 5: Inferring relational structure from entangled scenes. A RN is able to extract and operate on objects from an entangled scene description (a). The inset shows |BU| from one example seed; different seeds produced unique block structures corresponding to different permutations. A RN is also able to extract and operate on objects from a distributed latent representation of the scene (b). Distributed latent representations are inferred latent variables from VAE processing of image depictions of the scenes."}, {"section_index": "10", "section_name": "5.2.2 INFERRING RELATIONS FROM PIXELS", "section_text": "Image depictions from 100 unique classes (using the position-relation dataset; see figure 11 in the appendix) were passed through a VAE, whose latent variables were provided as input to a linear layer, which acted as a disentangler, and which sent its output to a RN. Both the VAE and RN were trained concurrently - however, gradients from the RN were not propagated to the VAE portion of the model so as to prevent any of the VAE components from contributing to the solution of the RN's task. Thus, this experiment explicitly tested the ability of the RN to operate on distributed representations of scenes. It should be noted that this task is distinct from the previous entangling tasks, as the VAE may be capable of providing both disentangled object representations, as well as relational information, in its compressed latent code. Nonetheless, the results from this task suggest a capacity for RNs to operate on distributed representations, which opens the door to the possible conjunction of RNs with perceptual neural network modules (figure 5b).

Here, we assess the potential to use RNs in conjunction with memory-augmented neural networks to quickly - and implicitly - discover object relations and use that knowledge for one-shot learning.

We trained a MANN with a RN pre-processor to do one-shot classification of scenes, as described in section 3.2. In order to solve this task, the network must store representations of the scenes (which, if class representations are to be unique, should necessarily contain relational information) and the episode-unique label associated with the scene. Once a new sample of the same class is observed, it must use inferred relational information from this sample to query its memory and retrieve the appropriate label.
During training, a sequence of 50 random samples from 5 random classes (out of a pool of 1900) were picked to constitute an episode. The test phase consisted of episodes with scenes from 5 randomly selected and previously unobserved classes (out of a pool of 100). After 500,000 episodes, the network showed high classification accuracy when presented with just the second instance of a class (76%), and performance reached 93% and 96% by the 5th and 10th instance, respectively (figure 6). As expected, since class labels change from episode-to-episode, performance is at chance for the first instance of a particular class.

Although samples from a class are visually very dissimilar, a RN coupled to an external memory was able to do one-shot classification. This suggests that the network has the capacity to quickly extract information about the relational structure of objects, which is what defines class membership, and does so without explicit supervision about the object-relation structure. Replacing the RN pre-processor with an MLP resulted in chance performance across all instances (figure 6b), suggesting that the RN is a critical component of this memory-augmented model.

Figure 6: One-shot classification of scenes. A MANN with a RN pre-processor is able to accurately classify unobserved scenes after the presentation of a single instance of the same class (a). A MANN with a MLP pre-processor performs at chance (b).

RNs are a powerful architecture for reasoning about object-relations. They operate under the assumption that relations exist between pairs of objects, or entities. This assumption affords the ability to classify scenes wherein class boundaries are defined by object relations, as well as the ability to induce factored object representations from entangled scene descriptions.

RNs have the potential to successfully operate with perceptual modules to extract object representations on which to operate. This ability to operate with different neural network modules also extends to the domain of one-shot learning, wherein they are able to pair with MANNs to perform one-shot structure learning.

The utility of RNs as a relation-reasoning module suggests that they have the potential to be useful for solving tasks that require reasoning not only about object-object relations, but also about verb-object relations.

The ability of RNs to induce a disentangling of their input potentially allows for great flexibility in the discovery of the "objects" that are to be factored. In our case, objects were defined a priori, but this needn't necessarily be the case. Depending on the task, they may not even constitute physically consistent groups of matter. Combinations of objects can be grouped to become one "cluster" object, or the notion of what constitutes an object can instead be determined by a deep neural network, which may come up with a very flexible interpretation for "object-ness" in a scene. Developing a way to process various types of input to produce "objects" with which RNs can operate is an exciting direction of future research.
"}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Scott Reed, Daan Wierstra, Nando de Freitas, James Kirkpatrick, and many others on the DeepMind team.

Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87-94. Springer, 2001.

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. arXiv preprint arXiv:1602.07332, 2016.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.

Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.

Stephen E. Palmer. The effects of contextual scenes on the identification of objects. Memory & Cognition, 3:519-526, 1975.

Joshua B. Tenenbaum, Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279-1285, 2011.

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. arXiv preprint arXiv:1606.04080, 2016."}, {"section_index": "12", "section_name": "APPENDIX", "section_text": "ADDITIONAL SCENE CLASSIFICATION EXPERIMENTS

Figure 7: Scene classification with one-hot vector labels. Performance of RNs and MLPs on the position task with 20 classes and using one-hot vectors as output targets.

Figure 8: Scene classification on withheld classes. Differently sized RNs were trained to classify scenes from a pool of 490 classes. The plot shows the cross entropy loss on a test set composed of samples from 10 previously unseen classes.

The one-shot learning task follows a meta-learning setup in which model parameters are trained to minimize the expected loss over datasets D drawn from a distribution p(D):

θ* = argmin_θ E_{D∼p(D)}[L(D; θ)]

In our case, datasets consist of sequences, or episodes, of input-output pairs, where inputs are scene descriptions D and outputs are target labels y (see figure 9). So, {(D_t, y_t)}_{t=1}^{T} ∈ D. All D from all datasets are constructed using the same generative process for producing scene descriptions as described in the main text. The main differentiating factor between datasets, then, is the label associated with each scene description. From dataset-to-dataset, scenes from a particular scene-class are assigned a unique, but arbitrary target label. This implies that scenes from the same scene class will not necessarily have the same label from episode-to-episode (or equivalently, from dataset-to-dataset), but will indeed have the same label within episodes.
In this training setup, labels are provided as input at subsequent timesteps. So, a given input at a particular timestep is (D_t, y_{t-1}), where y_{t-1} is the correct target label for the previous timestep's scene description. This allows the MANN to learn a binding strategy (Santoro et al., 2016) (see figures 9 and 10): it will produce a representation for a particular input, bind it with the incoming label, and will then store this bound information in memory for later use. Importantly, the label is presented in a time-offset manner; if labels were instead presented at the same timestep as the corresponding sample, then the network could learn to cheat, and use the provided label information to inform its output.

This particular setup has been shown to allow for a particular meta-learning strategy (Hochreiter et al., 2001; Santoro et al., 2016); the network must learn to bind useful scene representations with arbitrary, episode-specific labels, and use this bound information to infer class membership for subsequent samples in the episode. This task poses a particular challenge for RNs; they must have the capacity to quickly extract useful representations of scene classes to enable rapid, within-episode comparisons to other scene representations. The representation extracted for a single example must contain useful, general information about the class if it is to provide information useful for subsequent classification. As seen in the results, one-shot accuracy is quite high, implying that the RN and MANN combination can extract useful representations of scene classes given a single example, and use this stored representation to successfully classify future examples from the same class. This implies that the network is necessarily extracting relational information from input scenes, since only this information can allow for successful one-shot learning across samples.

Figure 9: One-shot learning task setup. (a) Episode structure. (b) Binding strategy.

Figure 10: RN with MANN.

Figure 11: Example scene depictions."}]
r1PRvK9el
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Knowledge bases such as WordNet (Fellbaum, 1998), Freebase (Bollacker et al., 2008), or Yago (Suchanek et al., 2007) contain many real-world facts expressed as triples, e.g., (Bill Gates, FounderOf, Microsoft). These knowledge bases are useful for many downstream applications, such as question answering (Berant et al., 2013; Yih et al., 2015) and information extraction (Mintz et al., 2009). However, despite the formidable size of knowledge bases, many important facts are still missing. For example, West et al. (2014) showed that 21% of the 100K most frequent PERSON entities have no recorded nationality in a recent version of Freebase. We seek to infer unknown relations based on the observed triples. Thus, the knowledge base completion (KBC) task has emerged as an important open research problem (Nickel et al., 2011).

Neural-network based methods have been very popular for solving the KBC task. Following Bordes et al. (2013), one of the most popular approaches for KBC is to learn vector-space representations of entities and relations during training, and then apply linear or bi-linear operations to infer the missing relations at test time. However, several recent papers demonstrate limitations of prior approaches relying upon vector-space models alone. By themselves, there is no straightforward way to capture the structured relationships between multiple triples adequately (Guu et al., 2015; Toutanova et al., 2016; Lin et al., 2015a). For example, assume that we want to fill in the missing relation for the triple (Obama, Nationality, ?); a multi-step search procedure might be needed to discover the evidence in the observed triples, such as (Obama, BornIn, Hawaii) and (Hawaii, PartOf, U.S.A.). To address this issue, Guu et al. (2015), Toutanova et al. (2016), and Lin et al. (2015a) propose different approaches for injecting structured information by directly operating on the observed triplets. Unfortunately, due to the size of knowledge bases, these newly proposed approaches suffer from some limitations, as most paths are not informative for inferring missing relations, and it is prohibitive to consider all possible paths during training with expressive models.

In this paper, we take a different approach from prior work on KBC by addressing the challenges of performing large-scale inference through the design of a search controller and a shared memory. Our inference procedure centers around the search controller, which only operates on the shared memory instead of directly manipulating the observed triples in the knowledge base. IRNs use training data to learn to perform multi-step inference through the shared memory. First, the input module generates a representation of the query. Then, the search controller repeatedly interacts with the shared memory and checks the termination gate. After each iteration, if the termination condition is met, the model stops the search process and calls the output module to generate a prediction.

Figure 1: An IRN Architecture
The shared memory is designed to store key information about the overall structures it learned during training, and hence the search controller only needs to access the shared memory instead of operating on the observed triples.

The main contributions of our paper are as follows.

- We propose Implicit ReasoNets (IRNs), which use a shared memory guided by a search controller to model large-scale structured relationships implicitly.
- We evaluate IRNs and demonstrate that our proposed model achieves state-of-the-art results on the popular FB15k benchmark, surpassing prior approaches by more than 5.7%.
- We analyze the behavior of IRNs for shortest path synthesis. We show that IRNs outperform a standard sequence-to-sequence model and execute meaningful multi-step inference.

There are several advantages of using IRNs. First, the cost of inference can be controlled because the search controller only needs to access the shared memory. Second, all the modules, including the search controller and memory, are jointly trained, which alleviates the need to inject structured relationships between instances manually. Finally, we can easily extend IRNs to other tasks that require modeling structured relationships between instances by switching the input and output modules.

In this section, we describe the general architecture of IRNs in a way that is agnostic to KBC. IRNs are composed of four main components: an input component, an output component, a shared memory, and a search controller, as shown in Figure 1. In this section, we briefly describe each component.

Input/Output Modules: These two modules are task-dependent. The input module takes a query and converts the query into a vector representation q. The output module is a function f_o, which converts the hidden state received from the search controller (s) into an output O. We optimize the whole model end-to-end with respect to a task-specific objective defined on this output.

Search Controller: The search controller is a recurrent neural network that controls the search process by keeping internal state sequences to track the current search process and history. The search controller uses an attention mechanism to fetch information from relevant memory vectors in M, and decides if the model should output the prediction or continue to generate the next possible output.

Comparing IRNs to Memory Networks (MemNN) (Weston et al., 2014; Sukhbaatar et al., 2015) and Neural Turing Machines (NTM) (Graves et al., 2014; 2016), the biggest difference between our model and the existing frameworks is the search controller and the use of the shared memory. We build upon our previous work (Shen et al., 2016) in using a search controller module to dynamically perform multi-step inference depending on the complexity of the instance. MemNN and NTM explicitly store inputs (such as graph definitions or supporting facts) in the memory. In contrast, in IRNs, we do not explicitly store all the observed inputs in the shared memory. Instead, we directly operate on the shared memory, thereby modeling the structured relationships implicitly. We randomly initialize the memory and update the memory with respect to task-specific objectives. The idea of exploiting a shared memory was proposed by Munkhdalai & Yu (2016) independently. Despite using the same term, the goal and the operations used by IRNs are different from those used in Munkhdalai & Yu (2016), as IRNs allow the model to perform multi-step inference for each instance dynamically.
"}, {"section_index": "1", "section_name": "2.1 STOCHASTIC INFERENCE PROCESS", "section_text": "The inference process of an IRN is as follows. First, the model converts a task-dependent input into a vector representation through the input module. Then, the model uses the input representation to initialize the search controller. In every time step, the search controller determines whether the process is finished by sampling from the distribution according to the termination gate. If the outcome is termination, the output module will generate a task-dependent prediction given the search controller states. If the outcome is continuation, the search controller will move on to the next time step, and create an attention vector based on the current search controller state and the shared memory. Intuitively, we design the whole process by mimicking a search procedure that iteratively finds its target through a structure and outputs its prediction when a satisfying answer is found. The detailed inference process is described in Algorithm 1.

Shared Memory: The shared memory is denoted as M. It consists of a list of memory vectors, M = {m_i}_{i=1,...,|M|}, where m_i is a fixed-dimensional vector. The memory vectors are randomly initialized and automatically updated through back-propagation. The shared memory component is shared across all instances.

Internal State: The internal state of the search controller is denoted as s, which is a vector representation of the search process. The initial state s_1 is usually the vector representation of the input vector q. The internal state at the t-th time step is represented by s_t. The sequence of internal states is modeled by an RNN: s_{t+1} = RNN(s_t, x_t; θ_s).

Attention to memory: The attention vector x_t at the t-th time step is generated based on the current internal state s_t and the shared memory M: x_t = f_att(s_t, M; θ_x). Specifically, the attention score a_{t,i} on a memory vector m_i given a state s_t is computed as a_{t,i} = softmax_{i=1,...,|M|} λ cos(W_1 m_i, W_2 s_t), where λ is set to 10 in our experiments and the weight matrices W_1 and W_2 are learned during training. The attention vector x_t can then be written as x_t = Σ_{i=1}^{|M|} a_{t,i} m_i.

Termination Control: The termination gate produces a stochastic random variable according to the current internal state, t_t ~ p(·|f_tc(s_t; θ_tc)). t_t is a binary random variable. If t_t is true, the IRN will finish the search process, and the output module will execute at time step t; otherwise the IRN will generate the next attention vector x_{t+1} and feed it into the state network to update the next internal state s_{t+1}. In our experiments, the termination variable is modeled by a logistic regression: f_tc(s_t; θ_tc) = sigmoid(W_tc s_t + b_tc), where the weight matrix W_tc and bias vector b_tc are learned during training.
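A minimal numpy sketch of the two operations just defined — the scaled-cosine attention over the shared memory and the logistic termination gate. Matrix shapes are illustrative, and sampling t_t from the gate's probability is left to the caller:

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attention(memory, state, W1, W2, lam=10.0):
    """Scaled cosine attention over memory slots m_i, returning the
    attention vector x_t = sum_i a_{t,i} m_i."""
    scores = np.array([
        np.dot(W1 @ m, W2 @ state)
        / (np.linalg.norm(W1 @ m) * np.linalg.norm(W2 @ state))
        for m in memory])
    a = softmax(lam * scores)          # attention weights a_{t,i}
    return a @ memory                  # attention vector x_t

def terminate_prob(state, W_tc, b_tc):
    """Termination gate: probability of stopping at the current step."""
    return 1.0 / (1.0 + np.exp(-(W_tc @ state + b_tc)))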
Algorithm 1: Stochastic Inference Process in an IRN
Input: Randomly initialized shared memory M; input vector q; maximum step T_max
Output: Output vector o
1 Define s_1 = q; t = 1;
2 Sample t_t from the distribution p(·|f_tc(s_t; θ_tc));
3 If t_t is false, go to Step 4; otherwise go to Step 7;
4 Generate an attention vector x_t = f_att(s_t, M; θ_x);
5 Update the internal state s_{t+1} = RNN(s_t, x_t; θ_s);
6 Set t = t + 1; if t < T_max go to Step 2; otherwise go to Step 7;
7 Generate output o_t = f_o(s_t; θ_o);
8 Return o = o_t;

The inference process of an IRN is considered as a Partially Observable Markov Decision Process (POMDP) (Kaelbling et al., 1998) in the reinforcement learning (RL) literature. The IRN produces the output vector o_T at the T-th step, which implies termination gate variables t_{1:T} = (t_1 = 0, t_2 = 0, ..., t_{T-1} = 0, t_T = 1), and then takes a prediction action p_T according to the probability distribution given o_T. Therefore, the IRN learns a stochastic policy π((t_{1:T}, p_T)|q; θ) with parameters θ to get a distribution over termination actions and over prediction actions. The termination step T varies from instance to instance. The parameters of the IRN θ are given by the parameters of the embedding matrices W for the input/output module, the shared memory M, the attention network θ_x, the search controller RNN network θ_s, the output generation network θ_o, and the termination gate network θ_tc. The parameters θ = {W, M, θ_x, θ_s, θ_o, θ_tc} are trained to maximize the total expected reward that the IRN receives when interacting with the environment. The expected reward for an instance is defined as:

J(θ) = E_{(t_{1:T}, p_T) ∼ π(·; θ)} [ Σ_{t=1}^{T} r_t ]

The reward can only be received at the final termination step, when a prediction action p_T is performed. The rewards on intermediate steps are zeros, {r_t = 0}_{t=1,...,T-1}.

We employ the approach from our previous work (Shen et al., 2016), a REINFORCE (Williams, 1992) based Contrastive Reward method, to maximize the expected reward. The gradient of J can be written as:

∇_θ J(θ) = Σ_{(t_{1:T}, p_T) ∈ A†} π(t_{1:T}, p_T; θ) ∇_θ log π(t_{1:T}, p_T; θ) r_T

"}, {"section_index": "2", "section_name": "APPLYING IRNS TO KNOWLEDGE BASE COMPLETION", "section_text": "The goal of KBC tasks (Bordes et al., 2013) is to predict a head or a tail entity given the relation type and the other entity, i.e., predicting h given (?, r, t) or predicting t given (h, r, ?), where ? denotes the missing entity. For a KBC task, the input to our model is a subject entity (a head or tail entity) and a relation. The task-dependent input module first extracts the embedding vectors for the entity and relation from an embedding matrix. We then represent the query vector q for an IRN as the concatenation of the two vectors. We randomly initialize the shared memory component. At each step, a training triplet is processed through the model by Algorithm 1, where no explicit path information is given. The IRN updates the shared memory implicitly with respect to the objective function. For the task-dependent output module, we use a nonlinear projection to project the search controller state into an output vector o: f_o(s_t; θ_o) = tanh(W_o s_t + b_o), where W_o and b_o are the weight matrix and bias vector, respectively. We define the ground truth target (object) entity embedding as y, and use the L1 distance measure between the output o and the target entity y, namely d(o, y) = ||o − y||_1. We then sample a set of negative entity embeddings N, so that the probability of selecting a prediction ŷ ∈ D can be approximated as

p(ŷ|o) = exp(−γ d(o, ŷ)) / Σ_{y′ ∈ D} exp(−γ d(o, y′))

where D = N ∪ {y}. We set |N| and γ to 20 and 5, respectively, for the experiments on the FB15k and WN18 datasets. The IRN performs a prediction action p_T by selecting ŷ with probability p(ŷ|o). We define the reward of the prediction action as one if the ground truth entity is selected, and zero otherwise.
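For reference, a small sketch of the candidate-scoring rule above; candidates is assumed to stack the ground-truth embedding together with the |N| sampled negatives:

import numpy as np

def prediction_probs(o, candidates, gamma=5.0):
    """Probability of picking each candidate entity embedding, from L1
    distances to the output vector o (smaller distance -> higher probability)."""
    d = np.abs(candidates - o).sum(axis=1)        # L1 distance d(o, y)
    logits = -gamma * d
    logits -= logits.max()                        # numerical stability
    p = np.exp(logits)
    return p / p.sum()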
"}, {"section_index": "3", "section_name": "4 EXPERIMENTAL RESULTS", "section_text": "In this section, we evaluate the performance of our model on the benchmark FB15k and WN18 datasets for KBC tasks (Bordes et al., 2013). These datasets contain multiple relations between head and tail entities. Given a head entity and a relation, the model produces a ranked list of the entities according to the score of each entity being the tail entity of this triple. To evaluate the ranking, we report the mean rank (MR), the mean of the rank of the correct entity across the test examples, and hits@10, the proportion of correct entities ranked in the top-10 predictions. A lower MR or a higher hits@10 indicates better prediction performance. We follow the evaluation protocol in Bordes et al. (2013) and report filtered results, where negative examples N are removed from the dataset. In this case, we can avoid some negative examples being valid and ranked above the target triplet.

We use the same hyper-parameters of our model for both the FB15k and WN18 datasets. Entity embeddings (which are not shared between input and output modules) and relation embeddings are both 100-dimensional. We use the input module and output module to encode subject and object entities, respectively. There are 64 memory vectors with 200 dimensions each, initialized by random vectors with unit L2-norm. We use a single-layer GRU with 200 cells as the search controller. We set the maximum inference step of the IRN to 5. We randomly initialize all model parameters, and use SGD as the training algorithm with a mini-batch size of 64. We set the learning rate to a constant number, 0.01. To prevent the model from learning a trivial solution by increasing entity embedding norms, we follow Bordes et al. (2013) and enforce the L2-norm of the entity embeddings to be 1. We use hits@10 as the validation metric for the IRN. Following Lin et al. (2015a), we add reverse relations into the training triplet set to increase the training data.

Following Nguyen et al. (2016)¹, we divide the results of previous work into two groups. The first group contains the models that directly optimize a scoring function for the triples in a knowledge base without using extra information. The second group of models makes use of additional information from multi-step relations. For example, the RTransE (Garcia-Duran et al., 2015) and PTransE (Lin et al., 2015a) models are extensions of the TransE (Bordes et al., 2013) model that explicitly explore multi-step relations in the knowledge base to regularize the trained embeddings. The NLFeat model (Toutanova et al., 2015) is a log-linear model that makes use of simple node and link features.

¹ Nguyen et al. (2016) reported two results on WN18: the first is obtained by choosing to optimize hits@10 on the validation set, and the second by choosing to optimize MR on the validation set. We list both of them in Table 1.

Table 1 presents the experimental results. According to the table, our model significantly outperforms previous baselines, regardless of whether previous approaches use additional information or not. Specifically, on FB15k, the MR of our model surpasses all previous results by 12, and our hits@10 outperforms others by 5.7%. On WN18, the IRN obtains the highest hits@10 while maintaining similar MR results compared to previous work.

To better understand the behavior of IRNs, we report the results of IRNs with different memory sizes and different T_max on FB15k in Table 2. We find that the performance of IRNs increases significantly as the number of inference steps increases. Note that an IRN with T_max = 1 is the case of an IRN without the shared memory. Interestingly, given T_max = 5, IRNs are not sensitive to memory sizes. In particular, a larger memory always improves the MR score, but the best hits@10 is obtained with |M| = 64 memory vectors. A possible reason is that the best memory size is determined by the complexity of the task.

We analyze hits@10 results on FB15k with respect to the relation categories. Following the evaluation in Bordes et al. (2013), we evaluate the performance in four types of relation: 1-1 if a head entity can appear with at most one tail entity, 1-Many if a head entity can appear with many tail entities, Many-1 if multiple heads can appear with the same tail entity, and Many-Many if multiple head entities can appear with multiple tail entities. The detailed results are shown in Table 3. The IRN significantly improves the hits@10 results in the Many-1 category on predicting the head entity (18.8%), the 1-Many category on predicting the tail entity (16.5%), and the Many-Many category (over 8% on average).
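The filtered evaluation protocol described at the top of this section can be sketched as follows; known_true_idx is assumed to hold every entity that forms a valid triple with the query, so that all of them except the target are excluded from the ranking:

import numpy as np

def filtered_rank(scores, target_idx, known_true_idx):
    """Rank of the target entity after filtering other known-true entities."""
    s = scores.astype(float)
    other_true = [i for i in known_true_idx if i != target_idx]
    s[other_true] = -np.inf                 # drop other valid triples
    return int((s > s[target_idx]).sum()) + 1

def mr_and_hits10(ranks):
    """Mean rank and hits@10 over a list of per-example ranks."""
    ranks = np.asarray(ranks, dtype=float)
    return ranks.mean(), float((ranks <= 10).mean())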
Table 1: The knowledge base completion (link prediction) results on WN18 and FB15k.

Model | Additional Information | WN18 Hits@10 (%) | WN18 MR | FB15k Hits@10 (%) | FB15k MR
SE (Bordes et al., 2011) | NO | 80.5 | 985 | 39.8 | 162
Unstructured (Bordes et al., 2014) | NO | 38.2 | 304 | 6.3 | 979
TransE (Bordes et al., 2013) | NO | 89.2 | 251 | 47.1 | 125
TransH (Wang et al., 2014) | NO | 86.7 | 303 | 64.4 | 87
TransR (Lin et al., 2015b) | NO | 92.0 | 225 | 68.7 | 77
CTransR (Lin et al., 2015b) | NO | 92.3 | 218 | 70.2 | 75
KG2E (He et al., 2015) | NO | 93.2 | 348 | 74.0 | 59
TransD (Ji et al., 2015) | NO | 92.2 | 212 | 77.3 | 91
TATEC (Garcia-Duran et al., 2015) | NO | - | - | 76.7 | 58
NTN (Socher et al., 2013) | NO | 66.1 | - | 41.4 | -
DISTMULT (Yang et al., 2014) | NO | 94.2 | - | 57.7 | -
STransE (Nguyen et al., 2016) | NO | 94.7 (93) | 244 (206) | 79.7 | 69
RTransE (Garcia-Duran et al., 2015) | Path | - | - | 76.2 | 50
PTransE (Lin et al., 2015a) | Path | - | - | 84.6 | 58
NLFeat (Toutanova et al., 2015) | Node + Link Features | 94.3 | - | 87.0 | -
Random Walk (Wei et al., 2016) | Path | 94.8 | - | 74.7 | -
IRN | NO | 95.3 | 249 | 92.7 | 38

Table 2: The performance of IRNs with different memory sizes and inference steps on FB15k.

Number of memory vectors | Maximum inference step | FB15k Hits@10 (%) | MR
|M| = 64 | T_max = 1 | 80.7 | 55.7
|M| = 64 | T_max = 2 | 87.4 | 49.2
|M| = 64 | T_max = 5 | 92.7 | 38.0
|M| = 64 | T_max = 8 | 88.8 | 32.9
|M| = 32 | T_max = 5 | 90.1 | 38.7
|M| = 64 | T_max = 5 | 92.7 | 38.0
|M| = 128 | T_max = 5 | 92.2 | 36.1
|M| = 512 | T_max = 5 | 90.0 | 35.3
|M| = 4096 | T_max = 5 | 88.7 | 34.7

Table 3: Hits@10 (%) in the relation categories on FB15k. (M stands for Many.)

Model | Predicting head h: 1-1 | 1-M | M-1 | M-M | Predicting tail t: 1-1 | 1-M | M-1 | M-M
SE (Bordes et al., 2011) | 35.6 | 62.6 | 17.2 | 37.5 | 34.9 | 14.6 | 68.3 | 41.3
Unstructured (Bordes et al., 2014) | 34.5 | 2.5 | 6.1 | 6.6 | 34.3 | 4.2 | 1.9 | 6.6
TransE (Bordes et al., 2013) | 43.7 | 65.7 | 18.2 | 47.2 | 43.7 | 19.7 | 66.7 | 50.0
TransH (Wang et al., 2014) | 66.8 | 87.6 | 28.7 | 64.5 | 65.5 | 39.8 | 83.3 | 67.2
TransR (Lin et al., 2015b) | 78.8 | 89.2 | 34.1 | 69.2 | 79.2 | 37.4 | 90.4 | 72.1
CTransR (Lin et al., 2015b) | 81.5 | 89.0 | 34.7 | 71.2 | 80.8 | 38.6 | 90.1 | 73.8
KG2E (He et al., 2015) | 92.3 | 94.6 | 66.0 | 69.6 | 92.6 | 67.9 | 94.4 | 73.4
TransD (Ji et al., 2015) | 86.1 | 95.5 | 39.8 | 78.5 | 85.4 | 50.6 | 94.4 | 81.2
TATEC (Garcia-Duran et al., 2015) | 79.3 | 93.2 | 42.3 | 77.2 | 78.5 | 51.5 | 92.7 | 80.7
STransE (Nguyen et al., 2016) | 82.8 | 94.2 | 50.4 | 80.1 | 82.4 | 56.9 | 93.4 | 83.1
PTransE (Lin et al., 2015a) | 91.0 | 92.8 | 60.9 | 83.8 | 91.2 | 74.0 | 88.9 | 86.4
IRN | 87.2 | 96.1 | 84.8 | 92.9 | 86.9 | 90.5 | 95.3 | 94.1

Table 4: Test examples in the FB15k dataset: given a head entity and a relation, the IRN predicts the tail entity over multiple search steps.

Input: (Dean Koontz, /PEOPLE/PERSON/PROFESSION)   Target: Film Producer
Step | Termination Prob. | Rank | Predicted top-3 entities
1 | 0.018 | 9 | Author, TV Director, Songwriter
2 | 0.052 | 7 | Actor, Singer, Songwriter
3 | 0.095 | 4 | Actor, Singer, Songwriter
4 | 0.132 | 4 | Actor, Singer, Songwriter
5 | 0.702 | 3 | Actor, Singer, Film Producer

To analyze the behavior of IRNs, we pick some examples of tail entity prediction in Table 4. Interestingly, we observed that the model can gradually increase the ranking score of the correct tail entity during the inference process.

We construct a synthetic task, shortest path synthesis, to evaluate the inference capability over a shared memory. The motivations for applying our model to this task are as follows. First, we want to evaluate IRNs on another task requiring multi-step inference. Second, we select a sequence generation task so that we are able to analyze the inference capability of IRNs in detail.

In the shortest path synthesis task, as illustrated in Figure 2, a training instance consists of a start node and an end node (e.g., 215 ~> 493) of an underlying weighted directed graph that is unknown to the models. The output of each instance is the shortest path between the given start and end nodes of the underlying graph (e.g., 215 -> 101 -> 493). Specifically, models can only observe the start-end node
pairs as input and their shortest path as output. The whole graph is unknown to the models, and the edge weights are not revealed in the training data. At test time, a path sequence is considered correct if it connects the start node and the end node of the underlying graph, and the cost of the predicted path is the same as that of the optimal path.

Note that the task is very difficult and cannot be solved by dynamic programming algorithms, since the weights on the edges are not revealed to the algorithms or the models. To recover some of the shortest paths at test time, the model needs to infer the correct path from the observed instances. For example, assume that we observe two instances in the training data, "A ~> D: A -> B -> G -> D" and "B ~> E: B -> C -> E". In order to answer the shortest path between A and E, the model needs to infer that "A -> B -> C -> E" is a possible path between A and E. If there are multiple possible paths, the model has to decide which one is the shortest using statistical information.

In the experiments, we construct a graph with 500 nodes and we randomly assign two nodes to form an edge. We split 20,000 instances for training, 10,000 instances for validation, and 10,000 instances for testing. We create the training and testing instances carefully so that the model needs to perform inference to recover the correct path. We describe the details of the graph and data construction in the appendix. A sub-graph of the data is shown in Figure 2.

For the settings of the IRN, we switch the output module to a GRU decoder for the sequence generation task. We assign reward r_T = 1 if all the prediction symbols are correct, and 0 otherwise. We use a 64-dimensional embedding vector for input symbols, a GRU controller with 128 cells, and a GRU decoder with 128 cells. We set the maximum inference step T_max to 5.

Figure 2: An example of the shortest path synthesis dataset, given the input "215 ~> 493" (answer: 215 -> 101 -> 493). Note that we only show the nodes that are related to this example. The corresponding termination probabilities and prediction results are shown in the table below. The model terminates at step 5.

Step | Termination Probability | Distance | Predictions
1 | 0.001 | N/A | 215 -> 158 -> 89 -> 458 -> 493
2 | ~0 | N/A | 215 -> 479 -> 277 -> 353 -> 493
3 | ~0 | N/A | 215 -> 49 -> 493
4 | ~0 | 0.77 | 215 -> 140 -> 493
5 | 0.999 | 0.70 | 215 -> 101 -> 493
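To make the evaluation criterion concrete, here is a sketch of how a predicted path can be checked for validity and optimality against the hidden weighted graph, represented (by assumption) as an adjacency dict {node: [(neighbor, weight), ...]}. This machinery is available only to the evaluator, never to the models:

import heapq

def shortest_cost(graph, s, t):
    """Dijkstra over the hidden weighted graph (evaluation side only)."""
    dist, heap = {s: 0.0}, [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def is_correct(graph, path):
    """A predicted path is correct if all its edges exist and its total cost
    equals the optimal cost between its endpoints."""
    cost = 0.0
    for u, v in zip(path, path[1:]):
        w = dict(graph.get(u, [])).get(v)
        if w is None:
            return False                  # not even a valid path
        cost += w
    return abs(cost - shortest_cost(graph, path[0], path[-1])) < 1e-9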
We compare the IRN with two baseline approaches: dynamic programming without edge-weight information, and a standard sequence-to-sequence model (Sutskever et al., 2014) using a similar parameter size to our model. Without knowing the edge weights, dynamic programming only recovers 589 correct paths at test time. The sequence-to-sequence model recovers 904 correct paths. The IRN outperforms both baselines, recovering 1,319 paths. Furthermore, 76.9% of the predicted paths from the IRN are valid paths, where a path is valid if it connects the start and end nodes of the underlying graph. In contrast, only 69.1% of the predicted paths from the sequence-to-sequence model are valid.

To further understand the inference process of the IRN, Figure 2 shows the inference process of a test instance. Interestingly, to make the correct prediction on this instance, the model has to perform fairly complicated inference:² we observe that the model cannot find a connected path in the first three steps. Finally, the model finds a valid path at the fourth step and predicts the correct shortest path sequence at the fifth step.

² In this example, to find the right path, the model needs to search over the observed instances "215 ~> 448: 215 -> 101 -> 448" and "76 ~> 493: 76 -> 308 -> 101 -> 493", and to figure out that the distance of "140 -> 493" is longer than that of "101 -> 493" (there are four shortest paths between 101 -> 493 and three shortest paths between 140 -> 493 in the training set).

Link Prediction and Knowledge Base Completion. Given that r is a relation, h is the head entity, and t is the tail entity, most of the embedding models for link prediction focus on finding a scoring function f_r(h, t) that represents the implausibility of a triple (Bordes et al., 2011; 2013; 2014; Wang et al., 2014; Ji et al., 2015; Nguyen et al., 2016). In many studies, the scoring function f_r(h, t) is linear or bi-linear. For example, in TransE (Bordes et al., 2013), the function is implemented as f_r(h, t) = ||h + r − t||, where h, r, and t are the embedding vectors of the head entity, the relation, and the tail entity, respectively.
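For reference, the TransE scoring function mentioned above, sketched with the L1 norm (an L2 variant is also common):

import numpy as np

def transe_score(h, r, t):
    """TransE implausibility score ||h + r - t||_1; lower means more plausible."""
    return np.abs(h + r - t).sum()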
Recently, different studies (Guu et al., 2015; Lin et al., 2015a; Toutanova et al., 2016) demonstrate the importance for models to also learn from multi-step relations. Learning from multi-step relations injects the structured relationships between triples into the model. However, this also poses the technical challenge of considering exponential numbers of multi-step relationships. Prior approaches address this issue by designing path-mining algorithms (Lin et al., 2015a) or by considering all possible paths using a dynamic programming algorithm, with the restriction of using linear or bi-linear models only (Toutanova et al., 2016). Toutanova & Chen (2015) show the effectiveness of using simple node and link features that encode structured information on FB15k and WN18. In our work, the IRN outperforms prior results and shows that similar information can be captured by the model without explicitly designing features.

Studies such as Riedel et al. (2013) show that incorporating textual information can further improve knowledge base completion tasks. It would be interesting to incorporate information from outside the knowledge bases into our model in the future.

Neural Frameworks. Sequence-to-sequence models (Sutskever et al., 2014; Cho et al., 2014) have been shown to be successful in many applications, such as machine translation and conversation modeling (Sordoni et al., 2015). While sequence-to-sequence models are powerful, recent work has shown the necessity of incorporating an external memory to perform inference in even simple algorithmic tasks (Graves et al., 2014; 2016)."}, {"section_index": "4", "section_name": "7 CONCLUSION", "section_text": "In this paper, we propose Implicit ReasoNets (IRNs), which perform inference over a shared memory that models large-scale structured relationships implicitly. The inference process is guided by a search controller to access the memory that is shared across instances. We demonstrate and analyze the multi-step inference capability of IRNs in knowledge base completion tasks and a shortest path synthesis task. Our model, without using any explicit knowledge base information in the inference procedure, outperforms all prior approaches on the popular FB15k benchmark by more than 5.7%.

For future work, we aim to further extend IRNs in two ways. First, inspired by Ribeiro et al. (2016), we would like to develop techniques to generate human-understandable reasoning interpretations from the shared memory. Second, we plan to apply IRNs to infer the relationships in unstructured data such as natural language. For example, given a natural language query such as "are rabbits animals?", the model can infer a natural language answer implicitly in the shared memory, without performing inference directly on top of a huge amount of observed sentences such as "all mammals are animals" and "rabbits are mammals". We believe the ability to perform inference implicitly is crucial for modeling large-scale structured relationships."}, {"section_index": "5", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Scott Wen-tau Yih, Kristina Toutanova, Jian Tang and Zachary Lipton for their thoughtful feedback and discussions.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In Proceedings of EMNLP, 2013.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of SIGMOD-08, pp. 1247-1250, 2008.

Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233-259, 2014.

Alberto Garcia-Duran, Antoine Bordes, Nicolas Usunier, and Yves Grandvalet. Combining two and three-way embedding models for link prediction in knowledge bases. CoRR, abs/1506.00999, 2015.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.

Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. Learning to represent knowledge graphs with Gaussian embedding. In Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pp. 623-632, 2015.

Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99-134, 1998.

Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion.
In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI'15), pp. 2181-2187, 2015b.

Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. Distant supervision for relation extraction without labeled data. In Proceedings of ACL-IJCNLP, pp. 1003-1011, 2009.

Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. CoRR, abs/1607.04315, 2016.

Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. Relation extraction with matrix factorization and universal schemas. In HLT-NAACL, pp. 74-84, 2013.

Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to stop reading in machine comprehension. CoRR, abs/1609.05284, 2016.

Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, 2013.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015.

F. M. Suchanek, G. Kasneci, and G. Weikum. Yago: A core of semantic knowledge. In WWW, 2007.

Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In EMNLP, 2015.

Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. arXiv preprint arXiv:1506.01094, 2015.

Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. STransE: a novel embedding model of entities and relationships in knowledge bases. In NAACL, pp. 460-466, 2016.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Kristina Toutanova, Xi Victoria Lin, Scott Wen-tau Yih, Hoifung Poon, and Chris Quirk. Compositional learning of embeddings for relation paths in knowledge bases and text. In ACL, 2016.

Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pp. 1112-1119, 2014.

Zhuoyu Wei, Jun Zhao, and Kang Liu. Mining inference formulas by goal-directed random walks. In EMNLP, 2016.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.

Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. CoRR, abs/1412.6575, 2014.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.

Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of ACL, 2015.

We construct the underlying graph as follows: on a three-dimensional unit sphere, we randomly generate a set of nodes. For each node, we connect it to its K nearest neighbors, using the Euclidean distance between two nodes as the edge weight. We randomly sample two nodes and compute the shortest path between them if they are connected. Given the fact that all sub-paths within a shortest path are themselves shortest paths, we incrementally create the dataset and remove the instances which are a sub-path of previously selected paths or a super-set of previously selected paths. In this way, none of the shortest paths can be answered by directly copying from another instance. In addition, all the weights in the graph are hidden and not shown in the training data, which increases the difficulty of the task. We set K = 50 as the default value.
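A minimal sketch of this graph construction; the exact sampling and tie-breaking details are our own assumptions:

import numpy as np

def build_graph(n_nodes=500, k=50, seed=0):
    """Sample nodes on the 3-d unit sphere and connect each node to its
    K nearest neighbors, weighting edges by Euclidean distance."""
    rng = np.random.RandomState(seed)
    pts = rng.normal(size=(n_nodes, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # project to sphere
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    graph = {u: [] for u in range(n_nodes)}
    for u in range(n_nodes):
        for v in np.argsort(dists[u])[1:k + 1]:         # skip the node itself
            graph[u].append((int(v), float(dists[u, v])))
    return graph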
"}]
ryF7rTqgl
[{"section_index": "0", "section_name": "UNDERSTANDING INTERMEDIATE LAYERS USING LINEAR CLASSIFIER PROBES", "section_text": "Guillaume Alain & Yoshua Bengio
Department of Computer Science and Operations Research, Universite de Montreal
guillaume.alain.umontreal@gmail.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The recent history of deep neural networks features an impressive number of new methods and technological improvements to allow the training of deeper and more powerful networks.

Despite this, models still have a reputation for being black boxes. Neural networks are criticized for their lack of interpretability, which is a tradeoff that we accept because of their amazing performance on many tasks. Efforts have been made to identify the role played by each layer, but it can be hard to find a meaning for individual layers.

There are good arguments to support the claim that the first layers of a convolution network for image recognition contain filters that are relatively "general", in the sense that they would work great even if we switched to an entirely different dataset of images. The last layers are specific to the dataset being used, and have to be retrained when using a different dataset. In Yosinski et al. (2014) the authors try to pinpoint the layer at which this transition occurs, but they show that the exact transition is spread across multiple layers.

In this paper, we introduce the concept of the linear classifier probe, referred to as a "probe" for short when the context is clear. We start from the concept of Shannon entropy, which is the classic way to describe the information contents of a random variable. We then seek to apply that concept to understand the roles of the intermediate layers of a neural network, to measure how much information is gained at every layer (answer: technically, none). We argue that it fails to apply, and so we propose an alternative framework to ask the same question again. This time around, we ask what would be the performance of an optimal linear classifier if it was trained on the inputs of a given layer from our model. We demonstrate how this powerful concept can be very useful to understand the dynamics involved in a deep neural network during training and after.

It was a great discovery when Claude Shannon repurposed the notion of entropy to represent information contents in a formal way. It laid the foundations for the discipline of information theory. We refer the reader to the first chapters of MacKay (2003) for a good exposition on the matter."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as "probes", where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to
We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems.\nNaturally, we would like to ask some questions about the information contents of the many layers of convolutional neural networks: What happens when we add more layers? Where does information flow in a neural network with multiple branches? Does having multiple auxiliary losses help? (e.g. Inception model)\nHere there is a mismatch between two different concepts of information. The notion of entropy fails to capture the essence of those questions. This is illustrated in a formal way by the Data Processing Inequality. It states that, for a set of three random variables satisfying the dependency

X \to Y \to Z,

we have

I(X; Z) \le I(X; Y).

Intuitively, this means that the deterministic transformations performed by the many layers of a deep neural network are not adding more information. In the best case, they preserve information and affect only the representation. But in almost all situations, they lose some information in the process.\nIntuitively, for a training sample x_i with its associated label y_i, a deep model is getting closer to the correct answer in the higher layers. It starts with the difficult job of classifying x_i, which becomes easier as the higher layers distill x_i into a representation that is easier to classify. One might be tempted to say that this means that the higher layers have more information about the ground truth, but this would be incorrect.\nIf we distill this further, we can think of the serious mismatch between the two following ideas:\nPart of the genius of the notion of entropy is that it distills the essence of information to a quantity that does not depend on the particular representation.\nA deep neural network is a series of simple deterministic transformations that affect the representation so that the final layer can be fed to a linear classifier.\nThe former ignores the representation of data, while the latter is an expert in finding good representations. A deaf painter is working on a visual masterpiece to offer to a blind musician who plays music for him.\nWe need a conceptual tool to analyze neural networks in a way that corresponds better to our intuitive notion of information. The role of data representation is important, but we would also argue that we have to think about this issue as it relates to computational complexity. A linear classifier is basically the simplest form of classifier that is neither trivial nor degenerate.\nWe end this section with a conceptual example in Figure 1. If X contains an image of the savannah, and Y \in \{0, 1\} refers to whether it contains a lion or not, then none of the subsequent layers are truly more informative than X itself. The raw bits from the picture file contain everything.\nIn section 3.1 we present the main concept of this paper. We illustrate the concept in section 3.3. We then present a basic experiment in section 3.4. In section 3.6 we modify a very deep network in two different ways and we show how probes allow us to visualize the consequences (sometimes disastrous) of our design choices.\nWe define a new notion of information that depends on our ability to classify features of a given layer with an optimal linear classifier.
Then we have a conceptual tool to ask new questions and to get potentially interesting answers."}, {"section_index": "2", "section_name": "3.1 PROBES", "section_text": "As we discussed in the previous section, there is indeed a good reason to use many deterministic layers, and it is because they perform useful transformations to the data with the goal of ultimately fitting a linear classifier at the very end. That is the purpose of the many layers. They are a tool to transform data into a form to be fed to a boring linear classifier.\nJust to be absolutely clear about what we call a linear classifier, we mean a function

f : \mathcal{H} \to [0, 1]^D, \qquad h \mapsto \mathrm{softmax}(Wh + b),

where h \in \mathcal{H} are the features of some hidden layer, [0, 1]^D is the space of one-hot encodings of the D target classes, and (W, b) are the probe weights and biases to be learned so as to minimize the usual cross-entropy loss.\nOver the course of training a model, the parameters of the model change. However, probes only make sense when we refer to a given training step. We can talk about the probes at iteration n of training, when the model parameters are θ_n. These parameters are not affected by the probes. We prevent backpropagation through the model either by stopping the gradient flow (done with tf.stop_gradient in tensorflow), or simply by specifying that the only variables to be updated are the probe parameters, while we keep θ_n frozen.\n[Figure 1 panels: (a) hex dump of a picture of a lion; (b) the same lion in human-readable format.]\nFigure 1: The hex dump represented on the left has more information contents than the image on the right. Only one of them can be processed by the human brain in time to save their lives. Computational convenience matters. Not just entropy.\nWith this in mind, it is natural to ask if that transformation is sudden or progressive, and whether the intermediate layers already have a representation that is immediately useful to a linear classifier. We refer the reader to Figure 2 for a diagram of probes being inserted in the usual deep neural network.\n[Figure 2 diagram: a chain of layers X, H_0, ..., H_K with a probe attached to the input and to every intermediate layer.]\nFigure 2: Probes being added to every layer of a model. These additional probes are not supposed to change the training of the model, so we add a little diode symbol through the arrows to indicate that the gradients will not backpropagate through those connections.\nThe conceptual framework that we propose is one where the intuitive notion of information is equivalent with immediate suitability for a linear classifier (instead of being related to entropy).\nIt is absolutely possible to train the probes simultaneously while training the model itself. This is a good approach considering how long it can take to train the model. However, this creates a potential problem if we optimize the loss of the model more quickly than the loss of the probes. This can present a skewed view of the actual situation that we would have if we trained the probes until convergence before updating the model parameters.
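Since each probe is just multinomial logistic regression on frozen features, it can be written down in a few lines. The following plain-numpy sketch is our own illustration (the function name and hyperparameters are invented for the example), not the authors' code; because the feature matrix H is a detached array, training the probe cannot affect the model, mirroring the tf.stop_gradient behaviour described above. Here y is assumed to be an integer class-label array.

```python
import numpy as np

def train_probe(H, y, num_classes, lr=0.1, steps=500):
    """Fit f(h) = softmax(W h + b) on frozen features H of shape (n, d)
    by gradient descent on the cross-entropy loss; returns the error rate."""
    n, d = H.shape
    W, b = np.zeros((d, num_classes)), np.zeros(num_classes)
    Y = np.eye(num_classes)[y]                        # one-hot targets
    for _ in range(steps):
        logits = H @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)             # softmax probabilities
        G = (P - Y) / n                                # d(mean loss)/d(logits)
        W -= lr * (H.T @ G)                            # gradient step on W
        b -= lr * G.sum(axis=0)                        # gradient step on b
    return float((np.argmax(H @ W + b, axis=1) != y).mean())
```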
If we accept this trade-off, then we can train the probes at the same time as the model.\nIn some situations, the probes might overfit the training set, so we may want to do early stopping on the validation set and report the performance for the probes on the test set. This is what we do in section 3.4 with the simple MNIST convnet.\nWe are still unsure if one of those variations should be preferred in general, and right now they all seem acceptable so long as we interpret the probe measurements properly.\nNote that training those probes represents a convex optimization problem. In practice, this does not guarantee that they are easy to train. However, it is reassuring because it means that probes taken at time θ_n can be used as initialization for probes at time θ_{n+1}.\nWe use cross-entropy as probe loss because all models studied here used cross-entropy. Other alternative losses could be justified in other settings.\nHere we show a hypothetical example in which a model contains a bifurcation with two paths that later recombine. We are interested in knowing whether those two branches are useful, or whether one is potentially redundant or useless.\nFor example, the two different branches might contain convolutional layers with different dimensions. They may have a different number of sublayers, or one might represent a skip connection. We assume that the branches are combined through concatenation of their features, so that nothing is lost.\nFor this hypothetical situation, we indicate the probe prediction errors on the graphical model. The upper path has a prediction error of 0.75, the lower path has 0.60, and their combination has 0.45. Small errors are preferred. Although the upper path has "less information" than the lower path, we can see here that it is not redundant information, because when we concatenate the features of the two branches we get a prediction error of 0.45 < 0.60.\n[Diagram: two branches with probe prediction errors 0.75 and 0.60, concatenated into a layer whose probe has prediction error 0.45.]\nIf the concatenated layer had a prediction error of 0.60 instead of 0.45, then we could declare that the above branch did nothing useful. It may have nonzero weights, but it's still useless.\nNote that we are reporting here the prediction errors, and it might be the case that the loss is indeed lower when we concatenate the two branches, but for some reason it could fail to apply to the prediction error.\nNaturally, this kind of conclusion might be entirely wrong. It might be the case that the branch above contains very meaningful features, and they simply happen to be useless to a linear classifier applied right there. The idea of using linear classification probes to understand the roles of different branches is suggested as a heuristic instead of a hard rule. Moreover, if the probes are not optimized perfectly, the conclusions drawn can be misleading.\nWe start with a toy example to illustrate what kind of plots we expect from probes. We use a 32-layer MLP with 128 hidden units. All the layers are fully-connected and we use LeakyReLU(0.5) as activation function.\nWe will run the same experiment 100 times, with a different toy dataset each time. The goal is to use a data distribution (X, Y) where X \in \mathbb{R}^{128} is drawn N(0, I) and where Y \in \{-1, 1\} is linearly separable (i.e. super easy to classify with a one-layer neural network).
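For concreteness, one run of this toy construction (anticipating the w-based labels spelled out in the next paragraph) can be sketched as follows; the sample count n is our own choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 128, 4096                     # input dimension and an illustrative sample count
w = rng.normal(size=d)               # one fixed direction per experiment
X = rng.normal(size=(n, d))          # X ~ N(0, I_128)
y = (X @ w > 0).astype(int)          # label = sign(x^T w), encoded as {0, 1}
# A probe on the raw inputs classifies this almost perfectly, e.g.
# err = train_probe(X, y, num_classes=2)   # using the sketch above
```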
To do this, we just pick a w \in \mathbb{R}^{128} for each experiment, and let the label y_n be the sign of x^\top w.\nWe initialize this 32-layer MLP using Glorot normal initialization, we do not perform any training on the model, and we add one probe at every layer. We optimize the probes with RMSProp and a sufficiently small learning rate.\nIn Figure 3 we show the prediction error rate for every probe, averaged over the 100 experiments. The graph includes a probe applied directly on the inputs X, where we naturally have an error rate that is essentially zero (to be expected by the way we constructed our data), and which serves as a kind of sanity check. Given that we have only two possible labels, we also show a dotted horizontal line at 0.50, which is essentially the prediction error that we would get by flipping a coin. We can see that the prediction error rate climbs up towards 0.50 as we go deeper in the MLP (with untrained parameters).\nThis illustrates the idea that the input signal is getting mangled by the successive layers, so much that it becomes rather useless by the time we reach the final layer. We checked the mean activation norm of the hidden units at layer 32 to be sure that numerical underflow was not the cause for the degradation. Note that this situation could be avoided by using orthogonal weights.\nOne of the popular explanations for training difficulties in very deep models is that of the exploding/vanishing gradient (Hochreiter 1991; Bengio et al. 1993). Here we would like to offer another complementary explanation, based on the observations from Figure 3. That is, at the beginning of training, the usefulness of layers decays as we go deeper, reaching the point where the deeper layers are utterly useless. The values contained in the last layer are then used in the final softmax classifier and the loss backpropagates the values of the derivatives. Since that derivative is based on garbage activations, the backpropagated quantities are also garbage, which means that the weights are all going to be updated based on garbage. The weights stay bad, and we fail to train the model. The authors like to refer to that phenomenon as garbage forwardprop, garbage backprop, in reference to the popular concept of garbage in, garbage out in computer science."}, {"section_index": "3", "section_name": "3.4 PROBES ON MNIST CONVNET", "section_text": "In this section we run the MNIST convolutional model provided by the tensorflow github repo (tensorflow/models/image/mnist/convolutional.py). We selected that model for reproducibility and to demonstrate how to easily peek into popular models by using probes.\nFigure 3: Toy experiment described in section 3.3, with linearly separable data (two labels), an untrained MLP with 32 layers, and probes at every layer. We report the prediction error for every probe, where 0.50 would be the performance of a coin flip and 0.00 would be ideal. Note that the layer 0 here corresponds to the raw data, and the probes are indeed able to classify it perfectly. As expected, performance degrades when applying random transformations. If many more layers were present, it would be hard to imagine how the final layer (with the model loss) can get any useful
signal to backpropagate.\nWe start by sketching the model in Figure 4. We report the results at the beginning and the end of training in Figure 5. One of the interesting dynamics to be observed there is how useful the first layers are, despite the fact that the model is completely untrained. Random projections can be useful to classify data, and this has been studied by others (Jarrett et al. 2009).\n[Figure 4 diagram: input images -> conv 5x5 (32 filters) -> ReLU -> maxpool 2x2 -> conv 5x5 (64 filters) -> ReLU -> maxpool 2x2 -> matmul -> ReLU -> matmul -> output logits; two convolution layers followed by two fully-connected layers.]\nFigure 4: This graphical model represents the neural network that we are going to use for MNIST. The model could be written in a more compact form, but we represent it this way to expose all the locations where we are going to insert probes. The model itself is simply two convolutional layers followed by two fully-connected layers (one being the final classifier). However, we insert probes on each side of each convolution, activation function, and pooling function. This is a bit overzealous, but the small size of the model makes this relatively easy to do.\n[Figure 5 plots: test prediction error of the probes at input, conv1_preact, conv1_postact, and subsequent layers up to fc1_preact and fc1_postact; (a) after initialization, no training, (b) after training for 10 epochs.]\nFigure 5: We represent here the test prediction error for each probe, at the beginning and at the end of training. This measurement was obtained through early stopping based on a validation set of 10^4 elements. The probes are prevented from overfitting the training data. We can see that, at the beginning of training (on the left), the randomly-initialized layers were still providing useful transformations. The test prediction error goes from 8% to 2% simply using those random features. The biggest impact comes from the first ReLU. At the end of training (on the right), the test prediction error is improving at every layer (with the exception of a minor kink on fc1_preact)."}, {"section_index": "4", "section_name": "3.5 PROBES ON INCEPTION V3", "section_text": "We have performed an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al. 2015; Russakovsky et al. 2015). This is very similar to what is presented in section 3.4, but on a much larger scale. Due to the challenge presented by this experiment, we were not able to do everything that we had hoped. We have chosen to put those results in the appendix section A.2.\nCertain layers of the Inception v3 model have approximately one million features. With 1000 classes, this means that some probes can take even more storage space than the whole model itself. In these cases, one of the creative solutions was to try to use only a random subset of the features. This is discussed in the appendix section A.1.\nHere we investigate two ways to modify a deep model in order to facilitate training. Our goal is not to convince the reader that they should implement these suggestions in their own models. Rather, we want to demonstrate the usefulness of the linear classifier probes as a way to better understand what is happening in their deep networks.\nIn both cases we use a toy model with 128 fully-connected layers with 128 hidden units in each layer.
We train on MNIST, and we use Glorot initialization along with leaky ReLUs.\nWe choose this model because we wanted a pathologically deep model without getting involved in architecture details. The model is pathological in the sense that smaller models can easily be designed to achieve better performance, but also in the sense that the model is so deep that it is very hard to train it with gradient descent methods. From our experiments, the maximal depth where things start to break down was depth 64, hence the choice here of using depth 128.\nIn the first scenario, we add one linear classifier at every 16 layers. These classifiers contribute to the loss minimization. They are not probes. This is very similar to what happens in the famous Inception model where "auxiliary heads" are used (Szegedy et al. 2015). This is illustrated in Figure 6a, and it works nicely. The untrainable model is now made trainable through a judicious use of auxiliary classifier losses. The results are shown in Figure 7.\nIn the second scenario, we look at adding a bridge (a skip connection) between layer 0 and layer 64. This means that the input features to layer 64 are obtained by concatenating the output of layer 63 with the features of layer 0. The idea here is that we might observe that the model would effectively train a submodel of depth 64, using the skip connection, and shift gears later to use the whole depth of 128 layers. This is illustrated in Figure 6b, and the results are shown in Figure 8. It does not work as expected, but the failure of this approach is visualized very nicely with probes and serves as a great example of their usefulness in diagnosing problems with models.\nIn both cases, there are two interesting observations that can be made with probes. We refer readers to https://youtu.be/x8j4ZHcR2FI for the full videos associated to Figures 5, 7 and 8.\nFirstly, at the beginning of training, we can see how the raw data is directly useful to perform linear classification, and how this degrades as more layers are added. In the case of the skip connection in Figure 8, this has the effect of creating two bumps. This is because the layer 64 also has the input data as direct parent, so it can fit a probe to that signal.\n[Figure 6 diagrams: (a) a deep chain of layers with probes above and auxiliary losses added every 16 layers below; (b) the same chain with a skip connection from layer 0 to layer 64.]\nFigure 6: Examples of deep neural network with one probe at every layer (drawn above the graph). We show here the addition of extra components to help training (under the graph, in orange).\nSecondly, the prediction error goes down in all probes during training, but it does so in a way that starts with the parents before it spreads to their descendants. This is even more apparent on the full video (instead of the 3 frames provided here). This is a ripple effect, where the prediction error in Figure 6b is visually spreading like a wave from the left of the plot to the right.\nWe have presented more toy models or simple models instead of larger models such as Inception v3. In the appendix section A.2 we show an experiment on Inception v3, which proved to be more challenging than expected. Future work in this domain would involve performing better experiments on a larger scale than small MNIST convnets, but still within a manageable size so we can properly train all the probes. This would allow us to produce nice videos showing many training steps in sequence.\nWe have received many comments from people who thought about using multi-layer probes. This can be seen as a natural extension of the linear classifier probes.
One downside to this idea is that we lose the convexity property of the probes. It might be worth pursuing in a particular setting, but as of now we feel that it is premature to start using multi-layer probes. This also leads to the convoluted idea of having a regular probe inside a multi-layer probe.\n[Figure 7 plots: train prediction error of the probes at layers 0 through 128, (a) after 0 minibatches, (b) after 500 minibatches, (c) after 5000 minibatches.]\nFigure 7: A pathologically deep model with 128 layers gets an auxiliary loss added at every 16 layers (refer to the simplified sketch in Figure 6a if needed). This loss is added to the usual model loss at the last layer. We fit a probe at every layer to see how well each layer would perform if its values were used as a linear classifier. We plot the train prediction error associated to all the probes at three different steps. Before adding those auxiliary losses, the model could not successfully be trained through usual gradient descent methods, but with the addition of those intermediate losses, the model is "guided" to achieve certain partial objectives. This leads to a successful training of the complete model. The final prediction error is not impressive, but the model was not designed to achieve state-of-the-art performance.\n[Figure 8 plots: train prediction error of the probes at layers 0 through 128, (a) after 0 minibatches, (b) after 500 minibatches, (c) after 2000 minibatches.]\nFigure 8: A pathologically deep model with 128 layers gets a skip connection from layer 0 to layer 64 (refer to the sketch in Figure 6b if needed). We fit a probe at every layer to see how well each layer would perform if its values were used as a linear classifier. We plot the train prediction error associated to all the probes, at three different steps. We can see how the model completely ignores layers 1-63, even when we train it for a long time. The use of probes allows us to diagnose that problem through visual inspection."}, {"section_index": "5", "section_name": "5 CONCLUSION", "section_text": "In this paper we introduced the concept of the linear classifier probe as a conceptual tool to better understand the dynamics inside a neural network and the role played by the individual intermediate layers. We are now able to ask new questions and explore new areas. We have demonstrated how these probes can be used to identify certain problematic behaviors in models that might not be apparent when we traditionally have access to only the prediction loss and error.\nWe hope that the notions presented in this paper can contribute to the understanding of deep neural networks and guide the intuition of researchers that design them."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "Yoshua Bengio is a senior CIFAR Fellow. The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, FQRNT, Calcul Quebec, Compute Canada, the Canada Research Chairs and CIFAR.\nOlga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.\nJason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320-3328, 2014.\nDavid MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003."}, {"section_index": "7", "section_name": "A APPENDIX", "section_text": "When using stochastic gradient descent, we require space to store the gradients, and if we use momentum this ends up taking three times the memory on the GPU. This is even worse for RMSProp. Normally this might be acceptable for a model of reasonable size, but this turns into almost 4GB overhead per probe.\nWe do not have to put a probe at every layer. We can also train probes independently. We can put probe parameters on the CPU instead of the GPU, if necessary. But when the act of training probes increases the complexity of the experiment beyond a certain point, the researcher might decide that they are not worth the trouble.\nWe propose the following solution: for a given probe, use a fixed random subset of features instead of the whole set of features.\nWith certain assumptions about the independence of the features and their shared role in predicting the correct class, we can make certain claims about how few features are actually required to assess the prediction error of a probe. We thank Yaroslav Bulatov for suggesting this approach.\nOne of the challenges to train on the Inception v3 model is that many of the layers have more than 200,000 features. This is even worse in the first convolution layers before the pooling operations, where we have around a million features. With 1000 output classes, a probe using 200,000 features has a weight matrix taking almost 1GB of storage.\nWe ran an experiment in which we used data X ~ N(0, I_D) where D = 100,000 is the number of features. We used K = 1000 classes and we generated the ground truth using a matrix W of shape (D, K). We selected the matrix W by drawing all its individual coefficients from a univariate Gaussian. To obtain the class of a given x, we simply multiply x^\top W and take the argmax over the K components of the result:

x \sim N(0, I_D), \qquad y = \arg\max_{k = 1..K} \left( x^\top W[:, k] \right).

Instead of using D = 100,000 features, we used instead only 1000 features picked at random. We trained a linear classifier on those features and, experimentally, it was relatively easy to achieve a 4% error rate on our first try. With all the features, we could achieve a 0% error rate, so 4% might not look great. We have to keep in mind that we have K = 1000 classes, so random guesses yield an error rate of 99.9%.\nThis can reduce the storage cost for a probe from 1GB down to 10MB. The former is hard to justify and the latter is almost negligible."}, {"section_index": "8", "section_name": "A.2 PROBES ON INCEPTION V3", "section_text": "We are interested in putting linear classifier probes in the popular Inception v3 model, training on the ImageNet dataset. We used the tensorflow implementation available online (tensorflow/models/inception/inception) and ran it on one GPU for 2 weeks.\nAs described in section A.1, one of the challenges is that the number of features can be prohibitively large, and we have to consider taking only a subset of the features.
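As an illustration of this subsampling trick, a probe can be bound once to a fixed random subset of coordinates and then trained only on those. This is a sketch under our own naming (and reusing the train_probe sketch from earlier), not the code used for the experiments:

```python
import numpy as np

def make_subset_probe_projection(num_features, subset_size, seed=0):
    """Fix, once and for all, the random coordinates this probe will see."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(num_features, size=subset_size, replace=False)
    def project(H):
        return H[:, idx]             # the same fixed subset at every training step
    return project

# e.g. keep 1000 out of ~200,000 features:
# project = make_subset_probe_projection(200_000, 1_000)
# err = train_probe(project(H), y, num_classes=1000)
```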
In this particular experiment, we have had the most success by taking 1000 random features for each probe. This gives certain layers an unfair advantage if they start with 4000 features and we kept 1000, whereas in other cases the probe insertion point has 426,320 features and we keep 1000. There was no simple "fair" solution. That being said, 13 out of the 17 probes have more than 100,000 features, and 11 of those probes have more than 200,000 features, so things were relatively comparable.\nWe put linear classifier probes at certain strategic layers. We represent this using boxes in the following Figure 9. The prediction error of the probe given by the last layer of each box is illustrated by coloring the box. Red is bad (high prediction error) and green/blue is good (low prediction error).\nWe would have liked to have a video to show the evolution of this during training, but this experiment had to be scaled back due to the large computational demands. We show here the prediction errors at three moments of training. These correspond roughly to the beginning of training, then after a few days, and finally after a week.\n[Figure 9 diagram: the Inception v3 architecture drawn four times, with probe boxes colored by training prediction error (scale 0.0 to 1.0) at minibatches 001515, 050389, 100876 and 308230; each drawing shows the main head and the auxiliary head.]\nFigure 9: Inserting a probe at multiple moments during training the Inception v3 model on the ImageNet dataset. We represent here the prediction error evaluated at a random subset of 1000 features. As expected, at first all the probes have a 100% prediction error, but as training progresses we see that the model is getting better. Note that there are 1000 classes, so a prediction error of 50% is much better than a random guess. The auxiliary head, shown under the model, was observed to have a prediction error that was slightly better than the main head. This is not necessarily a condition that will hold at the end of training, but merely an observation."}]
H1oRQDqlg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Modern machine learning increasingly relies on highly complex probabilistic models to reason abou uncertainty. A key computational challenge is to develop efficient inference techniques to approx imate. or draw samples from complex distributions. Currently. most inference methods. includin. MCMC and variational inference, are hand-designed by researchers or domain experts. This makes it difficult to fully optimize the choice of different methods and their parameters, and exploit the structures in the problems of interest in an automatic way. The hand-designed algorithm can also b inefficient when it requires to make fast inference repeatedly on a large number of different distri butions with similar structures. This happens, for example, when we need to reason about a numbe of observed datasets in settings like online learning, or need fast inference as inner loops for othe algorithms such as maximum likelihood training. Therefore, it is highly desirable to develop more intelligent probabilistic inference systems that can adaptively improve its own performance to fully the optimize computational efficiency, and generalize to new tasks with similar structures.\nSpecifically, denote by p(x) a probability density of interest specified up to the normalization con-. stant, which we want to draw sample from, or marginalize to estimate its normalization constant. We want to study the following problem:.\nProblem 1. Given a distribution with density p(x) and a function f(n; &) with parameter n and random input &, for which we only have assess to draws of the random input (without knowing its true distribution qo), and the output values of f(n; &) and its derivative dn f(n; &) given n and . We want to find an optimal parameter n so that the density of the random output variable x = f(n; ) with & ~ qo closely matches the target density p(x).\nBecause we have no assumption on the structure of f(n; &) and the distribution of random input we can not directly calculate the actual distribution of the output random variable x = f(n; );. this makes it difficult to solve Problem[1using the traditional variational inference (VI) methods. Recall that traditional VI approximates p(x) using simple proposal distributions qn(x) indexed by. parameter n, and finds the optimal n by minimizing KL divergence KL(qn | p) = Eqn [log(qn/p)] which requires to calculate the density qn(x) or its derivative that is not computable by our assump-."}, {"section_index": "1", "section_name": "tion (even when the Monte Carlo gradient estimation and the reparametrization trick (Kingma & Welling2013) are applied)", "section_text": "In fact, it is this requirement of calculating qn(x) that has been the major constraint for the de signing of state-of-the-art variational inference methods with rich approximation families; the re cent successful algorithms (e.g.,Rezende & Mohamed]2015b] Tran et al.]2015] Ranganath et al. 2015, to name only a few) have to handcraft special variational families to ensure the computationa tractability of qn(x) and simultaneously obtain high approximation accuracy, which require substan tial mathematical insights and research effects. Methods that do not require to explicitly calculate In(x) can significantly simplify the design and applications of VI methods, allowing practical users to focus more on choosing proposals that work best with their specific tasks. 
We will use the term wild variational inference to refer to new variants of variational methods that require no tractabil ity qn(x), to distinguish with the black-box variational inference (Ranganath et al.]2014) which refers to methods that work for generic target distributions p(x) without significant model-by-mode consideration (but still require to calculate the proposal density qn(x)).\nA similar problem also appears in importance sampling (IS), where it requires to calculate the IS pro posal density q(x) in order to calculate the importance weight w(x) = p(x)/q(x). However, there exist methods that use no explicit information of q(x), which, seemingly counter-intuitively, give better asymptotic variance or converge rates than the typical IS that uses the proposal information (e.g.,Liu & Lee2016fBriol et al.| 2015] Henmi et al.2007Delyon & Portier2014). Discussions on this phenomenon dates back to|O'Hagan(1987), who argued that \"Monte Carlo (that uses the proposal information) is fundamentally unsound\"' for violating the Likelihood Principle, and devel. oped Bayesian Monte Carlo (O'Hagan 1991) as an example that uses no information on q(x), yet gives better convergence rate than the typical Monte Carlo O(n-1/2) rate (Briol et al.]2015). De- spite the substantial difference between IS and VI, these results intuitively suggest the possibility ol developing efficient variational inference without calculating q(x) explicitly.\nIn this work, we propose a simple algorithm for Problem|1|by iteratively adjusting the network pa rameter n to make its output random variable changes along a Stein variational gradient directior (SVGD) (Liu & Wang2016) that optimally decreases its KL divergence with the target distribu tion. Critically, the SVGD gradient includes a repulsive term to ensure that the generated sample have the right amount of variability that matches p(x). In this way, we \"amortize SVGD\"' using a neural network, which makes it possible for our method to adaptively improve its own efficiency by leveraging fast experience, especially in cases when it needs to perform fast inference repeatedly or a large number of similar tasks. As an application, we use our method to amortize the MLE training of deep energy models, where a neural sampler is adaptively trained to approximate the likelihooc function. Our method, which we call SteinGAN, mimics an adversarial game between the energy model and the neural sampler, and obtains realistic-looking images competitive with the state-of-the art results produced by generative adversarial networks (GAN) (Goodfellow et al.]2014) Radforc et al.|2015).\nRelated Work The idea of amortized inference (Gershman & Goodman]2014) has been recently. applied in various domains of probabilistic reasoning, including both amortized variational infer-. ence (e.g.,Kingma & Welling 2013} Rezende & Mohamed2015a), and data-driven proposals for (sequential) Monte Carlo methods (e.g.,Paige & Wood2016), to name only a few. Most of these. methods, however, require to explicitly calculate q(x) (or its gradient). One exception is a very. recent paper (Ranganath et al.]2016) that avoids calculating q(x) using an idea related to Stein. discrepancy [Gorham & Mackey2015} Liu et al.]2016} Oates et al.]2014} Chwialkowski et al. 2016). There is also a raising interest recently on a similar problem of \"learning to optimize\"' (e.g.,. Andrychowicz et al.[ 2016 Daniel et al.]2016 Li & Malik,2016), which is technically easier than the more general problem of \"learning to sample'. 
In fact, we show that our algorithm reduces to. \"learning to optimize\" when only one particle is used in SVGD.\nGenerative adversarial network (GAN) and its variants have recently gained remarkable success on generating realistic-looking images (Goodfellow et al 2014 Salimans et al. 2016] Radford et al.[ 2015 Li et al.]2015] Dziugaite et al.f2015] Nowozin et al.[2016).All these methods are set up to train latent variable models (the generator) under the assistant of the discriminator. Our SteinGAN instead performs traditional MLE training for a deep energy model, with the help of a neural sampler that learns to draw samples from the energy model to approximate the likelihood\nfunction; this admits an adversarial interpretation: we can view the neural sampler as a generator tha attends to fool the deep energy model, which in turn serves as a discriminator that distinguishes th real samples and the simulated samples given by the neural sampler. This idea of training MLE witl neural samplers was first discussed by Kim & Bengio(2016); one of the key differences is that th neural sampler in Kim & Bengio (2016) is trained with the help of a heuristic diversity regularize based on batch normalization, while SVGD enforces the diversity in a more principled way. Anothe method byZhao et al.[(2016) also trains an energy score to distinguish real and simulated samples but within a non-probabilistic framework (see Section|5|for more discussion). Other more traditiona approaches for training energy-based models (e.g., Ngiam et al.]2011} Xie et al.]2016) are ofter based on variants of MCMC-MLE or contrastive divergence (Geyer1991||Hinton2002|Tieleman 2008), and have difficulty generating realistic-looking images from scratch.\nStein variational gradient descent (SVGD) (Liu & Wang2016) is a general purpose Bayesian infe ence algorithm motivated by Stein's method (Stein1972 Barbour & Chen]2005) and kernelize Stein discrepancy (Liu et al.]2016} Chwialkowski et al.2016Oates et al. 2014J. It uses an effi cient deterministic gradient-based update to iteratively evolve a set of particles {xi}-1 to minimiz the KL divergence with the target distribution. SVGD has a simple form that reduces to the typica gradient descent for maximizing logp when using only one particle (n = 1), and hence can b easily combined with the successful tricks for gradient optimization, including stochastic gradien adaptive learning rates (such as adagrad), and momentum.\nwhere q[e] denotes the density of the updated particle x' = x + e(x) when the density of the original particle x is q, and F is the set of perturbation directions that we optimize over. We choose F to be the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS) Hd = H : . . H with each H associating with a positive definite kernel k(x, x' ); note that H is dense in the space of continuous functions with universal kernels such as the Gaussian RBF kernel.\nCritically, the gradient of KL divergence in (2) equals a simple linear functional of $, allowing us to obtain a closed form solution for the optimal .Liu & Wang[(2016) showed that\nd KL(q[eo] |lp)|c=o = Ex~q[7p$(x)] de\nEp[Tp9]=Ep[Vxlogp'$+ Vx:$] = 0\nD(q||p) def max{Ex~q[Tp$(x)] s.t. I9lHd 1} $EHd\n$*(x) x Ex~q[Vxlogp(x)k(x,x) + Vxk(x,x)]\nTo give a quick overview of the main idea of SVGD, let p(x) be a positive density function on Rd which we want to approximate with a set of particles {xi}-1. SVGD initializes the particles by. 
sampling from some simple distribution q_0, and updates the particles iteratively by

x_i \leftarrow x_i + \epsilon \phi^*(x_i), \qquad \forall i = 1, \ldots, n, \tag{1}

\phi^* = \arg\max_{\phi \in \mathcal{F}} \left\{ -\frac{d}{d\epsilon} \mathrm{KL}(q_{[\epsilon\phi]} \,\|\, p) \Big|_{\epsilon = 0} \right\}, \tag{2}

with T_p \phi(x) = \nabla_x \log p(x)^\top \phi(x) + \nabla_x \cdot \phi(x), where D(q \,\|\, p) is the kernelized Stein discrepancy defined in Liu et al. (2016), which equals zero if and only if p = q under mild regularity conditions. Importantly, the optimal solution of (6) yields a closed form.\nIt is easy to see from (7) that \Delta x_i reduces to the typical gradient \nabla_x \log p(x_i) when there is only a single particle (n = 1), since \nabla_x k(x, x_i) = 0 when x = x_i; in this case SVGD reduces to the standard gradient ascent for maximizing \log p(x) (i.e., maximum a posteriori (MAP)).\nSVGD and other particle-based methods become inefficient when we need to repeatedly infer a large number of different target distributions for multiple tasks, including online learning or inner loops of other algorithms, because they can not improve based on the experience from the past tasks, and may require a large memory to store a large number of particles. We propose to "amortize SVGD" by training a neural network f(η; ξ) to mimic the SVGD dynamics, yielding a solution for Problem 1.\nOne straightforward way to achieve this is to run SVGD to convergence and train f(η; ξ) to fit the SVGD results. This, however, requires to run many epochs of fully converged SVGD and can be slow in practice. We instead propose an incremental approach in which η is iteratively adjusted so that the network outputs x = f(η; ξ) change along the Stein variational gradient direction in (7) in order to decrease the KL divergence between the target and approximation distribution.\nTo be specific, denote by η^t the estimated parameter at the t-th iteration of our method; each iteration of our method draws a batch of random inputs \{\xi_i\}_{i=1}^{m} and calculates their corresponding outputs x_i = f(η^t; \xi_i); here m is a mini-batch size (e.g., m = 100). The Stein variational gradient \Delta x_i in (7) would then ensure that x_i' = x_i + \epsilon \Delta x_i forms a better approximation of the target distribution p. Therefore, we should adjust η to make its output match \{x_i'\}, that is, we want to update η by

\eta^{t+1} \leftarrow \arg\min_{\eta} \sum_{i=1}^{m} \| f(\eta; \xi_i) - x_i' \|_2^2, \qquad \text{where } x_i' = x_i + \epsilon \Delta x_i. \tag{8}

By approximating the expectation under q with the empirical average of the current particles \{x_i\}_{i=1}^{n}, SVGD admits a simple form of update:

x_i \leftarrow x_i + \epsilon \Delta x_i, \qquad \text{where } \Delta x_i = \hat{\mathbb{E}}_{x \in \{x_j\}_{j=1}^{n}} \left[ \nabla_x \log p(x) k(x, x_i) + \nabla_x k(x, x_i) \right].

Linearizing f around η^t gives \| f(\eta; \xi_i) - x_i' \|_2^2 \approx \| \partial_\eta f(\eta^t; \xi_i)(\eta - \eta^t) - \epsilon \Delta x_i \|_2^2, whose minimization yields

\eta^{t+1} \leftarrow \eta^t + \epsilon \Delta\eta^t, \qquad \text{where } \Delta\eta^t = \arg\min_{\delta} \sum_{i=1}^{m} \| \partial_\eta f(\eta^t; \xi_i)\,\delta - \Delta x_i \|_2^2. \tag{9}

Update (9) can still be computationally expensive because of the matrix inversion. We can derive a further approximation by performing only one step of gradient descent of (8) (or (9)), which gives

\eta^{t+1} \leftarrow \eta^t + \epsilon \sum_{i=1}^{m} \partial_\eta f(\eta^t; \xi_i) \Delta x_i. \tag{10}

Although update (10) is derived as an approximation of (8)-(9), it is computationally faster and we find it works very effectively in practice; this is because when \epsilon is small, one step of gradient update can be sufficiently close to the optimum.\nUpdate (10) also has a simple and intuitive form: (10) can be thought of as a "chain rule" that backpropagates the Stein variational gradient to the network parameter η. This can be justified by considering the special case when we use only a single particle (n = 1), in which case \Delta x_i in (7) reduces to the typical gradient \nabla_x \log p(x_i) of \log p(x), and update (10) reduces to the typical gradient ascent for maximizing

\mathbb{E}_{\xi}\left[ \log p(f(\eta; \xi)) \right], \tag{11}

in which case f(η; ξ) is trained to maximize \log p(x) (that is, learning to optimize), instead of learning to draw samples from p, for which it is crucial to use the Stein variational gradient \Delta x_i to diversify the network outputs.\nUpdate (10) also has a close connection with the typical variational inference with the reparameterization trick (Kingma & Welling 2013). Let q_\eta(x) be the density function of x = f(η; ξ), ξ ~ q_0. Using the reparameterization trick, the gradient of KL(q_\eta \,\|\, p) w.r.t. η can be shown to be

\nabla_\eta \mathrm{KL}(q_\eta \,\|\, p) = -\mathbb{E}_{\xi \sim q_0}\left[ \partial_\eta f(\eta; \xi) \left( \nabla_x \log p(x) - \nabla_x \log q_\eta(x) \right) \right].

With \{\xi_i\} i.i.d. drawn from q_0 and x_i = f(η; \xi_i), \forall i, the standard stochastic gradient descent for minimizing the KL divergence is

\eta^{t+1} \leftarrow \eta^t + \epsilon \sum_i \partial_\eta f(\eta^t; \xi_i)\, \tilde{\nabla} x_i, \qquad \text{where } \tilde{\nabla} x_i = \nabla_x \log p(x_i) - \nabla_x \log q_\eta(x_i).

This is similar with (10), but replaces the Stein gradient \Delta x_i defined in (7) with \tilde{\nabla} x_i. The advantage of using \Delta x_i is that it does not require to explicitly calculate q_\eta, and hence admits a solution to Problem 1 in which q_\eta is not computable for complex network f(η; ξ) and unknown input distribution q_0. Further insights can be obtained by noting that

\Delta x_i \propto \mathbb{E}_{x \sim q}\left[ \nabla_x \log p(x) k(x, x_i) + \nabla_x k(x, x_i) \right]
= \mathbb{E}_{x \sim q}\left[ \left( \nabla_x \log p(x) - \nabla_x \log q(x) \right) k(x, x_i) \right] = \mathbb{E}_{x \sim q}\left[ \tilde{\nabla} x \, k(x, x_i) \right], \tag{12}

where (12) is obtained by using Stein's identity (5). Therefore, \Delta x_i can be treated as a kernel smoothed version of \tilde{\nabla} x_i."}, {"section_index": "2", "section_name": "4 AMORTIZED MLE FOR GENERATIVE ADVERSARIAL TRAINING", "section_text": "Our method allows us to design efficient approximate sampling methods adaptively and automatically, and enables a host of novel applications. In this paper, we apply it in an amortized MLE method for training deep generative models.\nMaximum likelihood estimation (MLE) provides a fundamental approach for learning probabilistic models from data, but can be computationally prohibitive on distributions for which drawing samples or computing likelihood is intractable due to the normalization constant. Traditional methods such as MCMC-MLE use hand-designed methods (e.g., MCMC) to approximate the intractable likelihood function but do not work efficiently in practice. We propose to adaptively train a generative neural network to draw samples from the distribution during MLE training, which not only provides a computational advantage, but also allows us to generate realistic-looking images competitive with, or better than, the state-of-the-art generative adversarial networks (GAN) (Goodfellow et al. 2014; Radford et al. 2015) (see Figures 1-5).\nAlgorithm 2 Amortized MLE as Generative Adversarial Learning\nTo be specific, denote by \{x_{i,\mathrm{obs}}\} a set of observed data. We consider the maximum likelihood training of energy-based models of form

p(x \,|\, \theta) = \exp\left( -\phi(x, \theta) - \Phi(\theta) \right), \qquad \Phi(\theta) = \log \int \exp(-\phi(x, \theta))\, dx,

where \phi(x; \theta) is an energy function for x indexed by parameter \theta and \Phi(\theta) is the log-normalization constant. The log-likelihood function of \theta is

L(\theta) = \frac{1}{n} \sum_{i=1}^{n} \log p(x_{i,\mathrm{obs}} \,|\, \theta),

whose gradient is

\nabla_\theta L(\theta) = -\hat{\mathbb{E}}_{\mathrm{obs}}[\partial_\theta \phi(x; \theta)] + \mathbb{E}_{\theta}[\partial_\theta \phi(x; \theta)],

where \hat{\mathbb{E}}_{\mathrm{obs}}[\cdot] and \mathbb{E}_{\theta}[\cdot] denote the empirical average on the observed data \{x_{i,\mathrm{obs}}\} and the expectation under model p(x \,|\, \theta), respectively. The key computational difficulty is to approximate the model expectation \mathbb{E}_{\theta}[\cdot]. To address this problem, we use a generative neural network x = f(η; ξ) trained by Algorithm 1 to approximately sample from p(x \,|\, \theta), yielding a gradient update for \theta of form

\theta \leftarrow \theta + \epsilon \hat{\nabla}_\theta L(\theta), \qquad \hat{\nabla}_\theta L(\theta) = -\hat{\mathbb{E}}_{\mathrm{obs}}[\partial_\theta \phi(x; \theta)] + \hat{\mathbb{E}}_{\eta}[\partial_\theta \phi(x; \theta)], \tag{16}

where \hat{\mathbb{E}}_{\eta} denotes the empirical average on \{x_i\} where x_i = f(η; \xi_i), \{\xi_i\} \sim q_0. As \theta is updated by gradient ascent, η is successively updated via Algorithm 1 to follow p(x \,|\, \theta).
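To make the interplay of the two updates concrete, here is a schematic PyTorch sketch of one SteinGAN iteration as we read Algorithms 1-2: an amortized SVGD step on the sampler parameters via a surrogate loss whose gradient matches update (10), followed by the approximate MLE step (16) on the energy model. Names such as generator and energy are invented for the sketch, it uses a plain RBF kernel on flat particles for brevity (the paper's kernel acts on autoencoder features, described in the next section), and it is not the released DartML code.

```python
import torch

def svgd_direction(x, energy):
    """Stein variational gradient of Eq. (7) for particles x of shape (n, d),
    with p(x) proportional to exp(-energy(x)), so grad log p = -grad energy."""
    x = x.detach().requires_grad_(True)
    score = -torch.autograd.grad(energy(x).sum(), x)[0]
    xd = x.detach()
    d2 = torch.cdist(xd, xd) ** 2
    h = d2.median().clamp_min(1e-8)                  # median heuristic bandwidth
    K = torch.exp(-d2 / h)                           # RBF kernel matrix
    # phi_i = (1/n) sum_j [ K_ij * score_j + (2/h)(x_i - x_j) K_ij ]
    phi = K @ score + (2.0 / h) * (xd * K.sum(1, keepdim=True) - K @ xd)
    return phi / xd.shape[0]

def steingan_step(generator, energy, x_obs, opt_g, opt_e, noise_dim=100):
    xi = torch.rand(x_obs.shape[0], noise_dim) * 2 - 1   # xi ~ Uniform([-1, 1])
    x_sim = generator(xi)
    # (i) amortized SVGD update of eta: backpropagating -sum(x * phi) through
    # the generator reproduces the chain rule of update (10)
    phi = svgd_direction(x_sim, energy)
    opt_g.zero_grad()
    (-(x_sim * phi.detach()).sum()).backward()
    opt_g.step()
    # (ii) approximate MLE update of theta, Eq. (16): lower the energy of the
    # real data batch and raise the energy of the simulated batch
    opt_e.zero_grad()
    (energy(x_obs).mean() - energy(x_sim.detach()).mean()).backward()
    opt_e.step()
```

The .detach() on phi treats \Delta x_i as a constant target, so minimizing the surrogate loss moves η exactly along \sum_i \partial_\eta f(\eta; \xi_i) \Delta x_i, as in update (10).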
See Algorithm 2.\nWe call our method SteinGAN, because it can be intuitively interpreted as an adversarial game between the generative network f(η; ξ) and the energy model p(x \,|\, \theta), which serves as a discriminator: The MLE gradient update of p(x \,|\, \theta) effectively decreases the energy of the training data and increases the energy of the simulated data from f(η; ξ), while the SVGD update of f(η; ξ) decreases the energy of the simulated data to fit better with p(x \,|\, \theta). Compared with the traditional methods based on MCMC-MLE or contrastive divergence, we amortize the sampler as we train, which gives much faster speed and simultaneously provides a high quality generative neural network that can generate realistic-looking images; see Kim & Bengio (2016) for a similar idea and discussions."}, {"section_index": "3", "section_name": "EMPIRICAL RESULTS", "section_text": "We evaluated our SteinGAN on four datasets, MNIST, CIFAR-10, CelebA (Liu et al. 2015), and Large-scale Scene Understanding (LSUN) (Yu et al. 2015), on which we find our method tends to generate realistic-looking images competitive with, sometimes better than DCGAN (Radford et al. 2015) (see Figure 2 - Figure 3). Our code is available at https://github.com/DartML/SteinGAN.\nModel Setup In order to generate realistic-looking images, we define our energy model based on an autoencoder:

p(x \,|\, \theta) \propto \exp\left( -\| x - D(E(x; \theta); \theta) \| \right), \tag{14}

We assume f(η; ξ) to be a neural network whose input ξ is a 100-dimensional random vector drawn by Uniform([-1, 1]). The positive definite kernel in SVGD is defined by the RBF kernel on the hidden representation obtained by the autoencoder in (14), that is,

k(x, x') = \exp\left( -\frac{1}{h^2} \| E(x; \theta) - E(x'; \theta) \|^2 \right).

As it is discussed in Section 3, the kernel provides a repulsive force to produce an amount of variability required for generating samples from p(x). This is similar to the heuristic repelling regularizer in Zhao et al. (2016) and the batch normalization based regularizer in Kim & Bengio (2016), but is derived in a more principled way. We take the bandwidth to be h = 0.5 med, where med is the median of the pairwise distances between E(x) on the images simulated by f(η; ξ). This makes the kernel change adaptively based on both \theta (through E(x; \theta)) and η (through bandwidth h).\nSome datasets include both images x and their associated discrete labels y. In these cases, we train a joint energy model on (x, y) to capture both the inner structure of the images and its predictive relation with the label, allowing us to simulate images with a control on which category it belongs to. Our joint energy model is defined to be

p(x, y \,|\, \theta) \propto \exp\left\{ -\| x - D(E(x; \theta); \theta) \| - \max[m, \sigma(y, E(x; \theta))] \right\}, \tag{15}

where \sigma(\cdot, \cdot) is the cross entropy loss function of a fully connected output layer. In this case, our neural sampler first draws a label y randomly according to the empirical counts in the dataset, and then passes y into a neural network together with a 100-dimensional random vector ξ to generate image x. This allows us to generate images for particular categories by controlling the value of input y.\nThe MLE objective is regularized to be

\max_\theta \left\{ \log p(x \,|\, \theta) + \gamma \Phi(\theta) \right\},

where \Phi(\theta) is the log-partition function; note that \exp(\gamma \Phi(\theta)) is a conjugate prior of p(x \,|\, \theta).\nWe initialize the weights of both the generator and discriminator from Gaussian distribution N(0, 0.02), and train them using Adam (Kingma & Ba 2014) with a learning rate of 0.001 for the generator and 0.0001 for the energy model (the discriminator).
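Returning to this kernel, a small sketch (our notation; encode stands for E(·; θ), and we assume the 1/h^2 reading of the exponent above) makes the adaptive bandwidth rule h = 0.5 med explicit:

```python
import torch

def autoencoder_kernel(x, encode):
    """RBF kernel on encoder features, with bandwidth h = 0.5 * median of the
    pairwise distances between the E(x) of the simulated images."""
    z = encode(x)                                  # hidden representation E(x; theta)
    dist = torch.cdist(z, z)                       # ||E(x) - E(x')|| for all pairs
    h = (0.5 * dist.median()).clamp_min(1e-8)
    return torch.exp(-(dist ** 2) / (h ** 2))
```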
In order to keep the generator and discriminator approximately aligned during training, we speed up the MLE update (16) of the discriminator (by increasing its learning rate to 0.0005) when the energy of the real data batch is larger than the energy of the simulated images, while slowing it down (by freezing the MLE update of \theta in (16)) if the magnitude of the energy difference between the real images and the simulated images goes above a threshold of 0.5. We used the bag of architecture guidelines for stable training suggested in DCGAN (Radford et al. 2015).\nDiscussion The MNIST dataset has a training set of 60,000 examples. Both DCGAN and our model produce high quality images, both visually indistinguishable from real images; see Figure 1.\nFigure 3 and Figure 4 visualize the results on CelebA (with more than 200k face images) and LSUN (with nearly 3M bedroom images), respectively. We cropped and resized both dataset images into 64 x 64.\nCIFAR-10 is very diverse, and with only 50,000 training examples. Figure 2 shows examples of simulated images by DCGAN and SteinGAN generated conditional on each category, which look equally well visually. We also provide quantitative evaluation using a recently proposed inception score (Salimans et al. 2016), as well as the classification accuracy when training ResNet using 50,000 simulated images as train sets, evaluated on a separate held-out testing set never seen by the GAN models. Besides DCGAN and SteinGAN, we also evaluate another simple baseline obtained by subsampling 500 real images from the training set and duplicating them 100 times. We observe that these scores capture rather different perspectives of image generation: The inception score favors images that look realistic individually and have uniformly distributed labels; as a result, the inception score of the duplicated 500 images is almost as high as the real training set. We find that the inception score of SteinGAN is comparable, or slightly lower than that of DCGAN. On the other hand, the classification accuracy measures the amount of information captured in the simulated image sets; we find that SteinGAN achieves the highest classification accuracy, suggesting that it captures more information in the training set.\n[Figure 1 panels: grids of generated MNIST digits, DCGAN (left) and SteinGAN (right).]\nFigure 1: MNIST images generated by DCGAN and our SteinGAN. We use the joint model in (15) to allow us to generate images for each digit. We set m = 0.2.\n[Figure 2 upper panels: images simulated by DCGAN and SteinGAN conditional on each CIFAR-10 category (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck).]

Inception Score
                            Real Training Set   500 Duplicate   DCGAN   SteinGAN
Model Trained on ImageNet   11.237              11.100          6.581   6.351
Model Trained on CIFAR-10   9.848               9.807           7.368   7.428

Testing Accuracy
Real Training Set   500 Duplicate   DCGAN     SteinGAN
92.58 %             44.96 %         44.78 %   63.81 %

Figure 2: Results on CIFAR-10. "500 Duplicate" denotes 500 images randomly subsampled from the training set, each duplicated 100 times. Upper: images simulated by DCGAN and SteinGAN (based on joint model (15)) conditional on each category. Middle: inception scores for samples generated by various methods (all with 50,000 images) on inception models trained on ImageNet and CIFAR-10, respectively. Lower: testing accuracy on real testing set when using 50,000 simulated images to train ResNets for classification. SteinGAN achieves higher testing accuracy than DCGAN.
We set m = 1 and y = 0.8.\nWe propose a new method to train neural samplers for given distributions, together with a new SteinGAN method for generative adversarial training. Future directions involve more applications and theoretical understandings for training neural samplers.\nDCGAN SteinGAN\nFigure 3: Results on CelebA. Upper: images generated by DCGAN and our SteinGAN. Lower: images generated by SteinGAN when performing a random walk + 0.01 Uniform(-1, 1 on the random input ; we can see that a man with glasses and black hair gradually changes to a woman with blonde hair. See Figure|5|for more examples.\nDCGAN SteinGAN\nFigure 4: Images generated by DCGAN and our SteinGAN on LSUN"}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Kacper Chwialkowski, Heiko Strathmann, and Arthur Gretton. A kernel test of goodness of fit. In Proceeding of the International Conference on Machine Learning (ICML), 2016.\nSamuel J Gershman and Noah D Goodman. Amortized inference in probabilistic reasoning. In Proceedings o the 36th Annual Conference of the Cognitive Science Society, 2014..\nCharles J. Geyer. Markov chain Monte Carlo maximum likelihood. In Computing Science and Statistics: Proc 23rd Symp. Interface, pp. 156-163, 1991.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaro Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Process ing Systems, pp. 2672-2680, 2014.\nJack Gorham and Lester Mackey. Measuring sample quality with Stein's method. In Advances in Neura Information Processing Systems (NIPS), pp. 226-234, 2015.\nMasayuki Henmi, Ryo Yoshida, and Shinto Eguchi. Importance sampling via the estimated sample Biometrika, 94(4):985-991, 2007\nTaesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation arXiv preprint arXiv:1606.03439, 2016\nKe Li and Jitendra Malik. Learning to optimize. arXiv. reprint arXiv:1606.01885, 2016\nQiang Liu and Jason D. Lee. Black-box importance sampling. https://arxiv.org/abs/1610.05247, 2016\nZiwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceed ings of International Conference on Computer Vision (ICCV), 2015..\nChristian Daniel, Jonathan Taylor, and Sebastian Nowozin. Learning step size controllers for robust neura network training. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.\nGeoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation 14(8):1771-1800, 2002\nDiederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2013..\nujia Li, Kevin Swersky, and Rich Zemel. Generative moment matching networks. In Proceedings of the International Conference on Machine Learning (ICML), 2015.\nJiquan Ngiam, Zhenghao Chen, Pang W Koh, and Andrew Y Ng. Learning deep energy models. In Proceeding. of the International Conference on Machine Learning (ICML), pp. 1105-1112, 2011.\nChris J Oates, Mark Girolami, and Nicolas Chopin. Control functionals for Monte Carlo integration. Journa of the Royal Statistical Society, Series B, 2014.\nAnthony O'Hagan. Monte Carlo is fundamentally unsound. Journal of the Royal Statistical Society. Series (The Statistician), 36(2/3):247-249, 1987\nAnthony O'Hagan. Bayes-hermite quadrature. Journal of statistical planning and inference, 29(3):245-260 1991.\nBrooks Paige and Frank Wood. 
Inference networks for sequential Monte Carlo in graphical models. arXiv preprint arXiv:1602.06701, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

R. Ranganath, J. Altosaar, D. Tran, and D.M. Blei. Operator variational inference. 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064-1071. ACM, 2008.

Jianwen Xie, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. A theory of generative convnet. arXiv preprint arXiv:1602.03264, 2016.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.

Rajesh Ranganath, Dustin Tran, and David M Blei. Hierarchical variational models. arXiv preprint arXiv:1511.02386, 2015.

Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the International Conference on Machine Learning (ICML), 2015a.

Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015b.

Figure 5: More images generated by SteinGAN on CelebA"}]
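As an implementation note on the generator-discriminator balancing rule described in the training details above, a minimal self-contained Python sketch follows. The toy quadratic energy, the toy contrastive update, and the base learning rate of 1e-4 are illustrative assumptions; only the sped-up rate of 0.0005 and the freezing threshold of 0.5 come from the text.

    # A minimal sketch of the energy-balancing heuristic for the discriminator step.
    import numpy as np

    BASE_LR, FAST_LR, GAP_THRESHOLD = 1e-4, 5e-4, 0.5

    def energy(theta, batch):                     # toy stand-in for E_theta(x)
        return 0.5 * np.sum((batch - theta) ** 2, axis=1)

    def discriminator_step(theta, real, sim):
        e_real = energy(theta, real).mean()
        e_sim = energy(theta, sim).mean()
        if abs(e_real - e_sim) > GAP_THRESHOLD:
            return theta                          # freeze the MLE update of theta
        lr = FAST_LR if e_real > e_sim else BASE_LR   # speed up when real energy is larger
        # toy contrastive MLE gradient: E_data[dE/dtheta] - E_model[dE/dtheta]
        grad = (theta - real.mean(0)) - (theta - sim.mean(0))
        return theta - lr * grad

    rng = np.random.default_rng(0)
    theta = rng.normal(size=2)
    theta = discriminator_step(theta, rng.normal(size=(16, 2)), rng.normal(1.0, 1.0, (16, 2)))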
HkpbnH9lx | [{"section_index": "0", "section_name": "DENSITY ESTIMATION USING 1 REAL NVP", "section_text": "Laurent Dinh\nMontreal Institute for Learning Algorithms University of Montreal. Montreal. OC H3T1J4.\nGoogle Brain"}, {"section_index": "1", "section_name": "1 Introduction", "section_text": "The domain of representation learning has undergone tremendous advances due to improved super. vised learning techniques. However, unsupervised learning has the potential to leverage large pools of. unlabeled data, and extend these advances to modalities that are otherwise impractical or impossible\nOne principled approach to unsupervised learning is generative probabilistic modeling. Not only dc generative probabilistic models have the ability to create novel content, they also have a wide range of reconstruction related applications including inpainting [61]46] [59], denoising [3], colorization [71], and super-resolution [9].\nThis model can perform efficient and exact inference, sampling and log-density estimation of data points. Moreover, the architecture presented in this paper enables exact and efficient reconstruction of input images from the hierarchical features extracted by this model."}, {"section_index": "2", "section_name": "2 Related work", "section_text": "Substantial work on probabilistic generative models has focused on training models using maximun likelihood. One class of maximum likelihood models are those described by probabilistic undirectec graphs, such as Restricted Boltzmann Machines [58] and Deep Boltzmann Machines [53]. These models are trained by taking advantage of the conditional independence property of their bipartite structure to allow efficient exact or approximate posterior inference on latent variables. However because of the intractability of the associated marginal distribution over latent variables, theii training, evaluation, and sampling procedures necessitate the use of approximations like Mean Field inference and Markov Chain Monte Carlo, whose convergence time for such complex models\nGoogle Brain"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Jnsupervised learning of probabilistic models is a central yet challenging problem n machine learning. Specifically, designing models with tractable learning, sam ling, inference and evaluation is crucial in solving this task. We extend the space f such models using real-valued non-volume preserving (real NVP) transforma ions, a set of powerful, stably invertible, and learnable transformations, resulting n an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an nterpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable nanipulations.\nAs data of interest are generally high-dimensional and highly structured, the challenge in this domain is building models that are powerful enough to capture its complexity yet still trainable. We address this challenge by introducing real-valued non-volume preserving (real NVP) transformations, a tractable yet expressive approach to modeling high-dimensional data.\nData space X Latent space Z Inference x ~ px z=fx Generation z ~ pz x ="}, {"section_index": "4", "section_name": "Inference", "section_text": "Figure 1: Real NVP learns an invertible, stable, mapping between a data distribution px and a laten. distribution pz (typically a Gaussian). 
Here we show a mapping that has been learned on a toy. 2-d dataset. The function f (x) maps samples x from the data distribution in the upper left intc approximate samples z from the latent distribution, in the upper right. This corresponds to exac. inference of the latent state given the data. The inverse function, f-1 (z), maps samples z from the. latent distribution in the lower right into approximate samples x from the data distribution in the. lower left. This corresponds to exact generation of samples from the model. The transformation ol. grid lines in I' and Z space is additionally illustrated for both f (x) and f-1 (z)..\nremains undetermined, often resulting in generation of highly correlated samples. Furthermore, these approximations can often hinder their performance [7].\nSuch approximations can be avoided altogether by abstaining from using latent variables. Auto- regressive models [18] 6 3720] can implement this strategy while typically retaining a great deal of flexibility. This class of algorithms tractably models the joint distribution by decomposing it into a product of conditionals using the probability chain rule according to a fixed ordering over dimensions simplifying log-likelihood evaluation and sampling. Recent work in this line of research has taken advantage of recent advances in recurrent networks [51], in particular long-short term memory [26] and residual networks [25] [24] in order to learn state-of-the-art generative image models [61]46] and language models [32]. The ordering of the dimensions, although often arbitrary, can be critical to the training of the model [66]. The sequential nature of this model limits its computational efficiency. For example, its sampling procedure is sequential and non-parallelizable, which can become cumbersome in applications like speech and music synthesis, or real-time rendering. Additionally, there is no natural latent representation associated with autoregressive models, and they have not yet been shown to be useful for semi-supervised learning\nDirected graphical models are instead defined in terms of an ancestral sampling procedure, which is appealing both for its conceptual and computational simplicity. They lack, however, the conditional independence structure of undirected models, making exact and approximate posterior inference on latent variables cumbersome [56]. Recent advances in stochastic variational inference [27] and amortized inference [13] 43] [35] 49], allowed efficient approximate inference and learning of deep directed graphical models by maximizing a variational lower bound on the log-likelihood [45] In particular, the variational autoencoder algorithm [35] 49] simultaneously learns a generative network, that maps gaussian latent variables z to samples x, and a matched approximate inference network that maps samples x to a semantically meaningful latent representation z, by exploiting the reparametrization trick [68]. Its success in leveraging recent advances in backpropagation [51][39] in deep neural networks resulted in its adoption for several applications ranging from speech synthesis [12] to language modeling [8]. Still, the approximation in the inference process limits its ability to learn high dimensional deep representations, motivating recent work in improving approximate inference [42]48]55][63][10]59][34].\nGenerative Adversarial Networks (GANs) [21] on the other hand can train any differentiable gen. erative network by avoiding the maximum likelihood principle altogether. 
Instead, the generative network is associated with a discriminator network whose task is to distinguish between samples and real data. Rather than using an intractable log-likelihood, this discriminator network provides the training signal in an adversarial fashion. Successfully trained GAN models [21, 15, 47] can consistently generate sharp and realistically looking samples [38]. However, metrics that measure the diversity in the generated samples are currently intractable [62, 22, 30]. Additionally, instability in their training process [47] requires careful hyperparameter tuning to avoid diverging behavior.

This formula has been discussed in several papers including the maximum likelihood formulation of independent components analysis (ICA) [4, 28], gaussianization [14, 11] and deep density models [5, 50, 17, 3]. As the existence proof of nonlinear ICA solutions [29] suggests, auto-regressive models can be seen as a tractable instance of maximum likelihood nonlinear ICA, where the residual corresponds to the independent components. However, naive application of the change of variable formula produces models which are computationally expensive and poorly conditioned, and so large scale models of this type have not entered general use.

"}, {"section_index": "5", "section_name": "3 Model definition", "section_text": "In this paper, we will tackle the problem of learning highly nonlinear models in high-dimensional continuous spaces through maximum likelihood. In order to optimize the log-likelihood, we introduce a more flexible class of architectures that enables the computation of log-likelihood on continuous data using the change of variable formula. Building on our previous work in [17], we define a powerful class of bijective functions which enable exact and tractable density evaluation and exact and tractable inference. Moreover, the resulting cost function does not rely on a fixed form reconstruction cost such as square error [38, 47], and generates sharper samples as a result. Also, this flexibility helps us leverage recent advances in batch normalization [31] and residual networks [24, 25] to define a very deep multi-scale architecture with multiple levels of abstraction.

"}, {"section_index": "6", "section_name": "3.1 Change of variable formula", "section_text": "Given an observed data variable x ∈ X, a simple prior probability distribution p_Z on a latent variable z ∈ Z, and a bijection f : X → Z (with g = f⁻¹), the change of variable formula defines a model distribution on X by

\[ p_X(x) = p_Z\big(f(x)\big)\,\left|\det\frac{\partial f(x)}{\partial x^T}\right| \tag{1} \]

\[ \log\big(p_X(x)\big) = \log\Big(p_Z\big(f(x)\big)\Big) + \log\left(\left|\det\frac{\partial f(x)}{\partial x^T}\right|\right) \tag{2} \]

where ∂f(x)/∂x^T is the Jacobian of f at x.

Exact samples from the resulting distribution can be generated by using the inverse transform sampling rule [16]. A sample z ∼ p_Z is drawn in the latent space, and its inverse image x = f⁻¹(z) = g(z) generates a sample in the original space. Computing the density on a point x is accomplished by computing the density of its image f(x) and multiplying by the associated Jacobian determinant.

Training such a generative network g that maps latent variable z ∼ p_Z to a sample x ∼ p_X does not in theory require a discriminator network as in GANs, or approximate inference as in variational autoencoders.
Indeed, if g is bijective, it can be trained through maximum likelihood using the change of variable formula:

\[ p_X(x) = p_Z(z)\,\left|\det\frac{\partial g(z)}{\partial z^T}\right|^{-1} \]

"}, {"section_index": "7", "section_name": "3.2 Coupling layers", "section_text": "Computing the Jacobian of functions with high-dimensional domain and codomain, and computing the determinants of large matrices, are in general computationally very expensive. This, combined with the restriction to bijective functions, makes Equation (2) appear impractical for modeling arbitrary distributions.

As shown however in [17], by careful design of the function f, a bijective model can be learned which is both tractable and extremely flexible. As computing the Jacobian determinant of the transformation is crucial to effectively train using this principle, this work exploits the simple observation that the determinant of a triangular matrix can be efficiently computed as the product of its diagonal terms.

We will build a flexible and tractable bijective function by stacking a sequence of simple bijections. In each simple bijection, part of the input vector is updated using a function which is simple to invert, but which depends on the remainder of the input vector in a complex way. We refer to each of these simple bijections as an affine coupling layer. Given a D dimensional input x and d < D, the output y of an affine coupling layer follows the equations

\[ y_{1:d} = x_{1:d} \]
\[ y_{d+1:D} = x_{d+1:D} \odot \exp\big(s(x_{1:d})\big) + t(x_{1:d}) \]

where s and t stand for scale and translation, and are functions from R^d ↦ R^{D−d}, and ⊙ is the Hadamard product or element-wise product (see Figure 2(a)).

"}, {"section_index": "8", "section_name": "3.3 Properties", "section_text": "The Jacobian of this transformation is

\[ \frac{\partial y}{\partial x^T} = \begin{bmatrix} \mathbb{I}_d & 0 \\ \frac{\partial y_{d+1:D}}{\partial x_{1:d}^T} & \operatorname{diag}\big(\exp[s(x_{1:d})]\big) \end{bmatrix} \]

Since this Jacobian is triangular, we can efficiently compute its determinant as \(\exp\big[\sum_j s(x_{1:d})_j\big]\). Since computing the Jacobian determinant of the coupling layer operation does not involve computing the Jacobian of s or t, those functions can be arbitrarily complex.

[Figure 2 panels: (a) forward propagation, (b) inverse propagation.]

Figure 2: Computational graphs for forward and inverse propagation. A coupling layer applies a simple invertible transformation consisting of scaling followed by addition of a constant offset to one part x2 of the input vector, conditioned on the remaining part of the input vector x1. Because of its simple nature, this transformation is both easily invertible and possesses a tractable determinant. However, the conditional nature of this transformation, captured by the functions s and t, significantly increases the flexibility of this otherwise weak function. The forward and inverse propagation operations have identical computational cost.

[Figure 3 panels: a 4 × 4 × 1 checkerboard-masked grid (left) and a 2 × 2 × 4 channel-masked tensor (right).]

Figure 3: Masking schemes for affine coupling layers. On the left, a spatial checkerboard pattern mask. On the right, a channel-wise masking. The squeezing operation reduces the 4 × 4 × 1 tensor (on the left) into a 2 × 2 × 4 tensor (on the right). Before the squeezing operation, a checkerboard pattern is used for coupling layers while a channel-wise masking pattern is used afterward.

Another interesting property of these coupling layers in the context of defining probabilistic models is their invertibility. Indeed, computing the inverse is no more complex than the forward propagation (see Figure 2(b)),

\[ x_{1:d} = y_{1:d} \]
\[ x_{d+1:D} = \big(y_{d+1:D} - t(y_{1:d})\big) \odot \exp\big(-s(y_{1:d})\big) \]

meaning that sampling is as efficient as inference for this model.
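A minimal NumPy sketch of one affine coupling layer may help make the forward map, its inverse and the log-determinant concrete. The split point d and the toy tanh networks for s and t are our own illustrative choices; the paper uses deep convolutional residual networks.

    # One affine coupling layer: forward, inverse, and log|det Jacobian|.
    import numpy as np

    rng = np.random.default_rng(0)
    D, d, H = 8, 4, 16                        # input dim, split point, hidden units
    Ws1, Ws2 = rng.normal(0, 0.1, (H, d)), rng.normal(0, 0.1, (D - d, H))
    Wt1, Wt2 = rng.normal(0, 0.1, (H, d)), rng.normal(0, 0.1, (D - d, H))

    def s(x1):  # scale function R^d -> R^(D-d)
        return Ws2 @ np.tanh(Ws1 @ x1)

    def t(x1):  # translation function R^d -> R^(D-d)
        return Wt2 @ np.tanh(Wt1 @ x1)

    def forward(x):
        # y_{1:d} = x_{1:d};  y_{d+1:D} = x_{d+1:D} * exp(s(x_{1:d})) + t(x_{1:d})
        x1, x2 = x[:d], x[d:]
        y = np.concatenate([x1, x2 * np.exp(s(x1)) + t(x1)])
        log_det = np.sum(s(x1))               # log|det| = sum_j s(x_{1:d})_j
        return y, log_det

    def inverse(y):
        # Inverting needs neither s^{-1} nor t^{-1}, only evaluations of s and t.
        y1, y2 = y[:d], y[d:]
        return np.concatenate([y1, (y2 - t(y1)) * np.exp(-s(y1))])

    x = rng.normal(size=D)
    y, log_det = forward(x)
    assert np.allclose(inverse(y), x)         # exact invertibility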
Note again that computing the inverse of the coupling layer does not require computing the inverse of s or t, so these functions can be arbitrarily complex and difficult to invert.

"}, {"section_index": "9", "section_name": "3.4 Masked convolution", "section_text": "Partitioning can be implemented using a binary mask b, and using the functional form for y

\[ y = b \odot x + (1-b) \odot \Big( x \odot \exp\big(s(b \odot x)\big) + t(b \odot x) \Big) \]

We use two partitionings that exploit the local correlation structure of images: spatial checkerboard patterns, and channel-wise masking (see Figure 3). The spatial checkerboard pattern mask has value 1 where the sum of spatial coordinates is odd, and 0 otherwise. The channel-wise mask b is 1 for the first half of the channel dimensions and 0 for the second half. For the models presented here, both s(·) and t(·) are rectified convolutional networks.

"}, {"section_index": "10", "section_name": "3.5 Combining coupling layers", "section_text": "Although coupling layers can be powerful, their forward transformation leaves some components unchanged. This difficulty can be overcome by composing coupling layers in an alternating pattern, such that the components that are left unchanged in one coupling layer are updated in the next (see Figure 4(a)).

The Jacobian determinant of the resulting function remains tractable, relying on the fact that

\[ \frac{\partial (f_b \circ f_a)}{\partial x_a^T}(x_a) = \frac{\partial f_a}{\partial x_a^T}(x_a) \cdot \frac{\partial f_b}{\partial x_b^T}\big(x_b = f_a(x_a)\big) \]

Similarly, its inverse can be computed easily as

\[ (f_b \circ f_a)^{-1} = f_a^{-1} \circ f_b^{-1} \]

[Figure 4(a) diagram: coupling layers composed in an alternating pattern.]

(a) In this alternating pattern, units which remain identical in one transformation are modified in the next.

Figure 4: Composition schemes for affine coupling layers.

"}, {"section_index": "11", "section_name": "3.6 Multi-scale architecture", "section_text": "At each scale, we combine several operations into a sequence: we first apply three coupling layers with alternating checkerboard masks, then perform a squeezing operation, and finally apply three more coupling layers with alternating channel-wise masking. The channel-wise masking is chosen so that the resulting partitioning is not redundant with the previous checkerboard masking (see Figure 3). For the final scale, we only apply four coupling layers with alternating checkerboard masks.

Propagating a D dimensional vector through all the coupling layers would be cumbersome, in terms of computational and memory cost, and in terms of the number of parameters that would need to be trained. For this reason we follow the design choice of [57] and factor out half of the dimensions at regular intervals (see Equation (14)). We can define this operation recursively (see Figure 4(b)):

\[ h^{(0)} = x \]
\[ \big(z^{(i+1)}, h^{(i+1)}\big) = f^{(i+1)}\big(h^{(i)}\big) \tag{14} \]
\[ z^{(L)} = f^{(L)}\big(h^{(L-1)}\big) \tag{15} \]
\[ z = \big(z^{(1)}, \ldots, z^{(L)}\big) \tag{16} \]

In our experiments, we use this operation for i < L. The sequence of coupling-squeezing-coupling operations described above is performed per layer when computing f^{(i)} (Equation (14)). At each layer, as the spatial resolution is reduced, the number of hidden layer features in s and t is doubled. All variables which have been factored out at different scales are concatenated to obtain the final transformed output (Equation (16)).

As a consequence, the model must Gaussianize units which are factored out at a finer scale (in an earlier layer) before those which are factored out at a coarser scale (in a later layer); a sketch of the squeezing operation appears below.
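To make the squeezing operation concrete, here is a minimal NumPy sketch. The exact grouping of the 2 × 2 blocks into channels is our own choice; the official implementation may order them differently.

    # Squeeze: an s x s x c tensor becomes (s/2) x (s/2) x 4c,
    # trading spatial size for number of channels.
    import numpy as np

    def squeeze(x):
        # x has shape (s, s, c) with s even; each 2x2xc subsquare -> 1x1x4c
        s, _, c = x.shape
        x = x.reshape(s // 2, 2, s // 2, 2, c)     # split each spatial axis into blocks of 2
        x = x.transpose(0, 2, 1, 3, 4)             # bring the 2x2 block dims together
        return x.reshape(s // 2, s // 2, 4 * c)

    def unsqueeze(x):
        s, _, c4 = x.shape
        x = x.reshape(s, s, 2, 2, c4 // 4).transpose(0, 2, 1, 3, 4)
        return x.reshape(2 * s, 2 * s, c4 // 4)

    x = np.arange(4 * 4 * 1, dtype=float).reshape(4, 4, 1)
    assert squeeze(x).shape == (2, 2, 4)           # matches the 4x4x1 -> 2x2x4 example
    assert np.allclose(unsqueeze(squeeze(x)), x)   # squeezing is invertible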
This ordering results in the definition of intermediary levels of representation [53, 49] corresponding to more local, fine-grained features, as shown in Appendix D.

Moreover, Gaussianizing and factoring out units in earlier layers has the practical benefit of distributing the loss function throughout the network, following a philosophy similar to guiding intermediate layers using intermediate classifiers [40]. It also reduces significantly the amount of computation and memory used by the model, allowing us to train larger models.

(b) Factoring out variables. At each step, half the variables are directly modeled as Gaussians, while the other half undergo further transformation.

We implement a multi-scale architecture using a squeezing operation: for each channel, it divides the image into subsquares of shape 2 × 2 × c, then reshapes them into subsquares of shape 1 × 1 × 4c. The squeezing operation transforms an s × s × c tensor into an (s/2) × (s/2) × 4c tensor (see Figure 3), effectively trading spatial size for number of channels.

"}, {"section_index": "12", "section_name": "3.7 Batch normalization", "section_text": "To further improve the propagation of training signal, we use deep residual networks [24, 25] with batch normalization [31] and weight normalization [2, 54] in s and t. As described in Appendix E, we introduce and use a novel variant of batch normalization which is based on a running average over recent minibatches, and is thus more robust when training with very small minibatches.

We also apply batch normalization to the whole coupling layer output. The effects of batch normalization are easily included in the Jacobian computation, since it acts as a linear rescaling on each dimension. That is, given the estimated batch statistics μ̃ and σ̃², the rescaling function

\[ x \mapsto \frac{x - \tilde{\mu}}{\sqrt{\tilde{\sigma}^2 + \epsilon}} \]

has a Jacobian determinant

\[ \Big( \prod_i \big(\tilde{\sigma}_i^2 + \epsilon\big) \Big)^{-\frac{1}{2}} \]

This form of batch normalization can be seen as similar to reward normalization in deep reinforcement learning [44, 65].

We found that the use of this technique not only allowed training with a deeper stack of coupling layers, but also alleviated the instability problem that practitioners often encounter when training conditional distributions with a scale parameter through a gradient-based approach.

"}, {"section_index": "13", "section_name": "4.1 Procedure", "section_text": "The algorithm described in Equation (2) shows how to learn distributions on unbounded space. In general, the data of interest have bounded magnitude. For example, the pixel values of an image typically lie in [0, 256]^D after application of the recommended jittering procedure [64, 62]. In order to reduce the impact of boundary effects, we instead model the density of logit(α + (1 − α) ⊙ x/256), where α is picked here as 0.05. We take this transformation into account when computing log-likelihood and bits per dimension. We also augment the CIFAR-10, CelebA and LSUN datasets during training to also include horizontal flips of the training examples.

We train our model on four natural image datasets: CIFAR-10 [36], Imagenet [52], Large-scale Scene Understanding (LSUN) [70], CelebFaces Attributes (CelebA) [41]. More specifically, we train on the downsampled 32 × 32 and 64 × 64 versions of Imagenet [46]. For the LSUN dataset, we train on the bedroom, tower and church outdoor categories. The procedure for LSUN is the same as in [47]: we downsample the image so that the smallest side is 96 pixels and take random crops of 64 × 64.
For CelebA, we use the same procedure as in [38]: we take an approximately central crop of 148 × 148, then resize it to 64 × 64.

We use the multi-scale architecture described in Section 3.6 and use deep convolutional residual networks in the coupling layers, with rectifier nonlinearity and skip-connections as suggested by [46]. To compute the scaling functions s, we use a hyperbolic tangent function multiplied by a learned scale, whereas the translation function t has an affine output. Our multi-scale architecture is repeated recursively until the input of the last recursion is a 4 × 4 × c tensor. For datasets of images of size 32 × 32, we use 4 residual blocks with 32 hidden feature maps for the first coupling layers with checkerboard masking. Only 2 residual blocks are used for images of size 64 × 64. We use a batch size of 64. For CIFAR-10, we use 8 residual blocks, 64 feature maps, and downscale only once. We optimize with ADAM [33] with default hyperparameters and use an L2 regularization on the weight scale parameters with coefficient 5 · 10⁻⁵.

We set the prior p_Z to be an isotropic unit norm Gaussian. However, any distribution could be used for p_Z, including distributions that are also learned during training, such as from an auto-regressive model, or (with slight modifications to the training objective) a variational autoencoder.

Table 1: Bits/dim results for CIFAR-10, Imagenet, LSUN datasets and CelebA. Test results for CIFAR-10 and validation results for Imagenet, LSUN and CelebA (with training results in parentheses for reference).

Figure 5: On the left column, examples from the dataset. On the right column, samples from the model trained on the dataset. The datasets shown in this figure are, in order: CIFAR-10, Imagenet (32 × 32), Imagenet (64 × 64), CelebA, LSUN (bedroom).

"}, {"section_index": "14", "section_name": "4.2 Results", "section_text": "We show in Table 1 that the number of bits per dimension, while not improving over the Pixel RNN [46] baseline, is competitive with other generative methods. As we notice that our performance increases with the number of parameters, larger models are likely to further improve performance. For CelebA and LSUN, the bits per dimension for the validation set was decreasing throughout training, so little overfitting is expected.

We show in Figure 5 samples generated from the model, with training examples from the dataset for comparison. As mentioned in [62, 22], maximum likelihood is a principle that values diversity over sample quality in a limited capacity setting. As a result, our model sometimes outputs highly improbable samples, as we can notice especially on CelebA. As opposed to variational autoencoders, the samples generated from our model look not only globally coherent but also sharp. Our hypothesis is that, as opposed to these models, real NVP does not rely on a fixed form reconstruction cost like an L2 norm, which tends to reward capturing low frequency components more heavily than high frequency components. Unlike autoregressive models, sampling from our model is done very efficiently, as it is parallelized over input dimensions.

Figure 6: Manifold generated from four examples in the dataset. Clockwise from top left: CelebA, Imagenet (64 × 64), LSUN (tower), LSUN (bedroom).
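As a side note on the preprocessing of Section 4.1, here is a small sketch of the jitter-and-logit transform as we read the (partially garbled) formula, logit(α + (1 − α) ⊙ x/256) with α = 0.05, together with the log-Jacobian correction needed to report pixel-space likelihoods. The uniform jitter and the exact formula should be taken as our reconstruction.

    # Dequantise pixels, squeeze into (alpha, 1), take the logit, and return
    # the per-sample log|Jacobian| of the transform.
    import numpy as np

    ALPHA = 0.05

    def preprocess(pixels, rng):
        x = pixels + rng.uniform(size=pixels.shape)     # dequantisation jitter
        p = ALPHA + (1 - ALPHA) * x / 256.0             # squeeze into (ALPHA, 1)
        y = np.log(p) - np.log1p(-p)                    # logit
        # per-dimension log|dy/dx|, used to convert model densities back to pixel space
        log_det = np.log(1 - ALPHA) - np.log(256.0) - np.log(p) - np.log1p(-p)
        return y, log_det.sum()

    rng = np.random.default_rng(0)
    y, log_det = preprocess(np.array([0.0, 128.0, 255.0]), rng)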
On Imagenet and LSUN, our model seems to have captured well the notion of background/foreground and lighting interactions, such as luminosity and consistent light source direction for reflectance and shadows.

We also illustrate the smooth, semantically consistent meaning of our latent variables. In the latent space, we define a manifold based on four validation examples z⁽¹⁾, z⁽²⁾, z⁽³⁾, z⁽⁴⁾, parametrized by two parameters φ and φ′ by

\[ z = \cos(\phi)\big(\cos(\phi')\,z^{(1)} + \sin(\phi')\,z^{(2)}\big) + \sin(\phi)\big(\cos(\phi')\,z^{(3)} + \sin(\phi')\,z^{(4)}\big) \tag{19} \]

We project the resulting manifold back into the data space by computing g(z). Results are shown in Figure 6. We observe that the model seems to have organized the latent space with a notion of meaning that goes well beyond pixel space interpolation. More manifold visualizations are shown in the Appendix. To further test whether the latent space has a consistent semantic interpretation, we trained a class-conditional model on CelebA, and found that the learned representation had a consistent semantic meaning across class labels (see Appendix F).

"}, {"section_index": "15", "section_name": "Discussion and conclusion", "section_text": "In this paper, we have defined a class of invertible functions with tractable Jacobian determinant, enabling exact and tractable log-likelihood evaluation, inference, and sampling. We have shown that this class of generative model achieves competitive performances, both in terms of sample quality and log-likelihood. Many avenues exist to further improve the functional form of the transformations, for instance by exploiting the latest advances in dilated convolutions [69] and residual networks architectures [60].

This paper presented a technique bridging the gap between auto-regressive models, variational autoencoders, and generative adversarial networks. Like auto-regressive models, it allows tractable and exact log-likelihood evaluation for training. It allows however a much more flexible functional form, similar to that in the generative model of variational autoencoders. This allows for fast and exact sampling from the model distribution. Like GANs, and unlike variational autoencoders, our technique does not require the use of a fixed form reconstruction cost, and instead defines a cost in terms of higher level features, generating sharper images. Finally, unlike both variational
We thank Aaron van den Oord, Yann Dauphin Kyle Kastner, Chelsea Finn, Maithra Raghu, David Warde-Farley, Daniel Jiwoong Im and Oriol. Vinyals for fruitful discussions. Finally, we thank Ben Poole, Rafal Jozefowicz and George Dahl for. their input on a draft of the paper.."}, {"section_index": "17", "section_name": "References", "section_text": "The definition of powerful and trainable invertible functions can also benefit domains other than generative unsupervised learning. For example, in reinforcement learning, these invertible functions can help extend the set of functions for which an argmax operation is tractable for continuous Q learning [23] or find representation where local linear Gaussian approximations are more appropriate [67].\nCorrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. [2] Vijay Badrinarayanan, Bamdev Mishra, and Roberto Cipolla. Understanding symmetries in deep networks arXiv preprint arXiv:1511.01029, 2015. [3] Johannes Balle, Valero Laparra, and Eero P Simoncelli. Density modeling of images using a generalized normalization transformation. arXiv preprint arXiv:1511.06281, 2015. [4] Anthony J Bell and Terrence J Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129-1159, 1995. [5] Yoshua Bengio. Artificial neural networks and their application to sequence recognition. 1991. [6] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pages 400-406, 1999. [7] Mathias Berglund and Tapani Raiko. Stochastic gradient estimate variance in contrastive divergence and persistent contrastive divergence. arXiv preprint arXiv:1312.6002, 2013. [8] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015. [9] Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super-resolution with deep convolutional sufficient statistics. arXiv preprint arXiv:1511.05666, 2015. [10] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015. [11] Scott Shaobing Chen and Ramesh A Gopinath. Gaussianization. In Advances in Neural Information Processing Systems, 2000. [12] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pages 2962-2970, 2015. [13] Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine. Neural computation, 7(5):889-904. 1995. [14] Gustavo Deco and Wilfried Brauer. Higher order statistical decorrelation without information loss. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 247-254. MIT Press, 1995. [15] Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Svstems 28.\nQuebec, Canada, pages 1486-1494, 2015. [16] Luc Devroye. Sample-based non-uniform random variate generation. In Proceedings of the 18th conference on Winter simulation, pages 260-265. ACM, 1986. [17] Laurent Dinh, David Krueger, and Yoshua Bengio. 
Nice: non-linear independent components estimation.. arXiv preprint arXiv:1410.8516, 2014. 18] Brendan J Frey. Graphical models for machine learning and digital communication. MIT press, 1998.. 19] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutional neural. networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural. Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 262-270, 2015. [20] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: masked autoencoder for. distribution estimation. CoRR, abs/1502.03509, 2015. [21] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,. Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information. Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December. 8-13 2014, Montreal, Quebec, Canada, pages 2672-2680, 2014. [22] Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards. conceptual compression. arXiv preprint arXiv:1604.08772, 2016. 23] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with. model-based acceleration. arXiv preprint arXiv:1603.00748, 2016. [24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016. [26] Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. [27] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The. Journal of Machine Learning Research, 14(1):1303-1347, 2013. [28] Aapo Hyvarinen, Juha Karhunen, and Erkki Oja. Independent component analysis, volume 46. John Wiley. & Sons, 2004. [29] Aapo Hyvarinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429-439, 1999. [30] Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with. recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016. [31] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing. internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. [32] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of. language modeling. CoRR, abs/1602.02410, 2016. [33] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint. arXiv:1412.6980, 2014. [34] Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse. autoregressive flow. arXiv preprint arXiv:1606.04934, 2016. [35] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114,. 2013. [36] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.. [37] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATs, 2011. [38] Anders Boesen Lindbo Larsen, Soren Kaae Sonderby, and Ole Winther. Autoencoding beyond pixels using. a learned similarity metric. CoRR, abs/1512.09300, 2015. [39] Yann A LeCun, Leon Bottou, Genevieve B Orr, and Klaus-Robert Muller. Efficient backprop. In Neural. 
networks: Tricks of the trade, pages 9-48. Springer, 2012. [40] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets.. arXiv preprint arXiv:1409.5185, 2014. [41] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In. Proceedings of International Conference on Computer Vision (ICCV), December 2015. [42] Lars Maalge, Casper Kaae Sonderby, Sgren Kaae Sonderby, and Ole Winther. Auxiliary deep generative. models. arXiv preprint arXiv:1602.05473, 2016. [43] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv. preprint arXiv:1402.0030, 2014. [44] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare,. Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015. [45] Radford M Neal and Geoffrey E Hinton. A view of the em algorithm that justifies incremental, sparse, and. other variants. In Learning in graphical models, pages 355-368. Springer, 1998.\n[46] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv. preprint arXiv:1601.06759, 2016. [47] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015. [48] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv. preprint arXiv:1505.05770, 2015. [49] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approxi-. mate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. 50] Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models.. arXiv preprint arXiv:1302.5125, 2013. [51] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back- propagating errors. Cognitive modeling, 5(3):1, 1988. [52] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge.. International Journal of Computer Vision, 115(3):211-252, 2015. 53] Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In International conference on artificial intelligence and statistics, pages 448-455, 2009. [54] Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate. training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016. 55] Tim Salimans, Diederik P Kingma, and Max Welling. Markov chain monte carlo and variational inference:. Bridging the gap. arXiv preprint arXiv:1410.6460, 2014. [56] Lawrence K Saul, Tommi Jaakkola, and Michael I Jordan. Mean field theory for sigmoid belief networks.. Journal of artificial intelligence research, 4(1):61-76, 1996. [57] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recogni-. tion. arXiv preprint arXiv:1409.1556, 2014. [58] Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, DTIC Document, 1986. [59] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised. learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on. 
Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2256-2265, 2015. [60] Sasha Targ, Diogo Almeida, and Kevin Lyman. Resnet in resnet: Generalizing residual architectures. CoRR, abs/1603.08029, 2016. [61] Lucas Theis and Matthias Bethge. Generative image modeling using spatial lstms. In Advances in Neural. Information Processing Systems, pages 1918-1926, 2015. [62] Lucas Theis, Aaron Van Den Oord, and Matthias Bethge. A note on the evaluation of generative models.. CoRR, abs/1511.01844, 2015. [63] Dustin Tran, Rajesh Ranganath, and David M Blei. Variational gaussian process. arXiv preprint arXiv:1511.06499, 2015. [64] Benigno Uria, Iain Murray, and Hugo Larochelle. Rnade: The real-valued neural autoregressive density-. estimator. In Advances in Neural Information Processing Systems, pages 2175-2183, 2013. [65] Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning functions across many orders. of magnitudes. arXiv preprint arXiv:1602.07714, 2016. [66] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015. [67] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing. Systems, pages 2728-2736, 2015. [68] Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement. learning. Machine learning, 8(3-4):229-256, 1992. [69] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015. [70] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.. [71] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. arXiv preprint\nFigure 7: Samples from a model trained on Imagenet (64 64)\nFigure 8: Samples from a model trained on CelebA\nFigure 9: Samples from a model trained on LSUN (bedroom category)\nFigure 10: Samples from a model trained on LSUN (church outdoor category)\nFigure 11: Samples from a model trained on LSUN (tower category)\nFigure 12: Manifold from a model trained on Imagenet (64 64). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation\nFigure 13: Manifold from a model trained on CelebA. Images with red borders are taken from the training set, and define the manifold. The manifold was computed as described in Equation[19] where the x-axis corresponds to , and the y-axis to ', and where , ' E {0, 7 TT\nFigure 14: Manifold from a model trained on LSUN (bedroom category). Images with red bor ders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation|19 where the x-axis corresponds to , and the y-axis to ', and where Q, E {0, T 7TT\nFigure 15: Manifold from a model trained on LSUN (church outdoor category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation|1 where the x-axis corresponds to , and the y-axis to ', and where $, $' E {0, ,::. 7 TT\nFigure 16: Manifold from a model trained on LSUN (tower category). Images with red bor. ders are taken from the validation set, and define the manifold. 
The manifold was computed as described in Equation19 9 where the x-axis corresponds to , and the y-axis to ', and where. P, E {0, 3 (TT"}, {"section_index": "18", "section_name": "C Extrapolation", "section_text": "Inspired by the texture generation work by [19 61] and extrapolation test with DCGAN [47], we also evaluate the statistics captured by our model by generating images twice or ten times as large as present in the dataset. As we can observe in the following figures, our model seems to successfully create a \"texture'' representation of the dataset while maintaining a spatial smoothness through the image. Our convolutional architecture is only aware of the position of considered pixel through edge effects in convolutions, therefore our model is similar tc a stationary process. This also explains why these samples are more consistent in LSUN, where the training data was obtained using random crops.\n(a) x2 (b) x10 enerate samples a factor bigger than the training set image size on I\nFigure 17: We generate samples a factor bigger than the training set image size on Imagenet (64 64\n(a) x2 (b) x10\nFigure 18: We generate samples a factor bigger than the training set image size on CelebA\n(a) x2 (b) x10\n(a) x2 (b) x10\nFigure 19: We generate samples a factor bigger than the training set image size on LSUN (bedroom category).\n(a) x2 (b) x10 generate samples a factor bigger than the training set image size ry).\nFigure 20: We generate samples a factor bigger than the training set image size on LSUN (church outdoor category).\n(a) x2 (b) x10\n(a) x2 (b) x10\nFigure 21: We generate samples a factor bigger than the training set image size on LSUN (tower category)."}, {"section_index": "19", "section_name": "D Latent variables semantic", "section_text": "As in [22], we further try to grasp the semantic of our learned layers latent variables by doing ablation tests. We infer the latent variables and resample the lowest levels of latent variables from a standard gaussian, increasing. the highest level affected by this resampling. As we can see in the following figures, the semantic of ou. latent space seems to be more on a graphic level rather than higher level concept. Although the heavy use o convolution improves learning by exploiting image prior knowledge, it is also likely to be responsible for this. limitation.\nFigure 23: Conceptual compression from a model trained on CelebA. The leftmost column represent the original image, the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%. 12.5% and 6.25% of the latent variables are kept.\nFigure 22: Conceptual compression from a model trained on Imagenet (64 64). The leftmost column represent the original image, the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%. 50%, 25%, 12.5% and 6.25% of the latent variables are kept.\nFigure 25: Conceptual compression from a model trained on LSUN (church outdoor category). The leftmost column represent the original image, the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.\nE\nFigure 24: Conceptual compression from a model trained on LSUN (bedroom category). The leftmost. 
column represents the original image; the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.

Figure 26: Conceptual compression from a model trained on LSUN (tower category). The leftmost column represents the original image; the subsequent columns were obtained by storing higher level latent variables and resampling the others, storing less and less as we go right. From left to right: 100%, 50%, 25%, 12.5% and 6.25% of the latent variables are kept.

"}, {"section_index": "20", "section_name": "E Batch normalization", "section_text": "We further experimented with batch normalization by using a weighted average of a moving average of the layer statistics μ̃_t, σ̃²_t and the current batch statistics μ̂_t, σ̂²_t:

\[ \tilde{\mu}_{t+1} = \rho\,\tilde{\mu}_t + (1-\rho)\,\hat{\mu}_t \]
\[ \tilde{\sigma}^2_{t+1} = \rho\,\tilde{\sigma}^2_t + (1-\rho)\,\hat{\sigma}^2_t \]

where ρ is the momentum. When using μ̃_{t+1}, σ̃²_{t+1}, we only propagate gradient through the current batch statistics μ̂_t, σ̂²_t. We observe that using this lag helps the model train with very small minibatches.

We used batch normalization with a moving average for our results on CIFAR-10.

"}, {"section_index": "21", "section_name": "F Attribute change", "section_text": "Additionally, we exploit the attribute information y in CelebA to build a conditional model, i.e. the invertible function f from image to latent variable uses the labels in y to define its parameters. In order to observe the information stored in the latent variables, we choose to encode a batch of images x with their original attributes y, and decode them using a new set of attributes y′, built by shuffling the original attributes inside the batch. We obtain the new images x′ = g(f(x; y); y′).

We observe that, although the faces are changed so as to respect the new attributes, several properties remain unchanged, like position and background.

Figure 27: Examples x from the CelebA dataset.

Figure 28: From a model trained on pairs of images and attributes from the CelebA dataset, we encode a batch of images with their original attributes before decoding them with a new set of attributes. We notice that the new images often share similar characteristics with those in Fig. 27, including position and background."}]
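To make the moving-average batch normalization of Appendix E above concrete, a minimal NumPy sketch follows. The dimension, momentum ρ = 0.9 and ε are illustrative choices; note that in the real model gradients are propagated only through the current-batch statistics, which this plain NumPy version does not capture.

    # Batch statistics lagged behind a running average, as in Appendix E.
    import numpy as np

    class MovingAvgBatchNorm:
        def __init__(self, dim, rho=0.9, eps=1e-4):
            self.mu, self.var = np.zeros(dim), np.ones(dim)
            self.rho, self.eps = rho, eps

        def __call__(self, x_batch):
            mu_hat = x_batch.mean(axis=0)              # current batch statistics
            var_hat = x_batch.var(axis=0)
            # mu_{t+1} = rho*mu_t + (1-rho)*mu_hat, and likewise for the variance
            self.mu = self.rho * self.mu + (1 - self.rho) * mu_hat
            self.var = self.rho * self.var + (1 - self.rho) * var_hat
            return (x_batch - self.mu) / np.sqrt(self.var + self.eps)

    bn = MovingAvgBatchNorm(dim=3)
    out = bn(np.random.default_rng(0).normal(size=(8, 3)))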
SygvTcYee | [{"section_index": "0", "section_name": "ABSTRACT", "section_text": "Many powerful machine learning models are based on the composition of mul-. tiple processing layers, such as deep nets, which gives rise to nonconvex objec-. tive functions. A general, recent approach to optimise such \"nested\"' functions is. the method of auxiliary coordinates (MAC). MAC introduces an auxiliary coordi-. nate for each data point in order to decouple the nested model into independent. submodels. This decomposes the optimisation into steps that alternate between. training single layers and updating the coordinates. It has the advantage that it. reuses existing single-layer algorithms, introduces parallelism, and does not need. to use chain-rule gradients, so it works with nondifferentiable layers. We describe. ParMAC, a distributed-computation model for MAC. This trains on a dataset dis-. tributed across machines while limiting the amount of communication so it does not obliterate the benefit of parallelism. ParMAC works on a cluster of machines. with a circular topology and alternates two steps until convergence: one step trains. the submodels in parallel using stochastic updates, and the other trains the coor-. dinates in parallel. Only submodel parameters, no data or coordinates, are ever. communicated between machines. ParMAC exhibits high parallelism, low com-. munication overhead, and facilitates data shuffling, load balancing, fault tolerance. and streaming data processing. We study the convergence of ParMAC and its par-. allel speedup, and implement ParMAC using MPI to learn binary autoencoders for. fast image retrieval, achieving nearly perfect speedups in a 128-processor cluster. with a training set of 100 million high-dimensional points.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Serial computing has reached a plateau and parallel, distributed architectures are becoming widely available, from machines with a few cores to cloud computing with 1000s of machines. The combi nation of powerful nested models with large datasets is a key ingredient to solve difficult problems in machine learning, computer vision and other areas, and it underlies recent successes in deep learning (Hinton et al.. [2012: Le et al.,2012; Dean et al., 2012). Unfortunately, parallel computation is not easy, and many good serial algorithms do not parallelise well. The cost of communicating (through the memory hierarchy or a network) greatly exceeds the cost of computing, both in time and energy and will continue to do so for the foreseeable future. Thus, good parallel algorithms must minimise communication and maximise computation per machine, while creating sufficiently many subprob lems (ideally independent) to benefit from as many machines as possible. The load (in runtime) on each machine should be approximately equal. Faults become more frequent as the number of ma chines increases, particularly if they are inexpensive machines. Machines may be heterogeneous and differ in CPU and memory; this is the case with initiatives such as SETI@home, which may become an important source of distributed computation in the future. Big data applications have additiona restrictions. The size of the data means it cannot be stored on a single machine, so distributed memory architectures are necessary. Sending data between machines is prohibitive because of the size of the data and the high communication costs. 
In some applications, more data is collected than can be stored, so data must be regularly discarded. In others, such as sensor networks, limited battery life and computational power imply that data must be processed locally."}, {"section_index": "2", "section_name": "PARMAC: DISTRIBUTED OPTIMISATION OF NESTED FUNCTIONS, WITH APPLICATION TO LEARNING BINARY AUTOENCODERS", "section_text": "In this paper, we focus on machine learning models of the form y = F_{K+1}(... F_2(F_1(x)) ...), i.e., consisting of a nested mapping from the input x to the output y. Such nested models involve multiple parameterised layers of processing and include deep neural nets, cascades for object recognition in computer vision or for phoneme classification in speech processing, wrapper approaches to classification or regression, and various combinations of feature extraction/learning and preprocessing prior to some learning task. Nested and hierarchical models are ubiquitous in machine learning because they provide a way to construct complex models by the composition of simple layers. However, training nested models is difficult even in the serial case because function composition produces inherently nonconvex functions, which makes gradient-based optimisation difficult and slow, and sometimes inapplicable (e.g. with nonsmooth or discrete layers).

Our starting point is a recently proposed technique to train nested models, the method of auxiliary coordinates (MAC) (Carreira-Perpinan and Wang, 2012; 2014). This reformulates the optimisation into an iterative procedure that alternates training submodels independently with coordinating them. It introduces significant model and data parallelism, can often train the submodels using existing algorithms, and has convergence guarantees with differentiable functions to a local stationary point, while it also applies with nondifferentiable or even discrete layers. MAC has been applied to various nested models (Carreira-Perpinan and Wang, 2014; Wang and Carreira-Perpinan, 2014; Carreira-Perpinan and Raziperchikolaei, 2015; Raziperchikolaei and Carreira-Perpinan, 2016; Carreira-Perpinan and Vladymyrov, 2015). However, the original papers proposing MAC (Carreira-Perpinan and Wang, 2012; 2014) did not address how to run MAC on a distributed computing architecture, where communication between machines is far costlier than computation. This paper proposes ParMAC, a parallel, distributed framework to learn nested models using MAC, analyses its parallel speedup and convergence, implements it in MPI for the problem of learning binary autoencoders, and demonstrates its ability to train on large datasets and achieve large distributed speedups.

Related work Distributed optimisation and large-scale machine learning have been steadily gaining interest in recent years, with the development of parallel computation abstractions tailored to machine learning, such as Spark (Zaharia et al., 2010), GraphLab (Low et al., 2012), Petuum (Xing et al., 2015) or TensorFlow (Abadi et al., 2015), which have the goal of making cloud computing easily available to train machine learning models. Most work has centred on convex optimisation, particularly when the objective function has the form of empirical risk minimisation (data-fitting term plus regulariser) (Cevher et al., 2014). This includes many important models in machine learning, such as linear regression, LASSO, logistic regression or SVMs.
Such work is typically based on stochastic gradient descent (SGD) (Bottou, 2010), coordinate descent (CD) (Wright, 2016) or the alternating direction method of multipliers (ADMM) (Boyd et al., 2011). This has resulted in several variations of parallel SGD (Bertsekas, 2011; Zinkevich et al., 2010; Niu et al., 2011), parallel CD (Bradley et al., 2011; Richtarik and Takac, 2013; Liu and Wright, 2015) and parallel ADMM (Boyd et al., 2011; Ouyang et al., 2013; Zhang and Kwok, 2014).

Little work has addressed nonconvex models. Most of it has focused on deep nets (Dean et al., 2012; Le et al., 2012). Google's DistBelief (Dean et al., 2012) uses asynchronous parallel SGD (with gradients for the full model computed with backpropagation) to achieve data parallelism, and some form of model parallelism. The latter is achieved by carefully partitioning the neural net into pieces and allocating them to machines to compute gradients. This is difficult to do and requires a careful match of the neural net structure (number of layers and hidden units, connectivity, etc.) to the target hardware. Also, parallel SGD can diverge with nonconvex models, which requires heuristics to make sure we average replica models that are close in parameter space and thus associated with the same optimum. Although this has managed to train huge nets on huge datasets by using tens of thousands of CPU cores, the speedups achieved were very modest. Other work has used similar techniques but for GPUs (Coates et al., 2013; Seide et al., 2014).

ParMAC is specifically designed for nested models, which are typically nonconvex and include deep nets and many other models, some of which have nondifferentiable layers. As we describe below, ParMAC has the advantages of being simple and relatively independent of the target hardware, while achieving high speedups.

Many optimisation problems in machine learning involve mathematically "nested" functions of the form F(x; W) = F_{K+1}(... F_2(F_1(x; W_1); W_2) ...; W_{K+1}) with parameters W, such as deep nets. Such problems are traditionally optimised using methods based on gradients computed using the chain rule. However, such gradients may sometimes be inconvenient to use, or may not exist (e.g. if some of the layers are nondifferentiable, as with binary autoencoders). Also, they are hard to parallelise, because of the inherent sequentiality in the chain rule. The method of auxiliary coordinates (MAC) (Carreira-Perpinan and Wang, 2012; 2014) is designed to optimise nested models without using chain-rule gradients while introducing parallelism. The idea is to break nested functional relationships judiciously by introducing new variables (the auxiliary coordinates) as equality constraints. These are then solved by optimising a penalised function using alternating optimisation over the original parameters (which we call the W step) and over the coordinates (which we call the Z step). The result is a coordination-minimisation (CM) algorithm: the minimisation (W) step updates the parameters by splitting the nested model into independent submodels and training them using existing algorithms, and the coordination (Z) step ensures that corresponding inputs and outputs of submodels eventually match.
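As a sketch of this construction for a general K-layer nested least-squares problem (our own generic notation, not verbatim from the paper), MAC turns the nested objective into a quadratic-penalty function over the parameters W and auxiliary coordinates Z, with penalty parameter μ:

\[ \min_{W,Z}\; \sum_{n=1}^{N} \Big[ \big\| y_n - F_{K+1}(z_{K,n}; W_{K+1}) \big\|^2 + \mu \sum_{k=1}^{K} \big\| z_{k,n} - F_k(z_{k-1,n}; W_k) \big\|^2 \Big], \qquad z_{0,n} \equiv x_n \]

so that, for fixed Z, the W step decouples into K + 1 independent single-layer problems, and, for fixed W, the Z step decouples into N independent per-point problems.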
MAC algorithms have been developed for several nested models so far: deep nets (Carreira-Perpinan and Wang, 2014), low-dimensional SVMs (Wang and Carreira-Perpinan, 2014), binary autoencoders (Carreira-Perpinan and Raziperchikolaei, 2015), affinity-based loss functions for binary hashing (Raziperchikolaei and Carreira-Perpinan, 2016) and parametric nonlinear embeddings (Carreira-Perpinan and Vladymyrov, 2015). Although this paper proposes and analyses ParMAC in general, our MPI implementation is for the particular case of binary autoencoders. These define a nonconvex, nondifferentiable problem, yet its MAC algorithm is simple and effective.

MAC algorithm for binary autoencoders. A binary autoencoder (BA) is a usual autoencoder but with a binary code layer. It consists of an encoder h(x) that maps a real vector x ∈ R^D onto a binary code vector with L < D bits, z ∈ {0,1}^L, and a linear decoder f(z) which maps z back to R^D in an effort to reconstruct x. We will call h a binary hash function (see later). Let us write h(x) = σ(Ax) (A includes a bias by having an extra dimension x_0 = 1 for each x), where A ∈ R^{L×(D+1)} and σ(t) is a step function applied elementwise, i.e., σ(t) = 1 if t ≥ 0 and σ(t) = 0 otherwise. Given a dataset of D-dimensional patterns X = (x_1, ..., x_N), our objective function, which involves the nested model y = f(h(x)), is the usual least-squares reconstruction error E(h, f) = Σ_{n=1}^N ||x_n − f(h(x_n))||², whose optimisation is NP-complete. Where the gradients do exist wrt A they are zero, so optimisation of h using chain-rule gradients does not apply. We introduce as auxiliary coordinates the outputs of h, i.e., the codes for each of the N input patterns, and obtain the following equality-constrained problem:

$$\min_{h,f,\mathbf{Z}} \sum_{n=1}^{N} \|x_n - f(z_n)\|^2 \quad \text{s.t.} \quad z_n = h(x_n),\; z_n \in \{0,1\}^L,\; n = 1,\dots,N \tag{1}$$

This is then solved by optimising the corresponding quadratic-penalty function for an increasing penalty parameter μ:

$$E_Q(h,f,\mathbf{Z};\mu) = \sum_{n=1}^{N} \left( \|x_n - f(z_n)\|^2 + \mu \|z_n - h(x_n)\|^2 \right) \quad \text{s.t.} \quad z_n \in \{0,1\}^L,\; n = 1,\dots,N \tag{2}$$

Finally, we apply alternating optimisation over Z and W = (h, f). This gives the following steps:

Over Z for fixed (h, f), this is a binary optimisation on NL variables, but it separates into N independent optimisations each on only L variables, with the form of a binary proximal operator (where we omit the index n): min_z ||x − f(z)||² + μ||z − h(x)||² s.t. z ∈ {0,1}^L. This can be solved approximately by alternating optimisation over bits.

Over W = (h, f) for fixed Z, we obtain L + D independent problems: for each of the L single-bit hash functions (which try to predict Z optimally from X), each solvable by fitting a linear SVM; and for each of the D linear decoders in f (which try to reconstruct X optimally from Z), each a linear least-squares problem.

The user must choose a schedule for the penalty parameter μ (a sequence of values 0 < μ_0 < μ_1 < ... → ∞). This should increase slowly enough that the binary codes can change considerably and explore better solutions before the constraints are satisfied and the algorithm stops.

    input X_{D×N} = (x_1, ..., x_N), L ∈ N              (X: training points)
    initialise Z_{L×N} = (z_1, ..., z_N) ∈ {0,1}^{L×N}  (Z: auxiliary coordinates)
    for μ = μ_0 < μ_1 < ... < μ_∞
        parfor l = 1, ..., L:                                        (W step over h)
            h_l(·) ← fit SVM to (X, Z_{l·})     (h = (h_1,...,h_L): encoders (hash fcn.), h: R^D → {0,1}^L)
        parfor d = 1, ..., D:                                        (W step over f)
            f_d(·) ← least-squares fit to (Z, X_{d·})   (f = (f_1,...,f_D): decoders, f: R^L → R^D)
        parfor n = 1, ..., N:                                        (Z step)
            z_n ← arg min_{z_n ∈ {0,1}^L} ||x_n − f(z_n)||² + μ||z_n − h(x_n)||²
        if no change in Z and Z = h(X) then stop
    return h, Z = h(X)

Figure 1: MAC algorithm for binary autoencoders. "parfor" indicates a for loop whose iterations are carried out in parallel. The steps over h and f can be run in parallel as well.
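A sketch of the Z step for a single data point is given below: alternating optimisation over the L bits of z, as described above. This is an illustrative numpy implementation under our own simplifications (greedy bit sweeps, a linear decoder given as a matrix f_W and bias f_b), not the authors' code.

    import numpy as np

    def ba_objective(x, z, h_of_x, f_W, f_b, mu):
        # ||x - f(z)||^2 + mu * ||z - h(x)||^2 with a linear decoder f.
        return (np.sum((x - (f_W @ z + f_b)) ** 2)
                + mu * np.sum((z - h_of_x) ** 2))

    def z_step_single(x, z, h_of_x, f_W, f_b, mu, n_passes=2):
        # Set each bit to its best value given the rest; sweep a few times.
        for _ in range(n_passes):
            for l in range(z.size):
                z[l] = min((0, 1),
                           key=lambda b: ba_objective(
                               x, np.where(np.arange(z.size) == l, b, z),
                               h_of_x, f_W, f_b, mu))
        return z

    # toy usage with random decoder and codes
    L, D = 8, 16
    rng = np.random.default_rng(0)
    x = rng.normal(size=D)
    f_W, f_b = rng.normal(size=(D, L)), rng.normal(size=D)
    h_of_x = rng.integers(0, 2, size=L)
    z = z_step_single(x, h_of_x.copy(), h_of_x, f_W, f_b, mu=1.0)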
With BAs, MAC stops for a finite value of μ, which occurs whenever Z does not change compared to the previous Z step. This gives a practical stopping criterion. Carreira-Perpinan and Raziperchikolaei (2015) give proofs of these statements and further details about the algorithm. Fig. 1 gives the MAC algorithm for BAs.

The BA was proposed as a way to learn good binary hash functions for fast, approximate information retrieval (Carreira-Perpinan and Raziperchikolaei, 2015). Binary hashing (Grauman and Fergus, 2013) has emerged in recent years as an effective way to do fast, approximate nearest-neighbour searches in image databases. The real-valued, high-dimensional image vectors are mapped onto a binary space with L bits and the search is performed there using Hamming distances at a vastly faster speed and smaller memory (e.g. N = 10^9 points with D = 500 take 2 TB, but only 8 GB using L = 64 bits, which easily fits in RAM). As shown by Carreira-Perpinan and Raziperchikolaei (2015), training BAs with MAC beats approximate optimisation approaches such as relaxing the codes or the step function in the encoder, and yields state-of-the-art binary hash functions h in unsupervised problems, improving over established approaches such as iterative quantisation (ITQ) (Gong et al., 2013). We focus mostly on linear hash functions because these are, by far, the most used type of hash functions in the literature of binary hashing, due to the fact that computing the binary codes for a test image must be fast at run time.

MAC in general. With a nested function with K layers, we can introduce auxiliary coordinates at each layer. For example, with a neural net, this decouples the weight vector of every hidden unit in the W step, which can be solved as a logistic regression (see Carreira-Perpinan and Alizadeh, 2016). For a large net with a large dataset, this affords an enormous potential for parallel computation.

MAC and EM. MAC is very similar to expectation-maximisation (EM) at a conceptual level. EM (McLachlan and Krishnan, 2008) applies generally to many probabilistic models. The resulting algorithm can be very different (e.g. EM for Gaussian mixtures vs EM for hidden Markov models), but it always alternates two steps that conceptually do the following. The E step updates in parallel the posterior probabilities. This separates over data points and is like the Z step in MAC, where the posterior probabilities are the auxiliary coordinates, and where the step may be in closed form or require optimisation, depending on the model. The M step updates in parallel the "submodels". For a mixture with M components, these are the M Gaussians (means, covariances, proportions). This separates over submodels and is like the W step in MAC. For BAs, the submodels are the L encoders (linear SVMs) and the D decoders (linear regressors); for a neural net, each weight vector of a hidden unit is a submodel (a logistic regressor). For Gaussian mixtures, the M step can be done exactly in one "epoch" because it is a simple average. For MAC, it usually requires optimisation and so multiple epochs. In fact, ParMAC applies to EM by using e = 1 epoch: in the W step, the Gaussians visit each machine circularly and (their averages) are updated on its data; in the Z step, each machine updates its posterior probabilities.
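To make the analogy concrete, here is one EM iteration for a (spherical) Gaussian mixture written in MAC vocabulary; this is a toy numpy sketch of the standard EM updates, with our own comments mapping them onto the Z and W steps.

    import numpy as np

    def em_iteration(X, means, variances, priors):
        N, D = X.shape
        M = means.shape[0]
        # "Z step" (E step): per-point posteriors, parallel over data points.
        log_p = np.stack([
            -0.5 * np.sum((X - means[m]) ** 2, axis=1) / variances[m]
            - 0.5 * D * np.log(variances[m]) + np.log(priors[m])
            for m in range(M)], axis=1)
        R = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        R /= R.sum(axis=1, keepdims=True)
        # "W step" (M step): per-Gaussian updates, parallel over submodels.
        Nm = R.sum(axis=0)
        means = (R.T @ X) / Nm[:, None]
        variances = np.array([
            np.sum(R[:, m] * np.sum((X - means[m]) ** 2, axis=1)) / (D * Nm[m])
            for m in range(M)])
        priors = Nm / N
        return means, variances, priors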
In the rest of the paper, some readers may find this analogy useful and think of EM for Gaussian mixtures instead of MAC, replacing "submodels" and "auxiliary coordinates" in MAC with "Gaussians" and "posterior probabilities" in EM, respectively.

[Figure 2 diagram omitted: P = 4 machines, each storing a disjoint shard of the data points (x_n, y_n, z_n) and a queue of submodels; copies of the M = 12 submodels circulate between machines.]

Figure 2: ParMAC model with P = 4 machines, M = 12 submodels "w_h" and N = 40 data points. Submodels h, h + M, h + 2M and h + 3M are copies of submodel h, but only one of them is the most currently updated. At the end of the W step all copies are identical.

A specific MAC algorithm depends on the model and objective function and on how the auxiliary coordinates are introduced. We can achieve steps that are closed-form, convex, nonconvex, binary or others. However, we will assume the following always hold: (1) Separability over data points. In the Z step, the N subproblems for z_1,...,z_N are independent, one per data point. Each z_n step depends on the current model. (2) Separability over submodels. In the W step, there are M independent submodels, where M depends on the problem. For example, M is the number of hidden units in a deep net, or the number of hash functions and linear decoders in a BA. Each submodel depends on all the data and coordinates. We now show how to turn this into a distributed, low-communication ParMAC algorithm.

The basic idea in ParMAC is as follows. With large datasets in distributed systems, it is imperative to minimise data movement over the network, because the communication time generally far exceeds the computation time in modern architectures. In MAC we have 3 types of data: the original training data (X, Y), the auxiliary coordinates Z, and the model parameters (the submodels). Usually, the latter type is far smaller. In ParMAC, we never communicate training or coordinate data; each machine keeps a disjoint portion of (X, Y, Z) corresponding to a subset of the points. Only model parameters are communicated, during the W step, following a circular topology, which implicitly implements a stochastic optimisation. The model parameters are the hash functions h and the decoder f for BAs, and the weight vector w_h of each hidden unit h for deep nets. Let us see this in detail (refer to fig. 2).

Assume we have P identical processing machines, each with its own memory and CPU, connected through a network in a circular unidirectional topology. Each machine stores a subset of the data points and corresponding coordinates (x_n, y_n, z_n), such that the subsets are disjoint and their union is the entire data. Before the Z step starts, each machine contains all the (just updated) submodels. This means that in the Z step each machine processes its auxiliary coordinates {z_n} independently of all other machines, i.e., no communication occurs.
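A sketch of this data layout using mpi4py is shown below (the paper's implementation is in C/C++ with MPI; file names, shapes and variable names here are hypothetical). Each machine receives a disjoint shard of the data, which then never leaves it, and initialises its queue with its M/P submodels.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    P, rank = comm.Get_size(), comm.Get_rank()

    if rank == 0:
        X = np.load("data.npy")               # hypothetical dataset file
        shards = np.array_split(X, P)         # disjoint data portions
    else:
        shards = None
    X_local = comm.scatter(shards, root=0)    # the data never moves again
    # local auxiliary coordinates, e.g. L = 16 random bits per point
    Z_local = np.random.randint(0, 2, size=(len(X_local), 16))

    M = 32                                    # total number of submodels
    queue = list(range(rank * (M // P), (rank + 1) * (M // P)))  # initial queue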
The W step is more subtle. At the beginning of the W step, each machine will contain all the submodels and its portion of the data and (just updated) coordinates. Each submodel must have access to the entire data and coordinates in order to update itself and, since the data cannot leave its home machine, the submodel must go to the data. We achieve this in the circular topology with asynchronous processing, as follows. Each machine keeps a queue of submodels to be processed, and repeatedly performs the following operations: extract a submodel from the queue, process it on its data and send it to the machine's successor (which will insert it in its queue). If the queue is empty, the machine waits until it is nonempty. The queue of each machine is initialised with a portion M/P of submodels associated with that machine (e.g. in fig. 2, machine 1's queue contains submodels 1-3, machine 2 submodels 4-6, etc.). Each submodel carries a counter that is initially 1 and increases every time it visits a machine. When it reaches P, the submodel has visited all machines in sequence and has completed an epoch. We repeat this for e epochs and, to ensure all machines have all final submodels before starting the Z step, we run a communication-only epoch e + 1 (without computation), where submodels simply move from machine to machine. A minimal sketch of this circular protocol appears below.

Since each submodel is updated as soon as it visits a machine, rather than computing the exact gradient once it has visited all machines and then taking a step, the W step is really carrying out stochastic steps for each submodel. For example, if the update is done by a gradient step, we are actually implementing stochastic gradient descent (SGD) where the minibatches are of size N/P (or smaller, if we subdivide a machine's data portion into minibatches, which should typically be the case in practice). From this point of view, we can regard the W step as doing SGD on each submodel in parallel by having each submodel visit the minibatches in each machine.

As described, and as implemented in our experiments, the entire model parameters are communicated e + 1 times in a MAC iteration if running e epochs in the W step. We can also run e epochs with only 2 rounds of communication by having a submodel do e consecutive passes within each machine's data. This reduces the amount of shuffling, but should not be a problem if the data are randomly distributed over machines.

Extensions of ParMAC. Data shuffling, which improves the SGD convergence speed, can be achieved without data movement by accessing the local data in random order at each epoch (within-machine), and by randomising the circular topology at each epoch (across-machine). Load balancing is simple because the work in both W and Z steps is proportional to the number of data points N. Hence, if the processing power of machine p is proportional to α_p > 0, we allocate to it N α_p / (α_1 + ... + α_P) data points. Streaming, i.e., discarding old data and adding new data during training, can be done by adding/removing data within-machine, or by adding/removing machines and updating the circular topology. Fault tolerance is possible because we can still learn a good model even if we lose the data from a machine that fails, and because in the W step we can revert to older copies of the lost submodels residing in other machines. See further details in Carreira-Perpinan and Alizadeh (2016).
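The following is a hedged mpi4py sketch of the circular W step (the actual implementation is in C with MPI; sgd_update is a hypothetical placeholder). Each submodel carries a hop counter; it is updated during the e computing epochs and merely forwarded during the final communication-only epoch, so that every machine ends up holding the final version of all submodels.

    from mpi4py import MPI

    def sgd_update(params, X_local, Z_local):
        # hypothetical placeholder: a few SGD steps of this submodel on
        # the local data shard (e.g. hinge-loss updates for a hash function)
        return params

    def w_step(initial, M_total, X_local, Z_local, comm, e=1):
        P, rank = comm.Get_size(), comm.Get_rank()
        succ, pred = (rank + 1) % P, (rank - 1) % P
        queue = [(sid, params, 0) for sid, params in initial]  # (id, params, hops)
        latest = {}
        for _ in range(M_total * (e + 1)):     # visits this machine will see
            if not queue:
                queue.append(comm.recv(source=pred))   # blocking receive
            sid, params, hops = queue.pop(0)
            if hops < e * P:                   # computing epochs: local SGD
                params = sgd_update(params, X_local, Z_local)
            hops += 1
            latest[sid] = params               # keep the newest copy locally
            if hops < (e + 1) * P:
                # standard blocking send; the C implementation uses the
                # buffered MPI_Bsend here so the sender never waits
                comm.send((sid, params, hops), dest=succ)
        return latest                          # final version of every submodel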
A theoretical model of the parallel speedup. We can estimate the runtime of the W and Z steps, assuming there are M independent submodels of the same size in the W step, using e epochs, on a dataset with N training points, distributed over P identical machines (each with N/P points). Let t_c^W be the computation time per submodel and data point in the W step, t_c^Z the computation time per data point and submodel in the Z step, and t_m^W the communication time per submodel in the W step. Then the runtime of the W and Z steps is T^W(P) = ⌈M/P⌉ (t_c^W N/P + t_m^W) P e + ⌈M/P⌉ t_m^W P and T^Z(P) = ⌈N/P⌉ M t_c^Z, respectively. Hence the parallel speedup is (see details in Carreira-Perpinan and Alizadeh, 2016):

$$S(P) = \frac{T(1)}{T(P)} = \frac{\rho M}{\rho_2 \left\lceil \frac{M}{P} \right\rceil + \rho_1 \frac{M}{P} + \left\lceil \frac{M}{P} \right\rceil \frac{P}{N}}, \qquad \rho_1 = \frac{t_c^Z}{(e+1)\,t_m^W}, \quad \rho_2 = \frac{e\,t_c^W}{(e+1)\,t_m^W}, \quad \rho = \rho_1 + \rho_2 = \frac{e\,t_c^W + t_c^Z}{(e+1)\,t_m^W} \tag{3}$$

where ρ, ρ_1 and ρ_2 are ratios of computation vs communication, dependent on the optimisation algorithm in the W and Z steps, and on the performance of the distributed system and MPI library.

Hence, if P ≤ M and M is divisible by P we have S(P) = P/(1 + P/(ρN)), and if P > M we have S(P) = ρM/(ρ_2 + ρ_1 M/P + P/N). In practice, typically we have ρ ≪ 1 (because communication dominates computation in current architectures) and ρ_2 N ≫ 1 (large dataset). If we take P ≪ ρ_2 N, then S(P) ≈ P if P ≤ M and S(P) ≈ ρM/(ρ_2 + ρ_1) if P ≥ M. Hence, the speedup is nearly perfect if using fewer machines than submodels, and otherwise it peaks at S* = ρM/(ρ_2 + 2√(ρ_1 M/N)) > M for P = P* = √(ρ_1 M N) > M and decreases thereafter. This affords very large speedups for large datasets and large models. This theoretical speedup matches well our measured ones (see the experiments section), and can be used to determine optimal values for the number of machines P to use in practice (subject to additional constraints, e.g. cost of the machines).

Eq. (3) also shows that we can leave the speedup unchanged by trading off dataset size and computation/communication times, since S(P) depends on them only through the products ρ_1 N and ρ_2 N: the speedup is preserved as long as one of these holds: N t_c^W and N t_c^Z remain constant (for fixed t_m^W); or t_m^W grows in proportion to both N t_c^W and N t_c^Z.

In the BA, we have submodels of different size: encoders of size D and decoders of size L ≤ D. We can model this by "grouping" the D decoders into L groups of D/L decoders each, resulting in M = 2L equal-size submodels (assuming the ratio of computation and communication times of decoder vs encoder is L/D ≤ 1).

Convergence of ParMAC. The only approximation that ParMAC makes to the original MAC algorithm is using SGD in the W step. Since we can guarantee convergence of SGD under certain conditions (e.g. Robbins-Monro schedules), we can recover the original convergence guarantees for MAC to a local stationary point with differentiable layers (see details in Carreira-Perpinan and Alizadeh, 2016). This convergence guarantee is independent of the number of layers, models and processors. With nondifferentiable layers, the convergence properties of MAC (and ParMAC) are not well known. In particular, for the binary autoencoder the encoding layer is discrete and the problem is NP-complete. While convergence guarantees are important theoretically, in practical applications with large datasets in a distributed setting one typically runs SGD for just a few epochs, even one or less than one (i.e., we stop SGD before passing through all the data). This typically reduces the objective function to a good enough value as fast as possible, since each pass over the data is very costly.
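As an aid to choosing P, the following small Python calculator evaluates eq. (3) as reconstructed above; the parameter values in the usage line are those we read off the experiments section for SIFT-1M and should be treated as illustrative.

    import math

    def speedup(P, M, N, e, tWc, tZc, tWm):
        r1 = tZc / ((e + 1) * tWm)        # rho_1: Z-step computation ratio
        r2 = e * tWc / ((e + 1) * tWm)    # rho_2: W-step computation ratio
        rho = r1 + r2
        c = math.ceil(M / P)
        return rho * M / (r2 * c + r1 * M / P + c * P / N)

    # e.g. the fit reported for SIFT-1M: tWm = 1, tWc = 1e-4, tZc = 40
    S = [speedup(P, M=32, N=10**6, e=1, tWc=1e-4, tZc=40, tWm=1.0)
         for P in (1, 32, 128)]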
In our experiments, 1-2 epochs in the W step make ParMAC very similar to MAC using an exact step.

Circular vs parameter-server topologies. We also considered implementing ParMAC using a parameter-server (PS) topology rather than a circular one, but the latter is better. With a PS we do parallel SGD on each submodel independently, i.e., each worker runs SGD on its own submodel replica for a while, sends it to the PS, and this broadcasts an "average" submodel back to the workers, asynchronously. The circular topology does true SGD on each submodel independently from the others. We can show the runtime per iteration using a PS is equal to that of the circular topology only if the server can communicate with P workers simultaneously (rather than sequentially); otherwise it is slower. The reason is the PS has more communication. The PS has some additional disadvantages: parallel SGD converges more slowly than true SGD and is difficult to apply if the W step is nonconvex; and it needs extra machine(s) to act as parameter server(s). The fundamental issue is that both topologies differ in how they employ the available parallelism: the circular topology updates different, independent submodels, while the PS updates replicas of the same submodels."}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "MPI implementation of ParMAC for BAs. We have used C/C++, the GSL and BLAS libraries for mathematical operations, and the Message Passing Interface (MPI) (Gropp et al., 1999) for inter-process communication. MPI is a widely used framework for high-performance parallel computing, available on multiple platforms. It is particularly suitable for ParMAC because of its support of the SPMD (single program, multiple data) model. In MPI, processes in different machines communicate through messages. To receive data, we use the synchronous blocking receive function MPI_Recv: the process calling this blocks until the data arrives. To send data we use the buffered blocking send function MPI_Bsend. We allocate enough memory and attach it to the system. The process calling MPI_Bsend blocks until the buffer is copied to the MPI internal memory; after that, the MPI library takes care of sending the data. See a code snippet in Carreira-Perpinan and Alizadeh (2016).

Distributed-memory cluster. We used General Computing Nodes from the UCSD Triton Shared Computing Cluster (TSCC), available to the public for a fee. Each node contains two 8-core Intel Xeon E5-2670 processors (16 cores in total), 64GB RAM (4GB/processor) and a 500GB hard drive. The nodes are connected through a 10GbE network. We used up to P = 128 processors. Carreira-Perpinan and Alizadeh (2016) give detailed specs as well as experiments on a shared-memory machine.
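For readers less familiar with the buffered-send pattern described above, here is an mpi4py analogue (not the authors' C code): a buffer is attached once, bsend returns as soon as the message is copied into it, and recv blocks until the message arrives. Buffer size and message contents are hypothetical.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    buf = MPI.Alloc_mem(1 << 20)     # allocate a 1 MB send buffer
    MPI.Attach_buffer(buf)           # attach it to the MPI system

    if comm.Get_rank() == 0:
        # buffered blocking send, analogous to MPI_Bsend
        comm.bsend({"weights": [0.1, 0.2]}, dest=1, tag=7)
    elif comm.Get_rank() == 1:
        # synchronous blocking receive, analogous to MPI_Recv
        msg = comm.recv(source=0, tag=7)

    MPI.Detach_buffer()
    MPI.Free_mem(buf)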
Datasets. We have used 3 well-known colour image retrieval benchmarks. (1) CIFAR (Krizhevsky, 2009) contains 60 000 images (N = 50 000 training and 10 000 test), represented by D = 320 GIST features. (2) SIFT-1M (Jegou et al., 2011a) contains N = 10^6 training and 10^4 test images, each represented by D = 128 SIFT features. (3) SIFT-1B (Jegou et al., 2011a) has three subsets: 10^9 base vectors where the search is performed, N = 10^8 learning vectors used to train the model, and 10^4 query vectors.

Performance measures. Regarding the quality of the BA and hash functions learnt, we report the retrieval precision (%) in the test set, using as true neighbours the K nearest images in Euclidean distance in the original space, and as retrieved neighbours in the binary space the k nearest images in Hamming distance. We set (K, k) = (1 000, 100) for CIFAR and (10 000, 10 000) for SIFT-1M. For SIFT-1B, as suggested by the dataset creators, we report the recall@R: the average number of queries for which the nearest neighbour is ranked within the top R positions (for varying values of R); in case of tied distances, we place the query at top rank. All these measures are computed offline once the BA is trained. Carreira-Perpinan and Alizadeh (2016) give additional measures and experiments.

Models and their parameters. We use BAs with linear encoders (linear SVMs), except with SIFT-1B where we also use kernel SVMs. The decoder is always linear. We set L = 16 bits (hash functions) for CIFAR and SIFT-1M and L = 64 bits for SIFT-1B. We initialise the binary codes from truncated PCA run on a subset of the training set (small enough that it fits in one processor). To train the encoder (L SVMs) and decoder (D linear mappings) with stochastic optimisation, we used the SGD code from (Bottou and Bousquet, 2008), using its default parameter settings. The SGD step size is tuned automatically in each iteration by examining the first 1 000 datapoints. We use a multiplicative schedule μ_i = μ_0 a^i, where the initial value μ_0 and the factor a > 1 are tuned offline in a trial run using a small subset of the data. For CIFAR we use μ_0 = 0.005 and a = 1.2 over 26 iterations (i = 0, ..., 25). For SIFT-1M and SIFT-1B we use μ_0 = 10^-4 and a = 2 over 10 iterations.

Effect of stochastic steps in the W step. Fig. 3 shows the effect on the precision on CIFAR of varying the number of epochs within the W step and shuffling the data, as a function of the number of processors P. As the number of epochs increases, the W step is solved more exactly (8 epochs is practically exact in this data). Fewer epochs, even just one, cause only a small degradation. The reason is that, although these are relatively small datasets, they contain sufficient redundance that few epochs are sufficient to decrease the error considerably. This is also helped by the accumulated effect of epochs over MAC iterations. Running more epochs increases the runtime and lowers the parallel speedup in this particular model, because we use few bits (L = 16) and therefore few submodels (M = 2L = 32) compared to the number of machines (up to P = 128), so the W step has less parallelism. The positive effect of data shuffling in the W step is clear: shuffling generally increases the precision with no increase in runtime.

[Figure 3 plots omitted: precision vs runtime for P = 1 and different numbers of epochs e (left), and precision vs iteration for e = 8 and different numbers of machines P (right), each with and without shuffling.]

Figure 3: Precision in the CIFAR dataset.
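One reasonable numpy implementation of the offline precision measure described above is sketched below (our own code, not the authors'): ground-truth neighbours are the K nearest points in Euclidean distance, retrieved neighbours the k nearest in Hamming distance on the binary codes.

    import numpy as np

    def precision(X_test, X_base, C_test, C_base, K=1000, k=100):
        prec = []
        for x, c in zip(X_test, C_test):
            # true neighbours: K nearest in Euclidean distance
            true_nn = np.argsort(np.sum((X_base - x) ** 2, axis=1))[:K]
            # retrieved neighbours: k nearest in Hamming distance
            ham = np.sum(C_base != c, axis=1)
            retrieved = np.argsort(ham, kind="stable")[:k]
            prec.append(len(np.intersect1d(true_nn, retrieved)) / k)
        return 100.0 * np.mean(prec)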
Speedup. The fundamental advantage of ParMAC and distributed optimisation in general is the ability to train on datasets that do not fit in a single machine, and the reduction in runtime because of parallel processing. Fig. 4 shows the "strong scaling" speedups achieved, as a function of the number of machines P for a fixed problem size (dataset and model), in CIFAR and SIFT-1M (N = 50K and 1M training points, respectively). Even though these datasets and especially the number of independent submodels (M = 2L = 32 effective submodels of the same size, as discussed earlier) are relatively small, the speedups we achieve are nearly perfect for P ≤ M and hold very well for P > M up to the maximum number of machines we used (P = 128 in the distributed system). The speedups flatten as the number of W-step epochs (and consequently the amount of communication) increases, because for this experiment the bottleneck is the W step, whose parallelisation ability (i.e., the number of concurrent processes) is limited by M = 2L (the Z step has N independent processes and is never a bottleneck, since N is very large). However, as noted earlier, using 1 to 2 epochs gives a good enough result, very close to doing an exact W step. The runtime for SIFT-1M on P = 128 machines with 8 epochs was 12 minutes and its speedup 100. This is particularly remarkable given that the original, nested model did not have model parallelism.

[Figure 4 plots omitted: speedup S(P) vs number of machines P for 1 to 8 W-step epochs, measured (top row) and predicted by the theoretical model (bottom row), for CIFAR, SIFT-1M and SIFT-1B.]

Figure 4: Speedup S(P) as a function of the number of machines P (top: experiment, bottom: theory). The dataset size and number of submodels (N, M) is (50 000, 32) for CIFAR, (10^6, 32) for SIFT-1M and (10^8, 128) for SIFT-1B.

Fig. 4 also shows the speedups predicted by our theoretical model. We set the parameters e and N to their known values, and M = 2L = 32 for CIFAR and SIFT-1M and M = 2L = 128 for SIFT-1B. For the time parameters, we set t_m^W = 1 to fix the time units, and we set t_c^W and t_c^Z by trial and error to achieve a reasonably good fit to the experimental speedups: t_c^W = 10^-4 for both datasets, and t_c^Z = 200 for CIFAR and 40 for SIFT-1M. Although these are fudge factors, they are in rough agreement with the fact that communicating a weight vector over the network is orders of magnitude slower than updating it with a gradient step, and that the Z step is quite slower than the W step because of the binary optimisation it involves.

Large-scale experiment. SIFT-1B is one of the largest datasets, if not the largest one, that are publicly available for comparing nearest-neighbour search algorithms with known ground-truth (i.e., precomputed exact Euclidean distances from each query to its k nearest vectors in the base set). The training set contains N = 100M vectors, each consisting of 128 SIFT features. We used L = 64 hash functions (M = 128 submodels): linear SVMs as before, and kernel SVMs.
These have fixed Gaussian radial basis functions (2 000 centres picked at random from the training set and bandwidth σ = 160), so the only trainable parameters are the weights, and the MAC algorithm does not change, except that it operates on a 2 000-dimensional input vector of kernel values instead of the 128 SIFT features. We use e = 2 epochs with shuffling. All these decisions were based on trials on a subset of the training dataset. We initialised the binary codes from truncated PCA trained on a subset of size 1M (recall@R=100: 55.2%), which gave results comparable to the baseline in (Jegou et al., 2011b).

We ran ParMAC on the whole training set in the distributed system with 128 processors for 6 iterations and achieved a recall@R=100 of 61.5% in 29 hours (linear SVM) and 66.1% in 83 hours (kernel SVM). Using a scaled-down model and training set, we estimated that training in one machine (with enough RAM to hold the data and parameters) would take months. The theoretical speedup (fig. 4, right plot, using the same parameters as in SIFT-1M) is nearly perfect (note the plot goes up to P = 1 024 machines, even though our experiments are limited to P = 128). This is because M is quite larger and N is much larger than in the previous datasets."}, {"section_index": "4", "section_name": "5 DISCUSSION", "section_text": "Developing parallel, distributed optimisation algorithms for nonconvex problems in machine learning is challenging, as shown by recent efforts by large teams of researchers (Le et al., 2012; Dean et al., 2012). One important advantage of ParMAC is its simplicity. Data and model parallelism arise naturally thanks to the introduction of auxiliary coordinates. The corresponding optimisation subproblems can often be solved reusing existing code as a black box (as with the SGD training of SVMs and linear mappings in the BA). A circular topology is sufficient to achieve low communication between machines. There is no close coupling between the model structure and the distributed system architecture. This makes ParMAC suitable for architectures as different as supercomputers and data centres.

Further improvements can be made in specific problems. For example, we may have more parallelisation or fewer dependencies (e.g. the weights of hidden units in layer k of a neural net depend only on auxiliary coordinates in layers k and k + 1). This may reduce the communication in the W step, by sending to a given machine only the model portion it needs, or by allocating cores within a multicore machine accordingly. The W and Z step optimisations can make use of further parallelisation by GPUs or by distributed convex optimisation algorithms. Many more refinements can be done, such as storing or communicating reduced-precision values with little effect on the accuracy. In this paper, we have tried to keep our implementation as simple as possible, because our goal was to understand the parallelisation speedups of ParMAC in a setting as general as possible, rather than trying to achieve the very best performance for a particular dataset, model or distributed system."}, {"section_index": "5", "section_name": "6 CONCLUSION", "section_text": "We have proposed ParMAC, a distributed model for the method of auxiliary coordinates for training nested, nonconvex models in general, analysed its parallel speedup and convergence, and demonstrated it with an MPI-based implementation for a particular case, to train binary autoencoders.
MAC creates parallelism by introducing auxiliary coordinates for each data point to decouple nested terms in the objective function. ParMAC is able to translate the parallelism inherent in MAC into a distributed system by 1) using data parallelism, so that each machine keeps a portion of the original data and its corresponding auxiliary coordinates; and 2) using model parallelism, so that independent submodels visit every machine in a circular topology, effectively executing epochs of a stochastic optimisation, without the need for a parameter server and therefore without communication bottlenecks. The convergence properties of MAC (to a stationary point of the objective function) remain essentially unaltered in ParMAC. The parallel speedup can be theoretically predicted to be nearly perfect when the number of submodels is comparable to or larger than the number of machines, and to eventually saturate as one continues to increase the number of machines, and indeed this was confirmed in our experiments. ParMAC also makes it easy to account for data shuffling, load balancing, streaming and fault tolerance. Hence, we expect that ParMAC could be a basic building block, in combination with other techniques, for the distributed optimisation of nested models in big data settings."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "Work supported by a Google Faculty Research Award and by NSF award IIS-1423515. We thank Ramin Raziperchikolaei (UC Merced) for discussions about binary autoencoders, Dong Li (UC Merced) for discussions about MPI and performance evaluation on parallel systems, and Quoc Le (Google) for discussions about Google's DistBelief system."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. White paper.
Y. Bengio, J.-F. Paiement, P. Vincent, O. Delalleau, N. Le Roux, and M. Ouimet. Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps, and spectral clustering. NIPS, 2004.
D. P. Bertsekas. Incremental gradient, subgradient, and proximal methods for convex optimization: A survey. In S. Sra, S. Nowozin, and S. J. Wright, editors, Optimization for Machine Learning. MIT Press, 2011.
L. Bottou. Large-scale machine learning with stochastic gradient descent. COMPSTAT, 2010.
L. Bottou and O. Bousquet. The tradeoffs of large scale learning. NIPS, 2008.
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3, 2011.
J. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for L1-regularized loss minimization. ICML, 2011.
M. A. Carreira-Perpinan and M. Alizadeh. ParMAC: Distributed optimisation of nested functions, with application to learning binary autoencoders. arXiv:1605.09114 [cs.LG], May 30 2016.
M. A. Carreira-Perpinan and R. Raziperchikolaei. Hashing with binary autoencoders. CVPR, 2015.
M. A. Carreira-Perpinan and M. Vladymyrov. A fast, universal algorithm to learn parametric nonlinear embeddings. NIPS, 2015.
M. A. Carreira-Perpinan and W. Wang. Distributed optimization of deeply nested systems. arXiv:1212.5921 [cs.LG], Dec. 24 2012.
M. A. Carreira-Perpinan and W. Wang. Distributed optimization of deeply nested systems. AISTATS, 2014.
V. Cevher, S. Becker, and M. Schmidt. Convex optimization for big data: Scalable, randomized, and parallel
algorithms for big data analytics. IEEE Signal Processing Magazine, 31(5):32-43, Sept. 2014.
A. Coates, B. Huval, T. Wang, D. Wu, B. Catanzaro, and A. Ng. Deep learning with COTS HPC systems. ICML, 2013.
J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Ng. Large scale distributed deep networks. NIPS, 2012.
P. Drineas and M. W. Mahoney. On the Nystrom method for approximating a Gram matrix for improved kernel-based learning. J. Machine Learning Research, 6:2153-2175, Dec. 2005.
Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A Procrustean approach to learning binary codes for large-scale image retrieval. IEEE PAMI, 2013.
K. Grauman and R. Fergus. Learning binary hash codes for large-scale image search. In R. Cipolla, S. Battiato, and G. Farinella, editors, Machine Learning for Computer Vision, pages 49-87. Springer-Verlag, 2013.
W. Gropp, E. Lusk, and A. Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. MIT Press, second edition, 1999.
G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, Nov. 2012.
H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE PAMI, 33, 2011a.
H. Jegou, R. Tavenard, M. Douze, and L. Amsaleg. Searching in one billion vectors: Re-rank with source coding. ICASSP, 2011b.
A. Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, U. Toronto, 2009.
Q. Le, M. Ranzato, R. Monga, M. Devin, G. Corrado, K. Chen, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. ICML, 2012.
J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM J. Optimization, 25(1):351-376, 2015.
Y. Low, D. Bickson, J. Gonzalez, C. Guestrin, A. Kyrola, and J. M. Hellerstein. Distributed GraphLab: A framework for machine learning and data mining in the cloud. Proc. VLDB Endowment, 5, 2012.
G. J. McLachlan and T. Krishnan. The EM Algorithm and Extensions. Wiley, second edition, 2008.
F. Niu, B. Recht, C. Re, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. NIPS, 2011.
H. Ouyang, N. He, L. Tran, and A. Gray. Stochastic alternating direction method of multipliers. ICML, 2013.
R. Raziperchikolaei and M. A. Carreira-Perpinan. Optimizing affinity-based binary hashing using auxiliary coordinates. NIPS, 2016.
P. Richtarik and M. Takac. Distributed coordinate descent method for learning with big data. arXiv:1310.2059 [stat.ML], Oct. 8 2013.
F. Seide, H. Fu, J. Droppo, G. Li, and D. Yu. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. Interspeech, 2014.
A. Talwalkar, S. Kumar, and H. Rowley. Large-scale manifold learning. CVPR, 2008.
M. Vladymyrov and M. A. Carreira-Perpinan. Locally Linear Landmarks for large-scale manifold learning. ECML, 2013.
M. Vladymyrov and M. A. Carreira-Perpinan. The Variational Nystrom method for large-scale spectral problems. ICML, 2016.
W. Wang and M. A. Carreira-Perpinan. The role of dimensionality reduction in classification. AAAI, 2014.
C. K. I. Williams and M. Seeger. Using the Nystrom method to speed up kernel machines. NIPS, 2001.
S. J. Wright. Coordinate descent algorithms. Math. Prog., 151(1):3-34, June 2016.
E. P. Xing, Q. Ho, W. Dai, J. K. Kim, J. Wei, S. Lee, X. Zheng, P. Xie, A. Kumar, and Y. Yu. Petuum: A new platform for distributed machine learning on big data. IEEE Trans. Big Data, 1(2):49-67, Apr.-June 2015.
M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica. Spark: Cluster computing with working sets. HotCloud, 2010."}]
HyGTuv9eg [{"section_index": "0", "section_name": "INCORPORATING LONG-RANGE CONSISTENCY IN CNN-BASED TEXTURE GENERATION", "section_text": "Guillaume Berger & Roland Memisevic

Department of Computer Science and Operations Research, University of Montreal

guillaume.berger@umontreal.ca, memisevr@iro.umontreal.ca"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Gatys et al. (2015a) showed that pair-wise products of features in a convolutional network are a very effective representation of image textures. We propose a simple modification to that representation which makes it possible to incorporate long-range structure into image generation, and to render images that satisfy various symmetry constraints. We show how this can greatly improve rendering of regular textures and of images that contain other kinds of symmetric structure. We also present applications to inpainting and season transfer."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "There are currently two dominant approaches to texture synthesis: non-parametric techniques, which synthesize a texture by extracting pixels (or patches) from a reference image that are resampled for rendering (Efros & Leung, 1999; Kwatra et al., 2003), and parametric statistical models, which optimize reconstructions to match certain statistics computed on filter responses (Heeger & Bergen, 1995; Portilla & Simoncelli, 2000). Recently, the second approach has seen a significant advancement, after Gatys et al. (2015a) showed that a CNN pre-trained on an object classification task, such as ImageNet (Russakovsky et al., 2015), can be very effective at generating textures. Gatys et al. (2015a) propose to minimize with respect to the input image a loss function that measures how well certain high-level features of a reference image are preserved. The reference image constitutes an example of the texture to be generated.
The high-level features to be preserved are pair-wise products of feature responses, averaged over the whole image, referred to as the "Gramian" in that work. In Gatys et al. (2015b), the same authors show that by adding a second term to the cost, which matches the content of another image, one can render that other image in the "style" (texture) of the first. Numerous follow-up works have since then analysed and extended this approach (Ulyanov et al., 2016; Johnson et al., 2016; Ustyuzhaninov et al., 2016).

Given a reference texture, x, the algorithm described in Gatys et al. (2015a) permits to synthesize by optimization a new texture x̂ similar to x. To achieve this, the algorithm exploits an ImageNet pre-trained model to define metrics suitable for describing textures: "Gram" matrices of feature maps, computed on top of L selected layers. Formally, let N_l be the number of maps in layer l of a pre-trained CNN. The corresponding Gram matrix G^l is a N_l × N_l matrix defined as:

$$G^l_{ij} = \frac{1}{M_l} \big\langle F^l_{i\cdot},\, F^l_{j\cdot} \big\rangle = \frac{1}{M_l} \sum_{k=1}^{M_l} F^l_{ik} F^l_{jk} \tag{1}$$

where F^l_{i·} is the i-th vectorized feature map of layer l, M_l is the number of elements in each map of this layer, and where ⟨·,·⟩ denotes the inner product. Equation 1 makes it clear that G^l captures how feature maps from layer l are correlated to each other. Diagonal terms, G^l_{ii}, are the squared Frobenius norm of the i-th map, ||F^l_{i·}||²_F, so they represent its spatially averaged energy. Once the Gram matrices {G^l}_{l∈[1,L]} of the reference texture are computed, the synthesis procedure by Gatys et al. (2015a) amounts to constructing an image that produces Gram matrices {Ĝ^l}_{l∈[1,L]} that match the ones of the reference texture. More precisely, the following loss function is minimized with respect to the image being constructed:

$$\mathcal{L}_{style} = \sum_{l=1}^{L} w_l \, \big\| \hat{G}^l - G^l \big\|_F^2 \tag{2}$$

where w_l is a normalizing constant, similar to Gatys et al. (2015a). The overall process is summarized in Figure 2. While the procedure can be computationally expensive, there have been successful attempts reported recently which reduce the generation time (Ulyanov et al., 2016; Johnson et al., 2016).

[Figure 1 images omitted: a newspaper-text reference image and the corresponding generated texture.]

Figure 1: Reference image (left) and generated texture (right) using the procedure described in Gatys et al. (2015a).

[Figure 2 diagram omitted: the reference texture x and the generated image x̂ are passed through the pre-trained CNN; Gram matrices computed from the feature maps F^1, ..., F^L define per-layer losses E_l that are summed into L_style.]

Figure 2: Summary of the texture synthesis procedure described in Gatys et al. (2015a). We use the VGG-19 network (Simonyan & Zisserman, 2014) as the pre-trained CNN."}, {"section_index": "3", "section_name": "2.2 WHY GRAM MATRICES WORK", "section_text": "Feature Gram matrices are effective at representing texture, because they capture global statistics across the image due to spatial averaging. Since textures are static, averaging over positions is required and makes Gram matrices fully blind to the global arrangement of objects inside the reference image. This property permits to generate very diverse textures by just changing the starting point of the optimization. Despite averaging over positions, coherence across multiple features needs to be preserved (locally) to model visually sensible textures. This requirement is taken care of by the off-diagonal terms in the Gram matrix, which capture the co-occurrence of different features at a single spatial location.

[Figure 3 images omitted: reference image (3×264×264) and reconstructions using layers up to pool1, pool2, pool3, pool4 and pool5.]

Figure 3: Exploiting Gram matrices of feature maps as in Gatys et al. (2015a) (1st row) or only the squared Frobenius norm of feature maps (2nd row) for increasingly deep layers (from left to right).
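A minimal numpy sketch of eqs. (1)-(2) is given below (the paper's implementation uses Lasagne; variable names here are our own). F_l holds the N_l vectorized feature maps of layer l, one per row.

    import numpy as np

    def gram(F_l):
        # G^l_ij = <F^l_i, F^l_j> / M_l for F_l of shape (N_l, M_l)
        return F_l @ F_l.T / F_l.shape[1]

    def style_loss(feats_hat, feats_ref, weights):
        # weights[l] plays the role of the normalizing constant w_l
        return sum(w * np.sum((gram(Fh) - gram(Fr)) ** 2)
                   for Fh, Fr, w in zip(feats_hat, feats_ref, weights))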
Indeed, Figure 3 shows that restricting the texture representation to the squared Frobenius norm (i.e. diagonal terms) makes distinct object-parts from the reference texture encroach on each other in the reconstruction, as local coherence is not captured by the model. Exploiting off-diagonal terms improves the quality of the reconstruction, as consistency across feature maps is enforced (on average across the image).

The importance of local coherence can be intuitively understood in the case of linear features (or in the lowest layer of a convolutional network): when decomposing an image using Gabor-like features, local structure can be expressed as the relative offsets in the Fourier phase angles between multiple different filter responses. A sharp step-edge, for example, requires the phases of local Fourier components at different frequencies to align in a different way than a blurry edge or a ridge (Morrone & Burr, 1988; Kovesi, 1999). Also, natural images exhibit very specific phase relationships across frequency components in general, and destroying these makes the image look unnatural (Wang & Simoncelli, 2003). The same is not true of Fourier amplitudes (represented on the diagonals of the Gramian), which play a much less important role in the visual appearance (Oppenheim & Lim, 1981). In the case of deeper representations, the situation is more complex, but it is still local co-occurrence averaged over the whole image that captures texture.

Unfortunately, average local coherence falls short of capturing long-range structure in images. Spatial consistency is hard to capture within a single filter bank, because of combinatorial effects. Indeed, since Gram matrices capture coherence at a single spatial location, every feature would have to be matched to multiple transformed versions of itself. A corollary is that every feature would have to appear in the form of multiple transformed copies of itself in order to capture spatial consistency. However, this requirement clashes with the limited number of features available in each CNN layer. One way to address this is to use higher-layer features, whose receptive fields are larger. Unfortunately, as illustrated in Figure 3, even if using layers up to pool5, whose input receptive field covers the whole image^1 (first row, last column), the reconstruction remains mainly unstructured and the method fails to produce spatial regularities."}, {"section_index": "4", "section_name": "3 MODELING SPATIAL CO-OCCURENCES", "section_text": "To account for spatial structure in images, we propose encoding this structure in the feature self-similarity matrices themselves. To this end, we suggest that, instead of computing co-occurrences between multiple features within a map, we compute co-occurrences between feature maps F^l and spatially transformed feature maps T(F^l), where T denotes a spatial transformation. In the simplest case, T represents local translation, which amounts to measuring similarities between local features and other neighbouring features.
We denote by T_{x,+δ} the operation consisting in horizontally translating feature maps by δ pixels, and define the transformed Gramian:

$$G^l_{x,\delta,ij} = \frac{1}{M_l} \big\langle T_{x,+\delta}(F^l_{i\cdot}),\; T_{x,-\delta}(F^l_{j\cdot}) \big\rangle \tag{3}$$

^1 For this experiment, the image size is 264 × 264, which is also the size of the pool5 receptive field.

[Figure 4 diagram omitted: the raw feature maps of width X are shifted (T_{x,+δ}F^l and T_{x,−δ}F^l), vectorized, and multiplied to form the shifted Gram matrix.]

Figure 4: Computing the shifted Gram matrix for a given layer with feature maps of width X.

where T_{x,−δ} performs a translation in the opposite direction. As illustrated in Figure 4, the transformation in practice simply amounts to removing the δ first or last columns from the raw feature maps. Therefore, the inner product now captures how features at position (i, j) are correlated with features located at position (i, j + δ) on average. While Figure 4 illustrates the case where feature maps are horizontally shifted, one would typically use translations along both the x-axis and the y-axis. Our transformed Gramians are related to Gray-Level Co-occurrence Matrices (GLCM) (Haralick et al., 1973), which compute the unnormalized frequencies of pixel values for a given offset in an image. While GLCMs have been mainly used for analysis, some work tried to use these features for texture synthesis (Lohmann, 1995). Usually, GLCMs are defined along 4 directions: 0° and 90° (i.e. horizontal and vertical offsets), as well as 45° and 135° (i.e. diagonal offsets). In comparison, our method does not consider diagonal offsets and captures spatial coherence on high-level features, making use of a pre-trained CNN, instead of working directly in the pixel domain.

With this definition for transformed Gram matrices, we propose defining the loss as L = L_style + L_cc, where cc stands for cross-correlation. Like L_style, L_cc is a weighted sum of multiple losses L^l_{cc,δ}, defined for several selected layers as the mean squared error between the transformed Gram matrices of the reference texture and of the image being constructed:

$$\mathcal{L}_{cc} = \sum_{l=1}^{L} \sum_{\delta} w_l \left( \big\| \hat{G}^l_{x,\delta} - G^l_{x,\delta} \big\|_F^2 + \big\| \hat{G}^l_{y,\delta} - G^l_{y,\delta} \big\|_F^2 \right)$$

Although this amounts to adding more terms to a representation that was already high-dimensional and overparametrized, we found that these additional terms do not hurt the diversity of generated textures. Indeed, the new loss remains blind to the global arrangement of objects. In fact, there exists a specific situation where our approach is strictly equivalent to computing Gram matrices of deeper layers: with linear activations and "one-hot" convolution kernels^2, deeper layers would simply contain translated versions of lower feature maps. In that case, computing Gram matrices of deeper layers would permit to directly capture cross-correlation statistics in lower ones. Nevertheless, this situation is very unlikely when using a pretrained CNN, as the network probably learned operations more useful than simple translations during its supervised training.
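A numpy sketch of the horizontally shifted Gramian defined above follows; the normalization by the number of retained elements is our own choice, and feature maps are assumed stored as an (N_l, H, W) array.

    import numpy as np

    def shifted_gram_x(F, delta):
        # F has shape (N_l, H, W): N_l feature maps of height H and width W
        A = F[:, :, delta:]              # T_{x,+delta}(F^l): drop first columns
        B = F[:, :, :-delta]             # T_{x,-delta}(F^l): drop last columns
        A = A.reshape(A.shape[0], -1)    # vectorize
        B = B.reshape(B.shape[0], -1)
        return A @ B.T / A.shape[1]      # G^l_{x,delta}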
While we focus on translation for most of our results, we shall discuss other types of transformation in the experiments section."}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "In our experiments, we exploit the same normalized version of the VGG-19 network^3 (Simonyan & Zisserman, 2014) as in Gatys et al. (2015a;b), and layers conv1_1, pool1, pool2, pool3 and pool4 are always used to define the standard Gram matrices. In our method, we did not use conv1_1 to define cross-correlation terms, as the large number of neurons at this stage makes the computation of Gram matrices costly. Corresponding δ values for each layer are discussed in the next paragraph. Finally, our implementation^4 uses Lasagne (Dieleman et al., 2015). Each image is of size 384 × 384. Most textures used as references in this paper were taken from textures.com and pixabay.com.

^2 To match our translated Gramians, the only non-zero component would be +δ or −δ shifted from the center.
^3 Available at http://bethgelab.org/media/uploads/deeptextures/vgg_normalised.caffemodel
^4 Available at https://github.com/guillaumebrg/texture_generation

[Figure 5 images omitted: brick-wall textures generated with δ = 2, 4, 8, 16, 32 (1st row) and with δ = {2}, {2,4}, {2,4,8}, {2,4,8,16}, {2,4,8,16,32} (2nd row); the first column shows the result without cross-correlation terms.]

Figure 5: (First column): generation without cross-correlation terms. (Other columns): using a single cross-correlation term with a fixed δ value (1st row) or using multiple cross-correlation terms with distinct δ values (2nd row).

The δ parameter is of central importance, as it dictates the range of the spatial constraints. We observed that the optimal value depends on both the considered layer in the pre-trained CNN and the reference texture, making it difficult to choose a value automatically.

For instance, Figure 5 shows generated images from the brick wall texture using only the pool2 layer with different δ configurations. The first row depicts the results when considering single δ values only. While δ = 4 or δ = 8 are good choices, considering extreme long-range correlations does not help for this particular texture: a brick depends mostly on its neighbouring bricks and not the far-away ones. More precisely, a translation of more than 16 pixels in the pool2 layer makes the input receptive field move more than 64 pixels. Therefore δ = 16 or δ = 32 do not capture any information about neighbouring bricks. Unfortunately, this is not true for all textures, and δ = 16 or δ = 32 might be good choices for another image that exhibits longer structures.

Searching systematically for a configuration that works well with the reference texture being considered would be a tedious task: even for a very regular texture with a periodic horizontal (or vertical) pattern, it is hard to guess the optimal δ values for each layer (for deeper ones in particular). Instead, we propose to use a fixed but wide set of δ values per layer, by defining the cost as a sum of cross-correlation terms, one per δ value. We observed that this does not hurt the reconstruction or the diversity of generated textures. Figure 5 (second row) shows contrarily that there is no visual effect from using δ values that are not specifically useful for the reference texture being considered: the rendering benefits from using δ ∈ {2, 4, 8}, while considering bigger values (δ ∈ {2, 4, 8, 16, 32}, e.g.) does not help, but does not hurt the reconstruction either. We found the same to be true for other textures as well, and our results (shown in the next section) show that combining the loss terms is able to generate very diverse textures. A drawback of using multiple cross-correlation terms per layer, however, is computational. We found that in our experimental setups, adding cross-correlation terms increases the generation time by roughly 80%.

As a guide-line, for image sizes of roughly 384 × 384 pixels, we recommend the following δ values per layer (which we used in all our following experiments): {2, 4, 8, 16, 32, 64} for pool1, {2, 4, 8, 16, 32} for pool2, {2, 4, 8, 16} for pool3, and {2, 4, 8} for pool4. The number and the range of δ values decrease with depth because feature maps are getting smaller due to 2 × 2 pooling layers. This configuration should be sufficient to account for spatial structure in any 384 × 384 image.
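A sketch of assembling these recommended δ sets into the cross-correlation loss is shown below; it generalizes the earlier shifted_gram_x sketch to both axes, and the dictionary layout and weight handling are our own illustrative choices.

    import numpy as np

    def shifted_gram(F, delta, axis):
        # transformed Gramian along axis 1 (y) or axis 2 (x) of F (N_l, H, W)
        sl_a = [slice(None)] * 3
        sl_b = [slice(None)] * 3
        sl_a[axis] = slice(delta, None)      # T_{+delta}
        sl_b[axis] = slice(None, -delta)     # T_{-delta}
        A = F[tuple(sl_a)].reshape(F.shape[0], -1)
        B = F[tuple(sl_b)].reshape(F.shape[0], -1)
        return A @ B.T / A.shape[1]

    deltas = {"pool1": [2, 4, 8, 16, 32, 64], "pool2": [2, 4, 8, 16, 32],
              "pool3": [2, 4, 8, 16], "pool4": [2, 4, 8]}

    def cc_loss(feats_hat, feats_ref, w):
        total = 0.0
        for layer, ds in deltas.items():
            for d in ds:
                for ax in (1, 2):            # y- and x-translations
                    Gh = shifted_gram(feats_hat[layer], d, ax)
                    Gr = shifted_gram(feats_ref[layer], d, ax)
                    total += w[layer] * np.sum((Gh - Gr) ** 2)
        return total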
[Figure 6 images omitted: for each reference texture, the result by Gatys et al. (2015a) and two results by our approach obtained with different random seeds; one reference is a newspaper-text image.]

Figure 6: Some results of our approach compared with Gatys et al. (2015a). Only the initialization differs in the last two columns. Further results are shown in the supplementary material.

Figure 6 shows the result of our approach applied to various structured and unstructured textures. It demonstrates that the method is effective at capturing long-range correlations without simply copying the content of the original texture. For instance, note how our model captures the depth aspect of the reference image in the third and fourth rows.

Even if the reference image is not a texture, the generated images in the last row (Leonardo DiCaprio's face) provide a good visual illustration of the effect of translation terms. In contrast to Gatys et al., our approach preserves longer-range structure, such as the alignment and similar appearance of the eyes, hair on top of the forehead, the chin below the mouth, etc. Finally, when the reference texture is unstructured (fifth row), our solution does not necessarily provide a benefit, but it also does not hurt the visual quality or the diversity of the generated textures.

The problem of synthesizing near-regular structures is challenging because stochasticity and regularity are adversarial properties (Lin et al., 2006). Non-parametric patch-based techniques, such as Efros & Freeman (2001), are better suited for this task because they can tile^5 the reference image. On the other hand, regular structures are usually more problematic for parametric statistical models. Nevertheless, the two first rows of Figure 6 demonstrate that our approach can produce good visual results and can reduce the gap to patch-based methods on these kinds of texture.

^5 Copy and paste patches side by side.

[Figure 7 images omitted: for each example, the reference patch, the image to inpaint, the masked starting point, and the inpainting results by Gatys et al. and by our approach.]

Figure 7: Texture generation applied to in-painting. More in-painted images can be found in the supplementary material.

[Figure 8 images omitted: style image, content image (also used as initialization), and season transfer results by our approach and by Gatys et al.]

Figure 8: Season transfer examples."}, {"section_index": "6", "section_name": "4.3 INPAINTING APPLICATION", "section_text": "Modelling long-range correlations can make it possible to apply texture generation to inpainting, because it allows us to impose consistency constraints between the newly rendered region and the unmodified parts of the original image.

To apply our approach to texture inpainting, we extracted two patches from the original image: one that covers the whole area to inpaint, and another one that serves as the reference texture. Then, approximately the same process as for texture generation is used, with the following two
Further results are shown in the supplementary material..\nFigure 8 shows the result of applying our approach to a style transfer task, as in Gatys et al. (2015b) transferring the \"season\"' of a landscape image to another one. On this task, the results from ou approach are similar to those from Gatys et al. (2015b). Nevertheless, in contrast to the Gatys e al. results, our approach seems to better capture global information, such as sky color and leave. (bottom row), or the appearance of branches in the winter image (top row)."}, {"section_index": "7", "section_name": "4.5 INCORPORATING OTHER TYPES OF STRUCTURE", "section_text": "While we focused on feature map translations in most of our experiments, other transformations can. be applied as well. To illustrate this point, we explored a way to generate symmetric textures using. another simple transformation. To this end, we propose flipping one of the two feature maps before computing the Gram matrices: Glr,ij = (F., Tir (FJ.)). Here Tir corresponds to the left-right. finnin g Oneration.but we In-dou f1 Of featu1 Cl\nAs can be seen in Figure 9, in contrast to Gatys et al., the additional loss terms capture which. objects are symmetric in the reference texture, and enforce these same objects to be symmetric in the reconstruction as well. Other kinds of transformation could be used, depending on the type of. property in the source texture one desires to preserve..\nFigure 9: Generation of abstract symmetric textures"}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "We presented an approach to satisfying long-range consistency constraints in the generation of im. ages. It is based on a variation of the method by Gatys et al., and considers spatial co-occurences of local features (instead of only co-occurences across features). We showed that the approach per. mits to generate textures with various global symmetry properties and that it makes it possible tc. apply texture generation to in-painting. Since it preserves correlations across sites, the approach is. reminiscent of an MRF, but in contrast to an MRF or other graphical models, it defines correlation. constraints on high-level features of a (pre-trained) CNN rather than on pixels.."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Sander Dieleman, Jan Schlter, Colin Raffel, et al. Lasagne: First release., August 2015. URL http : / /dx doi.0rg/10.5281/zenodo.27878\nAlexei A. Efros and William T. Freeman. Image Quilting for Texture Synthesis and Transfer. Proceedings oj SIGGRAPH 2001, pp. 341-346, August 2001.\nSIGGRAPH 2001, pp. 341-346, August 2001 . A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In Advanc in Neural Information Processing Systems 28, 2015a. URL http: //arxiv.0rg/abs/1505. 07376 . A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. CoRR, abs/1508.06576, 2015 URLhttp://arxiv.0rg/abs/1508.06576. R. Haralick, K. Shanmugam, and I. Dinstein. Texture features for image classification. IEEE Transactions c Systems, Man, and Cybernetics 3 (6), 1973. David J. Heeger and James R. Bergen. Pyramid-based texture analysis/synthesis. In Proceedings of the 22r Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1995, pp. 229-238, 1995. J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. CoR abs/1603.08155,2016. URL http://arxiv.0rg/abs/1603.08155. P. Kovesi. Image features from phase congruency. 
5 CONCLUSION

We presented an approach to satisfying long-range consistency constraints in the generation of images. It is based on a variation of the method by Gatys et al., and considers spatial co-occurrences of local features (instead of only co-occurrences across features). We showed that the approach permits to generate textures with various global symmetry properties and that it makes it possible to apply texture generation to in-painting. Since it preserves correlations across sites, the approach is reminiscent of an MRF, but in contrast to an MRF or other graphical models, it defines correlation constraints on high-level features of a (pre-trained) CNN rather than on pixels.

REFERENCES

Sander Dieleman, Jan Schlüter, Colin Raffel, et al. Lasagne: First release., August 2015. URL http://dx.doi.org/10.5281/zenodo.27878.

Alexei A. Efros and William T. Freeman. Image Quilting for Texture Synthesis and Transfer. Proceedings of SIGGRAPH 2001, pp. 341-346, August 2001.

L. A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems 28, 2015a. URL http://arxiv.org/abs/1505.07376.

L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. CoRR, abs/1508.06576, 2015b. URL http://arxiv.org/abs/1508.06576.

R. Haralick, K. Shanmugam, and I. Dinstein. Texture features for image classification. IEEE Transactions on Systems, Man, and Cybernetics 3 (6), 1973.

David J. Heeger and James R. Bergen. Pyramid-based texture analysis/synthesis. In Proceedings of the 22nd Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1995, pp. 229-238, 1995.

J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. CoRR, abs/1603.08155, 2016. URL http://arxiv.org/abs/1603.08155.

P. Kovesi. Image features from phase congruency. Videre, 1(3):2-27, 1999.

V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick. Graphcut textures: Image and video synthesis using graph cuts. ACM Transactions on Graphics, SIGGRAPH 2003, 22(3):277-286, 2003.

Wen-Chieh Lin, James Hays, Chenyu Wu, V. Kwatra, and Yanxi Liu. Quantitative evaluation on near regular texture synthesis. In CVPR '06, volume 1, pp. 427-434, June 2006.

G. Lohmann. Analysis and synthesis of textures: A co-occurrence-based approach. Computers and Graphics, 19(1), pp. 29-36, 1995.

M. C. Morrone and D. C. Burr. Feature detection in human vision: A phase-dependent energy model. Proceedings of the Royal Society of London B: Biological Sciences, 235(1280):221-245, 1988.

A. V. Oppenheim and J. S. Lim. The importance of phase in signals. Proceedings of the IEEE, 69(5):529-541, 1981. ISSN 0018-9219.

J. Portilla and E. P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. Int. J. Comput. Vision, 40(1):49-70, 2000.

S. Roth and M. J. Black. Fields of experts: A framework for learning image priors. In IEEE Conf. on Computer Vision and Pattern Recognition, volume 2, pp. 860-867, June 2005.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015.

K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.

D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In International Conference on Machine Learning (ICML), 2016.

I. Ustyuzhaninov, W. Brendel, L. A. Gatys, and M. Bethge.
Texture synthesis using shallow convolutional networks with random filters. CoRR, abs/1606.00021, 2016. URL http://arxiv.org/abs/1606.00021.

Z. Wang and E. P. Simoncelli. Local phase coherence and the perception of blur. In Advances in Neural Information Processing Systems 16 [NIPS 2003], pp. 1435-1442, 2003.

Ciyou Zhu, Richard H. Byrd, Peihuang Lu, and Jorge Nocedal. L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization. Technical report, ACM Trans. Math. Software, 1994.

Supplementary material

A TEXTURE GENERATION

[Additional texture generation results. Columns: reference; Gatys et al. (seed=123); ours (seed=123); ours (seed=123456).]

[Additional in-painting results. For each texture: reference; starting point; Gatys et al.; our inpainted image.]

Texture            Gatys et al.    Ours
"red_leaves"       3.38e-4         4.72e-4
"floor"            3.68e-4         4.73e-4
Supplementary material textures:
"leaves"           4.08e-4         5.14e-4
"cargo"            3.39e-4         4.73e-4
"building"         8.90e-4         6.67e-4
"building2"        7.19e-4         7.78e-4
"cargo2"           1.05e-3         1.18e-3
"small_bricks"     1.21e-3         6.82e-4
"football_team"    4.56e-3         5.68e-3

Table 1: Final Gatys et al. loss for all inpainted textures (by order of appearance).

Table 1 reports the final Gatys et al. losses obtained by both approaches for all inpainted textures shown previously. In most cases, the Gatys et al. (2015a) approach converges to a smaller value, which is not surprising since our method optimizes a modified version of the loss. Table 1, combined with the visual aspect of the Gatys et al. inpainted renderings, illustrates that only imposing local coherence is not enough to obtain good inpainted textures. Our texture representation seems to be better suited for this task.

[Additional season transfer examples. Columns: style; content, also used as initialization; ours; Gatys et al.]
S19eAF9ee

STRUCTURED SEQUENCE MODELING WITH GRAPH CONVOLUTIONAL RECURRENT NETWORKS

Youngjoo Seo
EPFL, Switzerland
youngjoo.seo@epfl.ch

Michael Defferrard
EPFL, Switzerland
michael.defferrard@epfl.ch

Xavier Bresson
EPFL, Switzerland
xavier.bresson@epfl.ch

Pierre Vandergheynst
EPFL, Switzerland
pierre.vandergheynst@epfl.ch

ABSTRACT

This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep learning model able to predict structured sequences of data. Precisely, GCRN is a generalization of classical recurrent neural networks (RNN) to data structured by an arbitrary graph. Such structured sequences can represent series of frames in videos, spatio-temporal measurements on a network of sensors, or random walks on a vocabulary graph for natural language modeling. The proposed model combines convolutional neural networks (CNN) on graphs to identify spatial structures and RNN to find dynamic patterns. We study two possible architectures of GCRN and apply the models to two practical problems: predicting moving MNIST data and modeling natural language with the Penn Treebank dataset. Experiments show that exploiting simultaneously graph spatial and dynamic information about data can improve both precision and learning speed.

1 INTRODUCTION

Many real-world data can be cast as structured sequences, with spatio-temporal sequences being a special case. A well-studied example of spatio-temporal data are videos, where succeeding frames share temporal and spatial structures. Many works, such as Donahue et al. (2015); Karpathy & Fei-Fei (2015); Vinyals et al. (2015), leveraged a combination of CNN and RNN to exploit such spatial and temporal regularities. Their models are able to process possibly time-varying visual inputs for variable-length prediction. These neural network architectures consist of combining a CNN for visual feature extraction followed by a RNN for sequence learning. Such architectures have been successfully used for video activity recognition, image captioning and video description.

More recently, interest has grown in properly fusing the CNN and RNN models for spatio-temporal sequence modeling. Inspired by language modeling, Ranzato et al. (2014) proposed a model to represent complex deformations and motion patterns by discovering both spatial and temporal correlations. They showed that prediction of the next video frame and interpolation of intermediate frames can be achieved by building a RNN-based language model on the visual words obtained by quantizing the image patches. Their highest-performing model, recursive CNN (rCNN), uses convolutions for both inputs and states. Shi et al. (2015) then proposed the convolutional LSTM network (convLSTM), a recurrent model for spatio-temporal sequence modeling which uses 2D-grid convolution to leverage the spatial correlations in input data. They successfully applied their model to the prediction of the evolution of radar echo maps for precipitation nowcasting.

The spatial structure of many important problems may however not be as simple as regular grids. For instance, the data measured from meteorological stations lie on an irregular grid, i.e. a network of heterogeneous spatial distribution of stations. More challenging, the spatial structure of data may not even be spatial, as it is the case for social or biological networks.
Finally, the interpretation that sentences can be regarded as random walks on vocabulary graphs, a view popularized by Mikolov et al. (2013), allows us to cast language analysis problems as graph-structured sequence models.

Figure 1: Illustration of the proposed GCRN model for spatio-temporal prediction of graph-structured data. The technique combines at the same time CNN on graphs and RNN. RNN can be easily exchanged with LSTM or GRU networks. (The graph CNN gates extract the local stationarity and compositionality of the data, while the RNN gates capture its dynamic properties.)

This work leverages on the recent models of Defferrard et al. (2016); Ranzato et al. (2014); Shi et al. (2015) to design the GCRN model for modeling and predicting time-varying graph-based data. The core idea is to merge CNN for graph-structured data and RNN to identify simultaneously meaningful spatial structures and dynamic patterns. A generic illustration of the proposed GCRN architecture is given by Figure 1.

2 PRELIMINARIES

2.1 STRUCTURED SEQUENCE MODELING

Sequence modeling is the problem of predicting the most likely future length-K sequence given the previous J observations:

x̂_{t+1}, ..., x̂_{t+K} = argmax_{x_{t+1},...,x_{t+K}} P(x_{t+1}, ..., x_{t+K} | x_{t-J+1}, ..., x_t),   (1)

where x_t ∈ D is an observation at time t and D denotes the domain of the observed features. The archetypal application being the n-gram language model (with n = J + 1), where P(x_{t+1} | x_{t-J+1}, ..., x_t) models the probability of word x_{t+1} to appear conditioned on the past J words in the sentence (Graves, 2013).

In this paper, we are interested in special structured sequences, i.e. sequences where features of the observations x_t are not independent but linked by pairwise relationships. Such relationships are universally modeled by weighted graphs.

Data x_t can be viewed as a graph signal, i.e. a signal defined on an undirected and weighted graph G = (V, E, A), where V is a finite set of |V| = n vertices, E is a set of edges and A ∈ R^{n×n} is a weighted adjacency matrix encoding the connection weight between two vertices. A signal x_t : V → R^{d_x} defined on the nodes of the graph may be regarded as a matrix x_t ∈ R^{n×d_x} whose column i is the d_x-dimensional value of x_t at the i-th node. While the number of free variables in a structured sequence of length K is in principle O(n^K d_x^K), we seek to exploit the structure of the space of possible predictions to reduce the dimensionality and hence make those problems more tractable.

Figure 2: Illustration of the neighborhood on an 8-nearest-neighbor grid graph. Isotropic spectral filters of support K have access to nodes at most at K - 1 hops.

2.2 LONG SHORT-TERM MEMORY

A special class of recurrent neural networks (RNN) that prevents the gradient from vanishing too quickly is the popular long short-term memory (LSTM) introduced by Hochreiter & Schmidhuber (1997). This architecture has proven stable and powerful for modeling long-range dependencies in various general-purpose sequence modeling tasks (Graves, 2013; Srivastava et al., 2015; Sutskever et al., 2014). A fully-connected LSTM (FC-LSTM) may be seen as a multivariate version of LSTM where the input x_t ∈ R^{d_x}, cell output h_t ∈ [-1, 1]^{d_h} and states c_t ∈ R^{d_h} are all vectors.
In this paper, we follow the FC-LSTM formulation of Graves (2013), that is:

i = σ(W_xi x_t + W_hi h_{t-1} + w_ci ⊙ c_{t-1} + b_i),
f = σ(W_xf x_t + W_hf h_{t-1} + w_cf ⊙ c_{t-1} + b_f),
c_t = f ⊙ c_{t-1} + i ⊙ tanh(W_xc x_t + W_hc h_{t-1} + b_c),   (2)
o = σ(W_xo x_t + W_ho h_{t-1} + w_co ⊙ c_t + b_o),
h_t = o ⊙ tanh(c_t),

where ⊙ denotes the Hadamard product, σ(·) the sigmoid function σ(x) = 1/(1 + e^{-x}) and i, f, o ∈ [0, 1]^{d_h} are the input, forget and output gates. The weights W_x· ∈ R^{d_h×d_x}, W_h· ∈ R^{d_h×d_h}, w_c· ∈ R^{d_h} and biases b_i, b_f, b_c, b_o ∈ R^{d_h} are the model parameters.¹ Such a model is called fully-connected because the dense matrices W_x· and W_h· linearly combine all the components of x and h. The optional peephole connections w_c· ⊙ c_t, introduced by Gers & Schmidhuber (2000), have been found to improve performance on certain tasks.

¹A practical trick is to initialize the biases b_i, b_f and b_o to one such that the gates are initially open.

2.3 CONVOLUTIONAL NEURAL NETWORKS ON GRAPHS

Generalizing convolutional neural networks (CNNs) to arbitrary graphs is a recent area of interest. Two approaches have been explored in the literature: (i) a generalization of the spatial definition of a convolution (Masci et al., 2015; Niepert et al., 2016) and (ii) a multiplication in the graph Fourier domain by the way of the convolution theorem (Bruna et al., 2014; Defferrard et al., 2016). Masci et al. (2015) introduced a spatial generalization of CNNs to 3D meshes. The authors used geodesic polar coordinates to define convolution operations on mesh patches, and formulated a deep learning architecture which allows comparison across different meshes. Hence, this method is tailored to manifolds and is not directly generalizable to arbitrary graphs. Niepert et al. (2016) proposed a spatial approach which may be decomposed in three steps: (i) select a node, (ii) construct its neighborhood and (iii) normalize the selected sub-graph, i.e. order the neighboring nodes. The extracted patches are then fed into a conventional 1D Euclidean CNN. As graphs generally do not possess a natural ordering (temporal, spatial or otherwise), a labeling procedure should be used to impose it. Bruna et al. (2014) were the first to introduce the spectral framework described below in the context of graph CNNs. The major drawback of this method is its O(n²) complexity, which was overcome with the technique of Defferrard et al. (2016), which offers a linear complexity O(|E|) and provides strictly localized filters. Kipf & Welling (2016) took a first-order approximation of the spectral filters proposed by Defferrard et al. (2016) and successfully used it for semi-supervised classification of nodes. While we focus on the framework introduced by Defferrard et al. (2016), the proposed model is agnostic to the choice of the graph convolution operator ∗G.

As it is difficult to express a meaningful translation operator in the vertex domain (Bruna et al., 2014; Niepert et al., 2016), Defferrard et al. (2016) chose a spectral formulation for the convolution operator on graph ∗G. By this definition, a graph signal x ∈ R^{n×d_x} is filtered by a non-parametric kernel g_θ(Λ) = diag(θ), where θ ∈ R^n is a vector of Fourier coefficients, as

y = g_θ ∗G x = g_θ(L)x = g_θ(UΛU^T)x = U g_θ(Λ) U^T x ∈ R^{n×d_x},   (3)

where U ∈ R^{n×n} is the matrix of eigenvectors and Λ ∈ R^{n×n} the diagonal matrix of eigenvalues of the normalized graph Laplacian L = I_n - D^{-1/2} A D^{-1/2} = U Λ U^T ∈ R^{n×n}, where I_n is the identity matrix and D ∈ R^{n×n} is the diagonal degree matrix with D_ii = Σ_j A_ij (Chung, 1997). Note that the signal x is filtered by g_θ with an element-wise multiplication of its graph Fourier transform U^T x with g_θ (Shuman et al., 2013). Evaluating (3) is however expensive, as the multiplication with U is O(n²). Furthermore, computing the eigendecomposition of L might be prohibitively expensive for large graphs.
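For concreteness, the following NumPy sketch implements (3) literally, via the full eigendecomposition of the normalized Laplacian. It is our own code for the naive O(n²) baseline, not the implementation used in this paper:

import numpy as np

def normalized_laplacian(A):
    # L = I_n - D^{-1/2} A D^{-1/2} for a weighted adjacency matrix A.
    d = A.sum(axis=1)
    dis = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - dis[:, None] * A * dis[None, :]

def spectral_filter(x, A, theta):
    # Naive filtering of a graph signal x (shape (n,)) by the
    # non-parametric kernel g(Lambda) = diag(theta), as in (3).
    lam, U = np.linalg.eigh(normalized_laplacian(A))  # graph Fourier basis
    return U @ (theta * (U.T @ x))                    # U g(Lambda) U^T x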
To circumvent this problem, Defferrard et al. (2016) parametrize g_θ as a truncated expansion, up to order K - 1, of Chebyshev polynomials T_k such that

g_θ(Λ) = Σ_{k=0}^{K-1} θ_k T_k(Λ̃),   (4)

where the parameter θ ∈ R^K is a vector of Chebyshev coefficients and T_k(Λ̃) ∈ R^{n×n} is the Chebyshev polynomial of order k evaluated at Λ̃ = 2Λ/λ_max - I_n. The graph filtering operation can then be written as

y = g_θ ∗G x = g_θ(L)x = Σ_{k=0}^{K-1} θ_k T_k(L̃) x,   (5)

where T_k(L̃) ∈ R^{n×n} is the Chebyshev polynomial of order k evaluated at the scaled Laplacian L̃ = 2L/λ_max - I_n. Using the stable recurrence relation T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x) with T_0 = 1 and T_1 = x, one can evaluate (5) in O(K|E|) operations, i.e. linearly with the number of edges. Note that as the filtering operation (5) is an order K polynomial of the Laplacian, it is K-localized and depends only on nodes that are at maximum K hops away from the central node, the K-neighborhood. The reader is referred to Defferrard et al. (2016) for details and an in-depth discussion.

3 RELATED WORKS

Shi et al. (2015) introduced a model for regular grid-structured sequences, which can be seen as a special case of the proposed model where the graph is an image grid where the nodes are well ordered. Their model is essentially the classical FC-LSTM (2) where the multiplications by dense matrices W have been replaced by convolutions with kernels W:

i = σ(W_xi ∗ x_t + W_hi ∗ h_{t-1} + w_ci ⊙ c_{t-1} + b_i),
f = σ(W_xf ∗ x_t + W_hf ∗ h_{t-1} + w_cf ⊙ c_{t-1} + b_f),
c_t = f ⊙ c_{t-1} + i ⊙ tanh(W_xc ∗ x_t + W_hc ∗ h_{t-1} + b_c),   (6)
o = σ(W_xo ∗ x_t + W_ho ∗ h_{t-1} + w_co ⊙ c_t + b_o),
h_t = o ⊙ tanh(c_t),

where ∗ denotes the 2D convolution by a set of kernels. In their setting, the input tensor x_t ∈ R^{n_r×n_c×d_x} is the observation of d_x measurements at time t of a dynamical system over a spatial region represented by a grid of n_r rows and n_c columns. The model holds spatially distributed hidden and cell states of size d_h given by the tensors c_t, h_t ∈ R^{n_r×n_c×d_h}. The size m of the convolutional kernels W_h· ∈ R^{m×m×d_h×d_h} and W_x· ∈ R^{m×m×d_h×d_x} determines the number of parameters, which is independent of the grid size n_r × n_c. Earlier, Ranzato et al. (2014) proposed a similar RNN variation which uses convolutional layers instead of fully connected layers. The hidden state at time t is given by

h_t = tanh(σ(W_x2 ∗ σ(W_x1 ∗ x_t)) + σ(W_h ∗ h_{t-1})).   (7)

Motivated by spatio-temporal problems like modeling human motion and object interactions, Jain et al. (2016) developed a method to cast a spatio-temporal graph as a rich RNN mixture which is feedforward, fully differentiable and jointly trainable.

Observing that natural language exhibits syntactic properties that naturally combine words into phrases, Tai et al. (2015) proposed a model for tree-structured topologies, where each LSTM has access to the states of its children.
They obtained state-of-the-art results on semantic relatedness and sentiment classification. Liang et al. (2016) followed up and proposed a variant on graphs. Their sophisticated network architecture obtained state-of-the-art results for semantic object parsing on four datasets. In those models, the states are gathered from the neighborhood by way of a weighted sum with trainable weight matrices. Those weights are however not shared across the graph, which would otherwise have required some ordering of the nodes, alike any other spatial definition of graph convolution. Moreover, their formulations are limited to the one-neighborhood of the current node, with equal weight given to each neighbor.

The closest model to our work is probably the one proposed by Li et al. (2015), which showed state-of-the-art performance on a problem from program verification. Whereas they use the iterative procedure of the Graph Neural Networks (GNNs) model introduced by Scarselli et al. (2009) to propagate node representations until convergence, we instead use the graph CNN introduced by Defferrard et al. (2016) to diffuse information across the nodes. While their motivations are quite different, those models are related by the fact that a spectral filter defined as a polynomial of order K can be implemented as a K-layer GNN.²

²The basic idea is to set the transition function as a diffusion and the output function such as to realize the polynomial recurrence, then stack K of those. See Defferrard et al. (2016) for details.

4 PROPOSED GCRN MODELS

We propose two GCRN architectures that are quite natural, and investigate their performances in real-world applications in Section 5.

Model 1. The most straightforward definition is to stack a graph CNN, defined as (5), for feature extraction and an LSTM, defined as (2), for sequence learning:

x_t^{CNN} = CNN_G(x_t),
i = σ(W_xi x_t^{CNN} + W_hi h_{t-1} + w_ci ⊙ c_{t-1} + b_i),
f = σ(W_xf x_t^{CNN} + W_hf h_{t-1} + w_cf ⊙ c_{t-1} + b_f),
c_t = f ⊙ c_{t-1} + i ⊙ tanh(W_xc x_t^{CNN} + W_hc h_{t-1} + b_c),   (8)
o = σ(W_xo x_t^{CNN} + W_ho h_{t-1} + w_co ⊙ c_t + b_o),
h_t = o ⊙ tanh(c_t).

In that setting, the input matrix x_t ∈ R^{n×d_x} may represent the observation of d_x measurements at time t of a dynamical system over a network whose organization is given by a graph G. x_t^{CNN} is the output of the graph CNN gate, e.g. x_t^{CNN} = W^{CNN} ∗G x_t, where W^{CNN} ∈ R^{K×d_x×d_x} are the Chebyshev coefficients for the graph convolutional kernels of support K. The model also holds spatially distributed hidden and cell states of size d_h given by the matrices c_t, h_t ∈ R^{n×d_h}. Peepholes are controlled by w_c· ∈ R^{n×d_h}. The weights W_h· ∈ R^{d_h×d_h} and W_x· ∈ R^{d_h×d_x} are the parameters of the fully connected layers. An architecture such as (8) may be enough to capture the data distribution by exploiting local stationarity and compositionality properties as well as the dynamic properties.

Model 2. To generalize the convLSTM model (6) to graphs, we replace the Euclidean 2D convolution ∗ by the graph convolution ∗G:

i = σ(W_xi ∗G x_t + W_hi ∗G h_{t-1} + w_ci ⊙ c_{t-1} + b_i),
f = σ(W_xf ∗G x_t + W_hf ∗G h_{t-1} + w_cf ⊙ c_{t-1} + b_f),
c_t = f ⊙ c_{t-1} + i ⊙ tanh(W_xc ∗G x_t + W_hc ∗G h_{t-1} + b_c),   (9)
o = σ(W_xo ∗G x_t + W_ho ∗G h_{t-1} + w_co ⊙ c_t + b_o),
h_t = o ⊙ tanh(c_t).

In that setting, the support K of the graph convolutional kernels defined by the Chebyshev coefficients W_h· ∈ R^{K×d_h×d_h} and W_x· ∈ R^{K×d_h×d_x} determines the number of parameters, which is independent of the number of nodes n. To keep the notation simple, we write W_xi ∗G x_t to mean a graph convolution of x_t with d_h × d_x filters which are functions of the graph Laplacian L parametrized by K Chebyshev coefficients, as noted in (4) and (5). In a distributed computing setting, K controls the communication overhead, i.e. the number of nodes any given node i should exchange with in order to compute its local states.
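A compact NumPy sketch of one step of Model 2, built on the Chebyshev filtering of (5). All names are ours, peepholes are omitted for brevity, and the dictionary p is a hypothetical container for the weights and biases of (9):

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def cheb_conv(X, L_scaled, W):
    # Graph convolution (5): X is (n, d_in), W is (K, d_in, d_out) and
    # L_scaled is the rescaled Laplacian 2L/lambda_max - I_n.
    T_prev, T_cur = X, L_scaled @ X                 # T_0(L~)X and T_1(L~)X
    out = T_prev @ W[0]
    if len(W) > 1:
        out = out + T_cur @ W[1]
    for k in range(2, len(W)):
        T_prev, T_cur = T_cur, 2 * L_scaled @ T_cur - T_prev  # recurrence
        out = out + T_cur @ W[k]
    return out

def gclstm_step(x, h, c, L_scaled, p):
    # One step of the graph convolutional LSTM of (9), without peepholes.
    i = sigmoid(cheb_conv(x, L_scaled, p['Wxi']) + cheb_conv(h, L_scaled, p['Whi']) + p['bi'])
    f = sigmoid(cheb_conv(x, L_scaled, p['Wxf']) + cheb_conv(h, L_scaled, p['Whf']) + p['bf'])
    c = f * c + i * np.tanh(cheb_conv(x, L_scaled, p['Wxc']) + cheb_conv(h, L_scaled, p['Whc']) + p['bc'])
    o = sigmoid(cheb_conv(x, L_scaled, p['Wxo']) + cheb_conv(h, L_scaled, p['Who']) + p['bo'])
    return o * np.tanh(c), c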
The proposed blend of RNNs and graph CNNs is not limited to LSTMs and is straightforward to apply to any kind of recursive networks. For example, a vanilla RNN h_t = tanh(W_x x_t + W_h h_{t-1}) would be modified as

h_t = tanh(W_x ∗G x_t + W_h ∗G h_{t-1}),   (10)

and a Gated Recurrent Unit (GRU) as

z = σ(W_xz ∗G x_t + W_hz ∗G h_{t-1}),
r = σ(W_xr ∗G x_t + W_hr ∗G h_{t-1}),
h̃ = tanh(W_xh ∗G x_t + W_hh ∗G (r ⊙ h_{t-1})),   (11)
h_t = z ⊙ h_{t-1} + (1 - z) ⊙ h̃.

As demonstrated by Shi et al. (2015), structure-aware LSTM cells can be stacked and used as sequence-to-sequence models using an architecture composed of an encoder, which processes the input sequence, and a decoder, which generates an output sequence, a standard practice for machine translation using RNNs (Cho et al., 2014; Sutskever et al., 2014).

5 EXPERIMENTS

5.1 SPATIO-TEMPORAL SEQUENCE MODELING ON MOVING-MNIST

For this synthetic experiment, we use the moving-MNIST dataset generated by Shi et al. (2015). All sequences are 20 frames long (10 frames as input and 10 frames for prediction) and contain two handwritten digits bouncing inside a 64 × 64 patch. Following their experimental setup, all models are trained by minimizing the binary cross-entropy loss using back-propagation through time (BPTT) and RMSProp with a learning rate of 10^-3 and a decay rate of 0.9. We choose the best model with early-stopping on the validation set. All implementations are based on their Theano code and dataset.³ The adjacency matrix A is constructed as a k-nearest-neighbor (knn) graph with Euclidean distance and Gaussian kernel between pixel locations. For a fair comparison with Shi et al. (2015), defined in (6), all GCRN experiments are conducted with Model 2 defined in (9), which is the same architecture with the 2D convolution ∗ replaced by a graph convolution ∗G. To further explore the impact of the isotropic property of our filters, we generated a variant of the moving-MNIST dataset where digits are also rotating (see Figure 4).

³http://www.wanghao.in/code/SPARNN-release.zip
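The knn adjacency matrix on pixel locations can be built as follows (a dense NumPy/SciPy sketch with our own names; the choice of the Gaussian kernel width sigma2 is an assumption):

import numpy as np
from scipy.spatial.distance import cdist

def grid_knn_adjacency(h, w, k=8):
    # k-nearest-neighbour graph between the h*w pixel locations of a grid,
    # weighted by a Gaussian kernel on the Euclidean distance.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing='ij'),
                      axis=-1).reshape(-1, 2).astype(float)
    D = cdist(coords, coords)
    idx = np.argsort(D, axis=1)[:, 1:k + 1]        # nearest neighbours, self excluded
    rows = np.repeat(np.arange(len(D)), k)
    dk = D[rows, idx.ravel()]
    sigma2 = dk.mean() ** 2                        # assumed kernel width
    A = np.zeros_like(D)
    A[rows, idx.ravel()] = np.exp(-dk ** 2 / sigma2)
    return np.maximum(A, A.T)                      # symmetrize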
Architecture    Structure    Filter size    Parameters     Runtime    Test (w/o Rot)    Test (Rot)
FC-LSTM         N/A          N/A            142,667,776    N/A        4832              -
LSTM+CNN        N/A          5 × 5          13,524,496     2.10       3851              4339
LSTM+CNN        N/A          9 × 9          43,802,128     6.10       3903              4208
LSTM+GCNN       knn = 8      K = 3          1,629,712      0.82       3866              4367
LSTM+GCNN       knn = 8      K = 5          2,711,056      1.24       3495              3932
LSTM+GCNN       knn = 8      K = 7          3,792,400      1.61       3400              3803
LSTM+GCNN       knn = 8      K = 9          4,873,744      2.15       3395              3814
LSTM+GCNN       knn = 4      K = 7          3,792,400      1.61       3446              3844
LSTM+GCNN       knn = 16     K = 7          3,792,400      1.61       3578              3963

Table 1: Comparison between models. Runtime is the time spent per each mini-batch in seconds. Test cross-entropies correspond to moving MNIST, and rotating and moving MNIST. LSTM+GCNN is Model 2 defined in (9). Cross-entropy of FC-LSTM is taken from Shi et al. (2015).

Table 1 shows the performance of various models: (i) the baseline FC-LSTM from Shi et al. (2015), (ii) the 1-layer LSTM+CNN from Shi et al. (2015) with different filter sizes, and (iii) the proposed LSTM+graph CNN (GCNN) defined in (9) with different supports K. These results show the ability of the proposed method to capture spatio-temporal structures. Perhaps surprisingly, GCNNs can offer better performance than regular CNNs, even when the domain is a 2D grid and the data is images, the problem CNNs were initially developed for. The explanation is to be found in the differences between 2D filters and spectral graph filters. While a spectral filter of support K = 3 corresponds to the reach of a patch of size 5 × 5 (see Figure 2), the difference resides in the isotropic nature of the former and the number of parameters: K = 3 for the former and 5² = 25 for the latter.

Table 1 indeed shows that LSTM+CNN(5 × 5) rivals LSTM+GCNN with K = 3. However, when increasing the filter size to 9 × 9 or K = 5, the GCNN variant clearly outperforms the CNN variant. This experiment demonstrates that graph spectral filters can obtain superior performance on regular domains with much fewer parameters thanks to their isotropic nature, a controversial property. Indeed, as the nodes are not ordered, there is no notion of an edge going up, down, on the right or on the left. All edges are treated equally, inducing some sort of rotation invariance. Additionally, Table 1 shows that the computational complexity of each model is linear with the filter size, and Figure 3 shows the learning dynamic of some of the models.

Figure 3: Cross-entropy on validation set. Left: performance of graph CNN with various filter supports K. Right: performance w.r.t. graph construction.

Figure 4: Qualitative results for moving MNIST, and rotating and moving MNIST. First row is the input sequence, second the ground truth, and third and fourth are the predictions of the LSTM+CNN(5 × 5) and LSTM+GCNN(knn = 8, K = 7).

5.2 NATURAL LANGUAGE MODELING ON PENN TREEBANK

The Penn Treebank dataset has 1,036,580 words. It was pre-processed in Zaremba et al. (2014) and split⁴ into a training set of 929k words, a validation set of 73k words, and a test set of 82k words. The size of the vocabulary of this corpus is 10,000. We use the gensim library⁵ to compute a word2vec model (Mikolov et al., 2013) for embedding the words of the dictionary in a 200-dimensional space. Then we build the adjacency matrix of the word embedding using a 4-nearest neighbor graph with cosine distance. Figure 6 presents the computed adjacency matrix, and its 3D visualization. We used the hyperparameters of the small configuration given by the code⁶ based on Zaremba et al. (2014): the size of the data mini-batch is 20, the number of temporal steps to unroll is 20, the dimension of the hidden state is 200. The global learning rate is 1.0 and the norm of the gradient is bounded by 5. The learning decay function is selected to be 0.5^{max(0, #epoch - 4)}. All experiments have 13 epochs, and the dropout value is 0.75. For Zaremba et al. (2014), the input representation x_t can be either the 200-dim embedding vector of the word, or the 10,000-dim one-hot representation of the word. For our models, the input representation is a one-hot representation of the word. This choice allows us to use the graph structure of the words.
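The word graph can be sketched as follows, assuming emb is the (10000, 200) matrix of word2vec vectors (dense NumPy code with our own names, for illustration only; a sparse implementation would be preferable at this vocabulary size):

import numpy as np

def word_graph(emb, k=4):
    # 4-nearest-neighbour graph on word embeddings with cosine distance.
    X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    dist = 1.0 - X @ X.T                      # cosine distance
    np.fill_diagonal(dist, np.inf)            # no self-loops
    idx = np.argsort(dist, axis=1)[:, :k]
    A = np.zeros((len(emb), len(emb)))
    rows = np.repeat(np.arange(len(emb)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)                 # symmetric adjacency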
Figure 5: Learning dynamic of LSTM with and without graph structure and dropout regularization (train and test perplexity vs. number of epochs).

Architecture                   Representation    Parameters    Train Perplexity    Test Perplexity
Zaremba et al. (2014) code⁶    embedding         681,800       36.96               117.29
Zaremba et al. (2014) code⁶    one-hot           34,011,600    53.89               118.82
LSTM                           embedding         681,800       48.38               120.90
LSTM                           one-hot           34,011,600    54.41               120.16
LSTM, dropout                  one-hot           34,011,600    145.59              112.98
GCRN-M1                        one-hot           42,011,602    18.49               177.14
GCRN-M1, dropout               one-hot           42,011,602    114.29              98.67

Table 2: Comparison of models in terms of perplexity. Zaremba et al. (2014) code⁶ is run as a benchmark algorithm. The original Zaremba et al. (2014) code used as input representation for x_t the 200-dim embedding representation of words, computed here by the gensim library⁵. As our model runs on the 10,000-dim one-hot representation of words, we also ran Zaremba et al. (2014) code on this representation. We re-implemented Zaremba et al. (2014) code with the same architecture and hyperparameters. We remind that GCRN-M1 refers to GCRN Model 1 defined in (8).

⁶https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/ptb/ptb_word_lm.py

Table 2 reports the final train and test perplexity values for each investigated model and Figure 5 plots the perplexity value vs. the number of epochs for the train and test sets with and without dropout regularization. Numerical experiments show:

1. Given the same experimental conditions in terms of architecture and no dropout regularization, the standalone model of LSTM is more accurate than LSTM using the spatial graph information (120.16 vs. 177.14), extracted by graph CNN with the GCRN architecture of Model 1, Eq. (8).
2. However, using dropout regularization, the graph LSTM model overcomes the standalone LSTM with perplexity values 98.67 vs. 112.98.
3. The use of spatial graph information found by graph CNN speeds up the learning process, and overfits the training dataset in the absence of dropout regularization. The graph structure likely acts as a constraint on the learning system that is forced to move in the space of language topics.
4. We performed the same experiments with LSTM and Model 2 defined in (9). Model 1 significantly outperformed Model 2, and Model 2 did worse than standalone LSTM. This bad performance may be the result of the large increase of dimensionality in Model 2, as the dimension of the hidden and cell states changes from 200 to 10,000, the size of the vocabulary. A solution would be to downsize the data dimensionality, as done in Shi et al. (2015) in the case of image data.

Figure 6: Left: adjacency matrix of word embeddings. Right: 3D visualization of words' structure.
6 CONCLUSION

This work aims at learning spatio-temporal structures from graph-structured and time-varying data. In this context, the main challenge is to identify the best possible architecture that combines simultaneously recurrent neural networks like vanilla RNN, LSTM or GRU with convolutional neural networks for graph-structured data. We have investigated here two architectures, one using a stack of CNN and RNN (Model 1), and one using convLSTM that considers convolutions instead of fully connected operations in the RNN definition (Model 2). We have then considered two applications: video prediction and natural language modeling. Model 2 has shown good performances in the case of video prediction, by improving the results of Shi et al. (2015). Model 1 has also provided promising performances in the case of language modeling, particularly in terms of learning speed. It has been shown that (i) isotropic filters, maybe surprisingly, can outperform classical 2D filters on images while requiring much fewer parameters, and (ii) that graphs coupled with graph CNN and RNN are a versatile way of introducing and exploiting side-information, e.g. the semantic of words, by structuring a data matrix.

Future work will investigate applications to data naturally structured as dynamic graph signals, for instance fMRI and sensor networks. The graph CNN model we have used is rotationally-invariant and such spatial property seems quite attractive in real situations where motion is beyond translation. We will also investigate how to benefit from the fast learning property of our system to speed up language modeling models. Eventually, it will be interesting to analyze the underlying dynamical property of generic RNN architectures in the case of graphs. Graph structures may introduce stability to RNN systems, and prevent them from expressing unstable dynamic behaviors.

ACKNOWLEDGMENT

This research was supported in part by the European Union's H2020 Framework Programme (H2020-MSCA-ITN-2014) under grant No. 642685 MacSeNet, and Nvidia equipment grant.

REFERENCES

Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral Networks and Locally Connected Networks on Graphs. In International Conference on Learning Representations (ICLR), 2014.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv:1406.1078, 2014.

F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.

Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems (NIPS), 2016.

Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

Felix A. Gers and Jürgen Schmidhuber. Recurrent nets that time and count. In IEEE-INNS-ENNS International Joint Conference on Neural Networks, 2000.

Alex Graves. Generating sequences with recurrent neural networks.
arXiv:1308.0850, 2013.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Ashesh Jain, Amir R. Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-RNN: Deep Learning on Spatio-Temporal Graphs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. arXiv:1609.02907, 2016.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.

Xiaodan Liang, Xiaohui Shen, Jiashi Feng, Liang Lin, and Shuicheng Yan. Semantic object parsing with graph LSTM. arXiv:1603.07063, 2016.

Marc'Aurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv:1412.6604, 2014.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 2009.

Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In Advances in Neural Information Processing Systems (NIPS), 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), 2014.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Association for Computational Linguistics (ACL), 2015.
Hk4_qw5xe

TOWARDS PRINCIPLED METHODS FOR TRAINING GENERATIVE ADVERSARIAL NETWORKS

Martin Arjovsky
Courant Institute of Mathematical Sciences
martinarjovsky@gmail.com

Leon Bottou
Facebook AI Research
leonb@fb.com

ABSTRACT

The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to studying and proving rigorously the problems including instability and saturation that arise when training generative adversarial networks. The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them.

1 INTRODUCTION

Generative adversarial networks (GANs) (Goodfellow et al., 2014a) have achieved great success at generating realistic and sharp looking images. However, they are widely general methods, now starting to be applied to several other important problems, such as semisupervised learning, stabilizing sequence learning methods for speech and language, and 3D modelling (Denton et al., 2015; Radford et al., 2015; Salimans et al., 2016; Lamb et al., 2016; Wu et al., 2016).

Despite their success, there is little to no theory explaining the unstable behaviour of GAN training. Furthermore, approaches to attacking this problem still rely on heuristics that are extremely sensitive to modifications. This makes it extremely hard to experiment with new variants, or to use them in new domains, which limits their applicability drastically. This paper aims to change that, by providing a solid understanding of these issues, and creating principled research directions towards addressing them.

It is interesting to note that the architecture of the generator used by GANs doesn't differ significantly from other approaches like variational autoencoders (Kingma & Welling, 2013). After all, at the core of it we first sample from a simple prior z ~ p(z), and then output our final sample g_θ(z), sometimes adding noise in the end. Always, g_θ is a neural network parameterized by θ, and the main difference is how g_θ is trained.

Traditional approaches to generative modeling relied on maximizing likelihood, or equivalently minimizing the Kullback-Leibler (KL) divergence between our unknown data distribution P_r and our generator's distribution P_g (that depends of course on θ). If we assume that both distributions are continuous with densities P_r and P_g, then these methods try to minimize

KL(P_r || P_g) = ∫_X P_r(x) log (P_r(x) / P_g(x)) dx.

This cost function has the good property that it has a unique minimum at P_g = P_r, and it doesn't require knowledge of the unknown P_r(x) to optimize it (only samples). However, it is interesting to see how this divergence is not symmetrical between P_r and P_g:

- If P_r(x) > P_g(x), then x is a point with higher probability of coming from the data than being a generated sample. This is the core of the phenomenon commonly described as 'mode dropping': when there are large regions with high values of P_r, but small or zero values in P_g. It is important to note that when P_r(x) > 0 but P_g(x) -> 0, the integrand inside the KL grows quickly to infinity, meaning that this cost function assigns an extremely high cost to a generator's distribution not covering parts of the data.

- If P_r(x) < P_g(x), then x has low probability of being a data point, but high probability of being generated by our model. This is the case when we see our generator outputting an image that doesn't look real. In this case, when P_r(x) -> 0 and P_g(x) > 0, we see that the value inside the KL goes to 0, meaning that this cost function will pay extremely low cost for generating fake looking samples.

Clearly, if we would minimize KL(P_g || P_r) instead, the weighting of these errors would be reversed, meaning that this cost function would pay a high cost for generating not plausibly looking pictures.
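This asymmetry is easy to see numerically on discrete toy distributions (a sketch; the three-point distributions below are purely illustrative):

import numpy as np

def kl(p, q):
    return np.sum(p * np.log(p / q))

p_r = np.array([0.45, 0.45, 0.10])
p_drop = np.array([0.49995, 0.49995, 1e-4])  # generator dropping the third mode
p_fake = np.array([0.30, 0.30, 0.40])        # generator over-producing unlikely points

print(kl(p_r, p_drop), kl(p_drop, p_r))      # mode dropping: KL(Pr||Pg) >> KL(Pg||Pr)
print(kl(p_r, p_fake), kl(p_fake, p_r))      # fake-looking samples: the reverse ordering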
Generative adversarial networks have been shown to optimize, in their original formulation, the Jensen-Shannon divergence, a symmetric middle ground between these two cost functions:

JSD(P_r || P_g) = (1/2) KL(P_r || P_A) + (1/2) KL(P_g || P_A),

where P_A is the 'average' distribution, with density (P_r + P_g)/2. An impressive experimental analysis of the similarities, uses and differences of these divergences in practice can be seen at Theis et al. (2016). It is indeed conjectured that the reason of GANs' success at producing realistically looking images is due to the switch from the traditional maximum likelihood approaches (Theis et al., 2016; Huszar, 2015). However, the problem is far from closed.

Generative adversarial networks are formulated in two steps. We first train a discriminator D to maximize

L(D, g_θ) = E_{x~P_r}[log D(x)] + E_{x~P_g}[log(1 - D(x))].   (1)

One can show easily that the optimal discriminator has the shape

D*(x) = P_r(x) / (P_r(x) + P_g(x)),   (2)

and that L(D*, g_θ) = 2 JSD(P_r || P_g) - 2 log 2, so minimizing equation (1) as a function of θ yields minimizing the Jensen-Shannon divergence when the discriminator is optimal. In theory, one would expect therefore that we would first train the discriminator as close as we can to optimality (so the cost function on θ better approximates the JSD), and then do gradient steps on θ, alternating these two things. However, this doesn't work. In practice, as the discriminator gets better, the updates to the generator get consistently worse. The original GAN paper argued that this issue arose from saturation, and switched to another similar cost function that doesn't have this problem. However, even with this new cost function, updates tend to get worse and optimization gets massively unstable.
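The identity relating (1), (2) and the JSD can be checked numerically on discrete toy distributions (a sketch; the arrays are illustrative):

import numpy as np

p_r = np.array([0.7, 0.2, 0.1])
p_g = np.array([0.1, 0.3, 0.6])

d_star = p_r / (p_r + p_g)                       # optimal discriminator (2)
value = np.sum(p_r * np.log(d_star)) + np.sum(p_g * np.log(1 - d_star))

p_a = (p_r + p_g) / 2
jsd = 0.5 * np.sum(p_r * np.log(p_r / p_a)) + 0.5 * np.sum(p_g * np.log(p_g / p_a))
assert np.isclose(value, 2 * jsd - 2 * np.log(2))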
Therefore, several questions arise:

- Why do updates get worse as the discriminator gets better? Both in the original and the new cost function.
- Why is GAN training massively unstable?
- Is the new cost function following a similar divergence to the JSD? If so, what are its properties?
- Is there a way to avoid some of these issues?

The fundamental contributions of this paper are the answer to all these questions, and perhaps more importantly, to introduce the tools to analyze them properly. We provide a new direction designed to avoid the instability issues in GANs, and examine in depth the theory behind it. Finally, we state a series of open questions and problems, that determine several new directions of research that begin with our methods.

2 SOURCES OF INSTABILITY

The theory tells us that the trained discriminator will have cost at most 2 log 2 - 2 JSD(P_r || P_g). However, in practice, if we just train D till convergence, its error will go to 0, as observed in Figure 1, pointing to the fact that the JSD between them is maxed out. The only way this can happen is if the distributions are not continuous¹, or they have disjoint supports.

¹By continuous we will actually refer to an absolutely continuous random variable (i.e. one that has a density), as is typically done. For further clarification see Appendix B.

One possible cause for the distributions not to be continuous is if their supports lie on low dimensional manifolds. There is strong empirical and theoretical evidence to believe that P_r is indeed extremely concentrated on a low dimensional manifold (Narayanan & Mitter, 2010). As of P_g, we will prove soon that such is the case as well.

In the case of GANs, P_g is defined via sampling from a simple prior z ~ p(z), and then applying a function g : Z -> X, so the support of P_g has to be contained in g(Z). If the dimensionality of Z is less than the dimension of X (as is typically the case), then it's impossible for P_g to be continuous. This is because in most cases g(Z) will be contained in a union of low dimensional manifolds, and therefore have measure 0 in X. Note that while intuitive, this is highly nontrivial, since having an n-dimensional parameterization does absolutely not imply that the image will lie on an n-dimensional manifold. In fact, there are many easy counterexamples, such as Peano curves, lemniscates, and many more. In order to show this for our case, we rely heavily on g being a neural network, since we are able to leverage that g is made by composing very well behaved functions. We now state this properly in the following Lemma:

Lemma 1. Let g : Z -> X be a function composed by affine transformations and pointwise nonlinearities, which can either be rectifiers, leaky rectifiers, or smooth strictly increasing functions (such as the sigmoid, tanh, softplus, etc). Then, g(Z) is contained in a countable union of manifolds of dimension at most dim Z. Therefore, if the dimension of Z is less than the one of X, g(Z) will be a set of measure 0 in X.

Proof. See Appendix A.

Driven by this, this section shows that if the supports of P_r and P_g are disjoint or lie in low dimensional manifolds, there is always a perfect discriminator between them, and we explain exactly how and why this leads to an unreliable training of the generator.

2.1 THE PERFECT DISCRIMINATION THEOREMS

For simplicity, and to introduce the methods, we will first explain the case where P_r and P_g have disjoint supports. We say that a discriminator D : X -> [0, 1] has accuracy 1 if it takes the value 1 on a set that contains the support of P_r and value 0 on a set that contains the support of P_g. Namely, P_{x~P_r}[D(x) = 1] = 1 and P_{x~P_g}[D(x) = 0] = 1.

Theorem 2.1. If two distributions P_r and P_g have support contained on two disjoint compact subsets M and P respectively, then there is a smooth optimal discriminator D* : X -> [0, 1] that has accuracy 1 and ∇_x D*(x) = 0 for all x ∈ M ∪ P.

Proof. The discriminator is trained to maximize

L(D, g_θ) = E_{x~P_r}[log D(x)] + E_{x~P_g}[log(1 - D(x))].

Since M and P are compact and disjoint, the distance δ = d(M, P) between both sets is positive. We can now define

M̃ = {x : d(x, M) ≤ δ/3},  P̃ = {x : d(x, P) ≤ δ/3}.

By definition of δ we have that M̃ and P̃ are clearly disjoint compact sets. Therefore, by Urysohn's smooth lemma there exists a smooth function D* : X -> [0, 1] such that D*|_{M̃} = 1 and D*|_{P̃} = 0. Since log D*(x) = 0 for all x in the support of P_r and log(1 - D*(x)) = 0 for all x in the support of P_g, the discriminator is completely optimal and has accuracy 1. Furthermore, let x be in M ∪ P. If we assume that x ∈ M, there is an open ball B = B(x, δ/3) on which D*|_B is constant. This shows that ∇_x D*(x) = 0. Taking x ∈ P and working analogously we finish the proof.
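The theorem is easy to reproduce in one dimension with a logistic discriminator trained by gradient ascent on (1) (a sketch; the supports and the step size are illustrative, and any sufficiently expressive discriminator shows the same behaviour):

import numpy as np

rng = np.random.default_rng(0)
x_r = rng.uniform(0.0, 1.0, 1000)    # samples from Pr, support [0, 1]
x_g = rng.uniform(2.0, 3.0, 1000)    # samples from Pg, disjoint support [2, 3]

w, b = 0.0, 0.0
for _ in range(20000):
    d_r = 1 / (1 + np.exp(-(w * x_r + b)))
    d_g = 1 / (1 + np.exp(-(w * x_g + b)))
    # gradient ascent on E[log D(x)] + E[log(1 - D(x))]
    w += 0.5 * (np.mean((1 - d_r) * x_r) - np.mean(d_g * x_g))
    b += 0.5 * (np.mean(1 - d_r) - np.mean(d_g))

# D tends to 1 on the support of Pr and 0 on the support of Pg, its cost
# tends to 0, and it becomes flat (zero gradient) on both supports.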
Figure 1: First, we trained a DCGAN for 1, 10 and 25 epochs. Then, with the generator fixed, we train a discriminator from scratch (left: discriminator's error; right: discriminator's accuracy). We see the error quickly going to 0, even with very few iterations on the discriminator. This even happens after 25 epochs of the DCGAN, when the samples are remarkably good and the supports are likely to intersect, pointing to the non-continuity of the distributions. Note the logarithmic scale. For illustration purposes we also show the accuracy of the discriminator, which goes to 1 in sometimes less than 50 iterations. This is 1 even for numerical precision, and the numbers are running averages, pointing towards even faster convergence.

In the next theorem, we take away the disjoint assumption, to make it general to the case of two different manifolds. However, if the two manifolds match perfectly on a big part of the space, no discriminator could separate them. Intuitively, the chances of two low dimensional manifolds having this property is rather dim: for two curves to match in space in a specific segment, they couldn't be perturbed in any arbitrarily small way and still satisfy this property. To do this, we will define the notion of two manifolds perfectly aligning, and show that this property never holds with probability 1 under any arbitrarily small perturbations.

Definition 2.1. We first need to recall the definition of transversality. Let M and P be two boundary free regular submanifolds of F, which in our cases will simply be F = R^d. Let x ∈ M ∩ P be an intersection point of the two manifolds. We say that M and P intersect transversally in x if T_x M + T_x P = T_x F, where T_x M means the tangent space of M around x.

Definition 2.2.
We say that two manifolds without boundary M and P perfectly align if there is an x ∈ M ∩ P such that M and P don't intersect transversally in x. We shall note the boundary and interior of a manifold M by ∂M and Int M respectively. We say that two manifolds M and P (with or without boundary) perfectly align if any of the boundary free manifold pairs (Int M, Int P), (Int M, ∂P), (∂M, Int P) or (∂M, ∂P) perfectly align.

The interesting thing is that we can safely assume in practice that any two manifolds never perfectly align. This can be done since an arbitrarily small random perturbation on two manifolds will lead them to intersect transversally or don't intersect at all. This is precisely stated and proven in Lemma 2.

Lemma 2. Let M and P be two regular submanifolds of R^d that don't have full dimension. Let η, η′ be arbitrary independent continuous random variables. We therefore define the perturbed manifolds as M̃ = M + η and P̃ = P + η′. Then

P_{η,η′}(M̃ does not perfectly align with P̃) = 1.

Lemma 3. Let M and P be two regular submanifolds of R^d that don't perfectly align and don't have full dimension. Let L = M ∩ P. If M and P don't have boundary, then L is also a manifold, and has strictly lower dimension than both the one of M and the one of P. If they have boundary, L is a union of at most 4 strictly lower dimensional manifolds. In both cases, L has measure 0 in both M and P.

As stated by Lemma 3, if two manifolds don't perfectly align, their intersection L = M ∩ P will be a finite union of manifolds with dimensions strictly lower than both the dimension of M and the one of P.

We now state our perfect discrimination result for the case of two manifolds.

Theorem 2.2. Let P_r and P_g be two distributions that have support contained in two closed manifolds M and P that don't perfectly align and don't have full dimension. We further assume that P_r and P_g are continuous in their respective manifolds, meaning that if there is a set A with measure 0 in M, then P_r(A) = 0 (and analogously for P_g). Then, there exists an optimal discriminator D* : X -> [0, 1] that has accuracy 1 and for almost any x in M or P, D* is smooth in a neighbourhood of x and ∇_x D*(x) = 0.

Proof. By Lemma 3 we know that L = M ∩ P is strictly lower dimensional than both M and P, and has measure 0 on both of them. By continuity, P_r(L) = 0 and P_g(L) = 0. Note that this implies the support of P_r is contained in M \ L and the support of P_g is contained in P \ L.

Let x ∈ M \ L. Therefore, x ∈ P^c (the complement of P), which is an open set, so there exists a ball of radius ε_x such that B(x, ε_x) ∩ P = ∅. This way, we define

M̃ = ⋃_{x∈M\L} B(x, ε_x/3).

We define P̃ analogously. Note that by construction these are both open sets on R^d. Since M \ L ⊆ M̃, and P \ L ⊆ P̃, the support of P_r and P_g is contained in M̃ and P̃ respectively. As well by construction, M̃ ∩ P̃ = ∅.

Let us define D*(x) = 1 for all x ∈ M̃, and 0 elsewhere (clearly including P̃). Since log D*(x) = 0 for all x in the support of P_r and log(1 - D*(x)) = 0 for all x in the support of P_g, the discriminator is completely optimal and has accuracy 1. Furthermore, let x ∈ M̃. Since M̃ is an open set and D* is constant on M̃, then ∇_x D*|_{M̃} = 0. Analogously, ∇_x D*|_{P̃} = 0. Therefore, the set of points where D* is non-smooth or has non-zero gradient inside M ∪ P is contained in L, which has null-measure in both manifolds, therefore concluding the theorem.

These two theorems tell us that there are perfect discriminators which are smooth and constant almost everywhere in M and P. The fact that the discriminator is constant in both manifolds points to the fact that we won't really be able to learn anything by backproping through it, as we shall see in the next subsection. To conclude this general statement, we state the following theorem on the divergences of P_r and P_g, whose proof is trivial and left as an exercise to the reader.

Theorem 2.3. Let P_r and P_g be two distributions whose support lies in two manifolds M and P that don't have full dimension and don't perfectly align. We further assume that P_r and P_g are continuous in their respective manifolds. Then,

JSD(P_r || P_g) = log 2,
KL(P_r || P_g) = +∞,
KL(P_g || P_r) = +∞.

Note that these divergences will be maxed out even if the two manifolds lie arbitrarily close to each other. The samples of our generator might look impressively good, yet both KL divergences will be infinity. Therefore, Theorem 2.3 points us to the fact that attempting to use divergences out of the box to test similarities between the distributions we typically consider might be a terrible idea. Needless to say, if these divergences are always maxed out, attempting to minimize them by gradient descent isn't really possible. We would like to have a perhaps softer measure, that incorporates a notion of distance between the points in the manifolds. We will come back to this topic later in section 3, where we explain an alternative metric and provide bounds on it that we are able to analyze and optimize.
Section 3, where we explain an alternative metric and provide bounds on it that we are able to analyze and optimize.

Theorems 2.1 and 2.2 showed one very important fact. If the two distributions we care about have supports that are disjoint or lie on low dimensional manifolds, the optimal discriminator will be perfect and its gradient will be zero almost everywhere.

2.2.1 THE ORIGINAL COST FUNCTION

We will now explore what happens when we pass gradients to the generator through a discriminator. One crucial difference with the typical analysis done so far is that we will develop the theory for an approximation to the optimal discriminator, instead of working with the (unknown) true discriminator. We will prove that as the approximation gets better, either we see vanishing gradients or the massively unstable behaviour we see in practice, depending on which cost function we use.

In what follows, we denote by ‖D‖ the norm

‖D‖ = sup_{x∈X} |D(x)| + ‖∇_x D(x)‖₂

The use of this norm is to make the proofs simpler, but could have been done in another Sobolev norm ‖·‖_{1,p} for p < ∞ covered by the universal approximation theorem, in the sense that we can guarantee a neural network approximation in this norm (Hornik, 1991).

Theorem 2.4 (Vanishing gradients on the generator). Let g_θ : Z → X be a differentiable function that induces a distribution P_g. Let P_r be the real data distribution. Let D be a differentiable discriminator. If the conditions of Theorems 2.1 or 2.2 are satisfied, ‖D − D*‖ < ε, and E_{z∼p(z)}[‖J_θ g_θ(z)‖₂²] ≤ M², then²

‖∇_θ E_{z∼p(z)}[log(1 − D(g_θ(z)))]‖₂ < M ε/(1 − ε)

Proof. In both proofs of Theorems 2.1 and 2.2 we showed that D* is locally 0 on the support of P_g. Then, using Jensen's inequality and the chain rule on this support, we have

‖∇_θ E_{z∼p(z)}[log(1 − D(g_θ(z)))]‖₂² ≤ E_{z∼p(z)}[ ‖∇_θ D(g_θ(z))‖₂² / |1 − D(g_θ(z))|² ]
≤ E_{z∼p(z)}[ ‖∇_x D(g_θ(z))‖₂² ‖J_θ g_θ(z)‖₂² / |1 − D(g_θ(z))|² ]
≤ E_{z∼p(z)}[ (‖∇_x D*(g_θ(z))‖₂ + ε)² ‖J_θ g_θ(z)‖₂² / (|1 − D*(g_θ(z))| − ε)² ]
≤ E_{z∼p(z)}[ ε² ‖J_θ g_θ(z)‖₂² / (1 − ε)² ] ≤ M² ε²/(1 − ε)²

Taking square roots finishes the proof.

Corollary 2.1. Under the same assumptions of Theorem 2.4,

lim_{‖D−D*‖→0} ∇_θ E_{z∼p(z)}[log(1 − D(g_θ(z)))] = 0

²Since M can depend on θ, this condition is trivially verified for a uniform prior and a neural network. The case of a Gaussian prior requires more work because we need to bound the growth on z, but is also true for current architectures.

[Figure 2: the norm of the generator's gradient under the original cost (logarithmic scale) plotted against discriminator training iterations, for DCGANs trained for 1, 10 and 25 epochs.]

Figure 2: First, we trained a DCGAN for 1, 10 and 25 epochs. Then, with the generator fixed, we train a discriminator from scratch and measure the gradients with the original cost function. We see the gradient norms decay quickly, in the best case 5 orders of magnitude after 4000 discriminator iterations. Note the logarithmic scale.
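The vanishing-gradient effect of Theorem 2.4 can be observed numerically. Here is a minimal sketch, reusing the disjoint-support setup from before: a "generator" g_θ(z) = (z, θ) produces the fake segment, and we track d/dθ E[log(1 − D(g_θ(z)))] as the discriminator trains longer. All names and constants are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, size=512)
theta = 0.5                                        # fake segment sits at y = theta
real = np.stack([z, np.zeros_like(z)], axis=1)
fake = np.stack([z, theta * np.ones_like(z)], axis=1)
X = np.concatenate([real, fake])
y = np.concatenate([np.ones(512), np.zeros(512)])

w, b = np.zeros(2), 0.0
for step in range(1, 5001):
    D = 1.0 / (1.0 + np.exp(-(X @ w + b)))         # train the discriminator
    g = D - y
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()
    if step % 1000 == 0:
        Df = 1.0 / (1.0 + np.exp(-(fake @ w + b)))
        # chain rule: d/d(theta) log(1 - D) = -(1/(1-D)) * D*(1-D)*w[1] = -D*w[1]
        grad_theta = np.mean(-Df * w[1])
        print(f"step {step}: |generator grad| = {abs(grad_theta):.3e}")
```

The printed gradient magnitude shrinks toward 0 as the discriminator approaches optimality, mirroring the decay in Figure 2.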
This shows that as our discriminator gets better, the gradient of the generator vanishes. For completeness, this was experimentally verified in Figure 2. The fact that this happens is terrible, since the generator's cost function being close to the Jensen-Shannon divergence depends on the quality of this approximation. This points us to a fundamental tension: either our updates to the discriminator will be inaccurate, or they will vanish. This makes it difficult to train using this cost function, or leaves it up to the user to decide the precise amount of training dedicated to the discriminator, which can make GAN training extremely hard.

2.2.2 THE −log D ALTERNATIVE

To avoid gradients vanishing when the discriminator is very confident, people have chosen to use a different gradient step for the generator:

Δθ = ∇_θ E_{z∼p(z)}[−log D(g_θ(z))]

We now state and prove for the first time which cost function is being optimized by this gradient step. Later, we prove that while this gradient doesn't necessarily suffer from vanishing gradients, it does cause massively unstable updates (that have been widely experienced in practice) under the presence of a noisy approximation to the optimal discriminator.

Theorem 2.5. Let P_r and P_{g_θ} be two continuous distributions, and let D* = P_r / (P_r + P_{g_θ₀}) be the optimal discriminator, fixed for now³. Then

E_{z∼p(z)}[−∇_θ log D*(g_θ(z))|_{θ=θ₀}] = ∇_θ[KL(P_{g_θ} ‖ P_r) − 2 JSD(P_{g_θ} ‖ P_r)]|_{θ=θ₀}    (3)

Before diving into the proof, let's look at equation (3) for a second. This is the inverted KL minus two JSDs. First of all, the JSDs are in the opposite sign, which means they are pushing for the distributions to be different, which seems like a fault in the update. Second, the KL appearing in the equation is KL(P_g ‖ P_r), not the one equivalent to maximum likelihood. As we know, this KL assigns an extremely high cost to generating fake looking samples, and an extremely low cost on mode dropping; and the JSD is symmetrical so it shouldn't alter this behaviour. This explains what we see in practice, that GANs (when stabilized) create good looking samples, and justifies what is commonly conjectured, that GANs suffer from an extensive amount of mode dropping.

³This is important since when backpropagating to the generator, the discriminator is assumed fixed.

Proof. We know that

E_{z∼p(z)}[∇_θ log(1 − D*(g_θ(z)))|_{θ=θ₀}] = ∇_θ 2 JSD(P_{g_θ} ‖ P_r)|_{θ=θ₀}

Furthermore, as remarked by Huszar (2016), since D*(x) = P_r(x)/(P_r(x) + P_{g_θ₀}(x)),

KL(P_{g_θ} ‖ P_r) = E_{z∼p(z)}[ log( P_{g_θ}(g_θ(z)) / P_r(g_θ(z)) ) ]
= −E_{z∼p(z)}[ log( D*(g_θ(z)) / (1 − D*(g_θ(z))) ) ] + KL(P_{g_θ} ‖ P_{g_θ₀})

Taking derivatives in θ at θ₀ we get

∇_θ KL(P_{g_θ} ‖ P_r)|_{θ=θ₀} = −∇_θ E_{z∼p(z)}[ log( D*(g_θ(z)) / (1 − D*(g_θ(z))) ) ]|_{θ=θ₀} + ∇_θ KL(P_{g_θ} ‖ P_{g_θ₀})|_{θ=θ₀}
= −∇_θ E_{z∼p(z)}[ log( D*(g_θ(z)) / (1 − D*(g_θ(z))) ) ]|_{θ=θ₀}

Subtracting this last equation from the result for the JSD, we obtain our desired result.

We now turn to our result regarding the instability of a noisy version of the true discriminator.

Theorem 2.6 (Instability of generator gradient updates). Let g_θ : Z → X be a differentiable function that induces a distribution P_g. Let P_r be the real data distribution, with either conditions of Theorems 2.1 or 2.2 satisfied. Let D be a discriminator such that D* − D = ε is a centered Gaussian process indexed by x and independent for every x (popularly known as white noise) and ∇_x D* − ∇_x D = r another independent centered Gaussian process indexed by x and independent for every x. Then, each coordinate of

E_{z∼p(z)}[−∇_θ log D(g_θ(z))]

is a centered Cauchy random variable with infinite expectation and variance.⁴

⁴Note that the theorem holds regardless of the variance of r and ε. As the approximation gets better, this error looks more and more as centered random noise due to the finite precision.

Proof. Let us remember again that in this case D* is locally constant equal to 0 on the support of P_g. We denote by r(z), ε(z) the random variables r(g_θ(z)), ε(g_θ(z)). By the chain rule and the definition of r, ε, we get

E_{z∼p(z)}[−∇_θ log D(g_θ(z))] = E_{z∼p(z)}[ −J_θ g_θ(z)ᵀ ∇_x D(g_θ(z)) / D(g_θ(z)) ] = E_{z∼p(z)}[ J_θ g_θ(z)ᵀ r(z) / ε(z) ]

Since r(z) is a centered Gaussian distribution, multiplying by a matrix doesn't change this fact. Furthermore, when we divide by ε(z), a centered Gaussian independent from the numerator, we get a centered Cauchy random variable on every coordinate. Averaging over z the different independent Cauchy random variables again yields a centered Cauchy distribution, finishing the proof.

A note on technicality: when ε is defined as such, the resulting process is not measurable in x, so we can't take the expectation in z trivially. This is commonly bypassed, and can be formally worked out by stating the expectation as the result of a stochastic differential equation.
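The conclusion of Theorem 2.6 is easy to see empirically: each coordinate of the update is a ratio r(z)/ε(z) of independent centered Gaussians, i.e. a Cauchy random variable, so empirical means of the "gradient" never settle. A minimal sketch (sizes and scales are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
for n in [10**2, 10**4, 10**6]:
    r = rng.normal(0.0, 1.0, size=n)        # noise on the gradient of D
    eps = rng.normal(0.0, 1.0, size=n)      # noise on D itself
    update = r / eps                        # centered Cauchy per coordinate
    print(f"n={n:>8}: mean={update.mean():+9.3f}  "
          f"max|.|={np.abs(update).max():.2e}")
```

Unlike for finite-variance noise, the running mean does not converge as n grows, mirroring the unstable −log D updates seen in Figure 3.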
[Figure 3: the norm of the generator's gradient ‖Δθ‖ under the −log D cost plotted against discriminator training iterations, for DCGANs trained for 1, 10 and 25 epochs.]

Figure 3: First, we trained a DCGAN for 1, 10 and 25 epochs. Then, with the generator fixed, we train a discriminator from scratch and measure the gradients with the −log D cost function. We see the gradient norms grow quickly. Furthermore, the noise in the curves shows that the variance of the gradients is also increasing. All these gradients lead to updates that lower sample quality notoriously.

Note that even if we ignore the fact that the updates have infinite variance, we still arrive at the fact that the distribution of the updates is centered, meaning that if we bound the updates the expected update will be 0, providing no feedback to the gradient.

Since the assumption that the noises of D and ∇D are decorrelated is albeit too strong, we show in Figure 3 how the norm of the gradient grows drastically as we train the discriminator closer to optimality, at any stage in training of a well stabilized DCGAN, except when it has already converged. In all cases, using these updates leads to a notorious decrease in sample quality. The noise in the curves also shows that the variance of the gradients is increasing, which is known to delve into slower convergence and more unstable behaviour in the optimization (Bottou et al., 2016).

3 TOWARDS SOFTER METRICS AND DISTRIBUTIONS

An important question now is how to fix the instability and vanishing gradients issues. Something we can do to break the assumptions of these theorems is add continuous noise to the inputs of the discriminator, therefore smoothening the distribution of the probability mass.

Theorem 3.1. If X has distribution P_X with support on M and ε is an absolutely continuous random variable with density P_ε, then P_{X+ε} is absolutely continuous with density

P_{X+ε}(x) = E_{y∼P_X}[ P_ε(x − y) ] = ∫_M P_ε(x − y) dP_X(y)

Corollary 3.1. If ε ∼ N(0, σ²I), then

P_{X+ε}(x) = (1/Z) ∫_M e^{−‖y−x‖²/(2σ²)} dP_X(y)

and if ε has density proportional to e^{−(1/2)‖x‖²_{Σ⁻¹}} for a covariance matrix Σ, then

P_{X+ε}(x) = (1/Z) ∫_M e^{−(1/2)‖y−x‖²_{Σ⁻¹}} dP_X(y)

This theorem therefore tells us that the density P_{X+ε}(x) is inversely proportional to the average distance to points in the support of P_X, weighted by the probability of these points. In the case of the support of P_X being a manifold, we will have the weighted average of the distance to the points along the manifold. How we choose the distribution of the noise ε will impact the notion of distance we are choosing.
In our corollary, for example, we can see the effect of changing the covariance matrix by altering the norm inside the exponential. Different noises with different types of decays can therefore be used.

Now, the optimal discriminator between P_{g+ε} and P_{r+ε} is

D*(x) = P_{r+ε}(x) / (P_{r+ε}(x) + P_{g+ε}(x))

and we want to calculate what the gradient passed to the generator is.

Theorem 3.2. Let P_r and P_g be two distributions with support on M and P respectively, with ε ∼ N(0, σ²I). Then, the gradient passed to the generator has the form

E_{z∼p(z)}[∇_θ log(1 − D*(g_θ(z)))]
= E_{z∼p(z)}[ a(z) ∫_M P_ε(g_θ(z) − y) ∇_θ ‖g_θ(z) − y‖² dP_r(y)
− b(z) ∫_P P_ε(g_θ(z) − y) ∇_θ ‖g_θ(z) − y‖² dP_g(y) ]    (4)

where a(z) and b(z) are positive functions. Furthermore, b > a if and only if P_{r+ε} > P_{g+ε}, and b < a if and only if P_{r+ε} < P_{g+ε}.

This theorem proves that we will drive our samples g_θ(z) towards points along the data manifold, weighted by their probability and the distance from our samples. Furthermore, the second term drives our points away from high probability samples, again, weighted by the sample manifold and the distance to these samples. This is similar in spirit to contrastive divergence, where we lower the free energy of our samples and increase the free energy of data points. The importance of this term is seen more clearly when we have samples that have higher probability of coming from P_g than from P_r. In this case, we will have b > a and the second term will have the strength to lower the probability of these too likely samples. Finally, if there's an area around x that has the same probability to come from P_g as from P_r, the gradient contributions between the two terms will cancel, therefore stabilizing the gradient when P_r is similar to P_g.

There is one important problem with taking gradient steps exactly of the form (4), which is that in that case, D will disregard errors that lie exactly in g(Z), since this is a set of measure 0. However, g will be optimizing its cost only on that space. This will make the discriminator extremely susceptible to adversarial examples, and will render low cost on the generator without high cost on the discriminator, and lousy meaningless samples. This is easily seen when we realize the term inside the expectation of equation (4) will be a positive scalar times ∇_x log(1 − D*(x)) ∇_θ g_θ(z), which is the directional derivative towards the exact adversarial term of Goodfellow et al. (2014b). Because of this, it is important to backprop through noisy samples in the generator as well. This will yield a crucial benefit: the generator's backprop term will be through samples on a set of positive measure that the discriminator will care about. Formalizing this notion, the actual gradient through the generator will now be proportional to ∇_θ JSD(P_{r+ε} ‖ P_{g+ε}), which will make the two noisy distributions match. As we anneal the noise, this will make P_r and P_g match as well. For completeness, we show the smooth gradient we get in this case. The proof is identical to the one of Theorem 3.2, so we leave it to the reader.

Corollary 3.2. Let ε, ε' ∼ N(0, σ²I) and g̃_θ(z) = g_θ(z) + ε'. Then

E_{z∼p(z),ε'}[∇_θ log(1 − D*(g̃_θ(z)))]
= E_{z∼p(z),ε'}[ a(z) ∫_M P_ε(g̃_θ(z) − y) ∇_θ ‖g̃_θ(z) − y‖² dP_r(y)
− b(z) ∫_P P_ε(g̃_θ(z) − y) ∇_θ ‖g̃_θ(z) − y‖² dP_g(y) ]
= 2 ∇_θ JSD(P_{r+ε} ‖ P_{g+ε})

Proof of Theorem 3.2. Since the discriminator is assumed fixed when backproping to the generator, the only thing that depends on θ is g_θ(z) for every z. By taking derivatives on our cost function,

E_{z∼p(z)}[∇_θ log(1 − D*(g_θ(z)))]
= E_{z∼p(z)}[ ∇_θ log( P_{g+ε}(g_θ(z)) / (P_{g+ε}(g_θ(z)) + P_{r+ε}(g_θ(z))) ) ]
= E_{z∼p(z)}[ ∇_θ log P_{g+ε}(g_θ(z)) − ∇_θ log( P_{g+ε}(g_θ(z)) + P_{r+ε}(g_θ(z)) ) ]
= E_{z∼p(z)}[ ∇_θ P_{g+ε}(g_θ(z)) / P_{g+ε}(g_θ(z)) − ( ∇_θ P_{g+ε}(g_θ(z)) + ∇_θ P_{r+ε}(g_θ(z)) ) / ( P_{g+ε}(g_θ(z)) + P_{r+ε}(g_θ(z)) ) ]
= E_{z∼p(z)}[ ∇_θ[−P_{r+ε}(g_θ(z))] / ( P_{g+ε}(g_θ(z)) + P_{r+ε}(g_θ(z)) ) − ( P_{r+ε}(g_θ(z)) / P_{g+ε}(g_θ(z)) ) ∇_θ[−P_{g+ε}(g_θ(z))] / ( P_{g+ε}(g_θ(z)) + P_{r+ε}(g_θ(z)) ) ]

We now define

a(z) = (1/(2σ²)) · 1 / ( P_{g+ε}(g_θ(z)) + P_{r+ε}(g_θ(z)) )
b(z) = (1/(2σ²)) · ( P_{r+ε}(g_θ(z)) / P_{g+ε}(g_θ(z)) ) · 1 / ( P_{g+ε}(g_θ(z)) + P_{r+ε}(g_θ(z)) )

so that the gradient equals E_{z∼p(z)}[ 2σ² a(z) ∇_θ[−P_{r+ε}(g_θ(z))] − 2σ² b(z) ∇_θ[−P_{g+ε}(g_θ(z))] ]. By Corollary 3.1,

∇_θ[−P_{r+ε}(g_θ(z))] = (1/Z) ∫_M e^{−‖g_θ(z)−y‖²/(2σ²)} ∇_θ ‖g_θ(z) − y‖² / (2σ²) dP_r(y)

so that 2σ² ∇_θ[−P_{r+ε}(g_θ(z))] = ∫_M P_ε(g_θ(z) − y) ∇_θ ‖g_θ(z) − y‖² dP_r(y), and analogously for P_{g+ε}, finishing the proof.
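The smoothing effect is easy to visualize. Below is a minimal 1-D sketch: with Gaussian input noise, the optimal discriminator D*(x) = P_{r+ε}(x) / (P_{r+ε}(x) + P_{g+ε}(x)) becomes smooth and non-degenerate even though P_r and P_g are two point masses (zero-dimensional "manifolds"); the densities follow Corollary 3.1. The grid, σ and support points are illustrative assumptions.

```python
import numpy as np

sigma = 0.3

def noisy_density(x, support_points):
    # P_{X+eps}(x) = (1/Z) * mean over the support of exp(-|x - y|^2 / (2 sigma^2))
    d = x[:, None] - support_points[None, :]
    return np.exp(-d**2 / (2 * sigma**2)).mean(axis=1) / np.sqrt(2 * np.pi * sigma**2)

xs = np.linspace(-2.0, 2.0, 9)
p_r = noisy_density(xs, np.array([0.0]))     # real data concentrated at x = 0
p_g = noisy_density(xs, np.array([1.0]))     # generated data concentrated at x = 1
d_star = p_r / (p_r + p_g)
for x, d in zip(xs, d_star):
    print(f"x={x:+.2f}  D*(x)={d:.3f}")      # varies smoothly in (0, 1)
```

Without the noise, D* would be 0/1 with zero gradient almost everywhere; with it, D* carries usable slope information everywhere.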
In the same way as with Theorem 3.2, a and b will have the same properties. The main difference is that we will be moving all our noisy samples towards the data manifold, which can be thought of as moving a small neighbourhood of samples towards it. This will protect the discriminator against measure 0 adversarial examples.

An interesting observation is that if we have two distributions P_r and P_g with support on manifolds that are close, the noise terms will make the noisy distributions P_{r+ε} and P_{g+ε} almost overlap, and the JSD between them will be small. This is in drastic contrast to the noiseless variants P_r and P_g, where all the divergences are maxed out, regardless of the closeness of the manifolds. We could argue to use the JSD of the noisy variants to measure a similarity between the original distributions, but this would depend on the amount of noise, and is not an intrinsic measure of P_r and P_g. Luckily, there are alternatives.

Definition 3.1. We recall the definition of the Wasserstein metric W(P, Q) for P and Q two distributions over X. Namely,

W(P, Q) = inf_{γ∈Γ} ∫_{X×X} ‖x − y‖₂ dγ(x, y)

where Γ is the set of all possible joints on X × X that have marginals P and Q.

The Wasserstein distance also goes by other names, most commonly the transportation metric and the earth mover's distance. This last name is most explicative: it's the minimum cost of transporting the whole probability mass of P from its support to match the probability mass of Q on Q's support. This identification of transporting points from P to Q is done via the coupling γ. We refer the reader to Villani (2009) for an in-depth explanation of these ideas. It is easy to see now that the Wasserstein metric incorporates the notion of distance (as also seen inside the integral) between the elements in the support of P and the ones in the support of Q, and that as the supports of P and Q get closer and closer, the metric will go to 0, inducing as well a notion of distance between manifolds.

Intuitively, as we decrease the noise, P_X and P_{X+ε} become more similar. However, it is easy to see again that JSD(P_X ‖ P_{X+ε}) is maxed out, regardless of the amount of noise. The following lemma shows that this is not the case for the Wasserstein metric, and that it goes to 0 smoothly when we decrease the variance of the noise.

Lemma 4. If ε is a random vector with mean 0, then we have

W(P_X, P_{X+ε}) ≤ V^{1/2}

where V = E[‖ε‖₂²] is the variance of ε.

Proof. Let x ∼ P_X, and y = x + ε with ε independent from x. We call γ the joint of (x, y), which clearly has marginals P_X and P_{X+ε}. Therefore,

W(P_X, P_{X+ε}) ≤ ∫ ‖x − y‖₂ dγ(x, y) = E_{x∼P_X} E_{y∼x+ε}[‖x − y‖₂] = E_{x∼P_X} E_ε[‖ε‖₂] = E_ε[‖ε‖₂] ≤ E_ε[‖ε‖₂²]^{1/2} = V^{1/2}

where the last inequality was due to Jensen.
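Lemma 4 is simple to check empirically in one dimension, where the Wasserstein distance between two samples can be computed directly. A minimal sketch (sample sizes and noise scales are arbitrary):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=20000)             # samples of X
for sigma in [1.0, 0.3, 0.1, 0.03]:
    y = x + rng.normal(0.0, sigma, size=x.size)    # samples of X + eps
    w = wasserstein_distance(x, y)
    print(f"sigma={sigma:<5} W(X, X+eps) ~ {w:.4f}  (bound V^0.5 = {sigma})")
```

The estimated distance stays below the bound V^{1/2} = σ and shrinks smoothly as the noise is annealed, exactly as the lemma predicts.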
We now turn to one of our main results. We are interested in studying the distance between P_r and P_g without any noise, even when their supports lie on different manifolds, since (for example) the closer these manifolds are, the closer to actual points on the data manifold the samples will be. Furthermore, we eventually want a way to evaluate generative models, regardless of whether they are continuous (as in a VAE) or not (as in a GAN), a problem that has for now been completely unsolved. The next theorem relates the Wasserstein distance of P_r and P_g, without any noise or modification, to the divergence of P_{r+ε} and P_{g+ε}, and the variance of the noise. Since P_{r+ε} and P_{g+ε} are continuous distributions, this divergence is a sensible estimate, which can even be attempted to minimize, since a discriminator trained on those distributions will approximate the JSD between them, and provide smooth gradients as per Corollary 3.2.

Theorem 3.3. Let P_r and P_g be any two distributions, and ε be a random vector with mean 0 and variance V. If P_{r+ε} and P_{g+ε} have support contained on a ball of diameter C, then⁶

W(P_r, P_g) ≤ 2V^{1/2} + 2C √(JSD(P_{r+ε} ‖ P_{g+ε}))    (6)

⁶While this last condition isn't true if ε is a Gaussian, this is easily fixed by clipping the noise.

Proof.

W(P_r, P_g) ≤ 2V^{1/2} + W(P_{r+ε}, P_{g+ε})
≤ 2V^{1/2} + C δ(P_{r+ε}, P_{g+ε})
≤ 2V^{1/2} + C ( δ(P_{r+ε}, P_m) + δ(P_{g+ε}, P_m) )
≤ 2V^{1/2} + C ( √((1/2) KL(P_{r+ε} ‖ P_m)) + √((1/2) KL(P_{g+ε} ‖ P_m)) )
≤ 2V^{1/2} + 2C √(JSD(P_{r+ε} ‖ P_{g+ε}))

We first used Lemma 4 to bound everything but the middle term as a function of V. After that, we used the fact that W(P, Q) ≤ C δ(P, Q), with δ the total variation, which is a popular lemma arising from the Kantorovich-Rubinstein duality. After that, we used the triangular inequality on δ, with P_m the mixture distribution between P_{g+ε} and P_{r+ε}. Finally, we used Pinsker's inequality and later the fact that each individual KL is only one of the non-negative summands of the JSD.

Theorem 3.3 points us to an interesting idea. The two terms in equation (6) can be controlled. The first term can be decreased by annealing the noise, and the second term can be minimized by a GAN when the discriminator is trained on the noisy inputs, since it will be approximating the JSD between the two continuous distributions. One great advantage of this is that we no longer have to worry about training schedules. Because of the noise, we can train the discriminator till optimality without any problems and get smooth interpretable gradients by Corollary 3.2. All this while still minimizing the distance between P_r and P_g, the two noiseless distributions we in the end care about.

ACKNOWLEDGMENTS

The first author would like to especially thank Luis Scoccola for help with the proof of Lemma 1.

The authors would also like to thank Ishmael Belghazi, Yoshua Bengio, Gerry Che, Soumith Chintala, Caglar Gulcehre, Daniel Jiwoong Im, Alex Lamb, Luis Scoccola, Pablo Sprechmann, Arthur Szlam, and Jake Zhao for insightful comments and advice.

REFERENCES

Leon Bottou, Frank E. Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. CoRR, abs/1606.04838, 2016.

Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28, pp. 1486-1494. Curran Associates, Inc., 2015.

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672-2680.
Curran Associates, Inc., 2014a.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014b.

Ferenc Huszar. How (not) to train your generative model: Scheduled sampling, likelihood, adversary? CoRR, abs/1511.05101, 2015.

Alex Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron Courville, and Yoshua Bengio. Professor forcing: A new algorithm for training recurrent networks. CoRR, abs/1610.09038, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.

Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. CoRR, abs/1606.03498, 2016.

Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. CoRR, abs/1610.07584, 2016.

A PROOFS OF THINGS

Proof of Lemma 1. We first consider the case where the nonlinearities are rectifiers or leaky rectifiers of the form σ(x) = 1[x < 0] c₁x + 1[x ≥ 0] c₂x for some c₁, c₂ ∈ ℝ. In this case, g(z) = D_n W_n … D₁ W₁ z, where the W_i are affine transformations and the D_i are some diagonal matrices dependent on z that have diagonal entries c₁ or c₂. If we consider D to be the (finite) set of all diagonal matrices with diagonal entries c₁ or c₂, then g(Z) ⊆ ⋃_{D_i∈D} D_n W_n … D₁ W₁ Z, which is a finite union of linear manifolds.

The proof for the second case is technical and slightly more involved. When σ is a pointwise smooth strictly increasing nonlinearity, then applying it vectorwise it's a diffeomorphism to its image. Therefore, it sends a countable union of manifolds of dimension d to a countable union of manifolds of dimension d. If we can prove the same thing for affine transformations we will be finished, since g(Z) is just a composition of these applied to a dim Z dimensional manifold. Of course, it suffices to prove that an affine transformation sends a manifold to a countable union of manifolds without increasing dimension, since a countable union of countable unions is still a countable union. Furthermore, we only need to show this for linear transformations, since applying a bias term is a diffeomorphism.

Let W ∈ ℝ^{n×m} be a matrix. Note that by the singular value decomposition, W = U Σ V, where Σ is a square diagonal matrix with diagonal positive entries and U, V are compositions of changes of basis, inclusions (meaning adding 0s to new coordinates) and projections to a subset of the coordinates. Multiplying by Σ and applying a change of basis are diffeomorphisms, and adding 0s to new coordinates is a manifold embedding, so we only need to prove our statement for projections onto a subset of the coordinates. Let π : ℝ^{n+k} → ℝ^n, where π(x₁, …, x_{n+k}) = (x₁, …, x_n), be our projection and M ⊆ ℝ^{n+k} our d-dimensional manifold. If n ≤ d, we are done since the image of π is contained in all of ℝ^n, a manifold with at most dimension d. We now turn to the case where n > d. Let π_i(x) = x_i be the projection onto the i-th coordinate. If x is a critical point of π, since the coordinates of π are independent, then x has to be a critical point of a π_i.
By a consequence of the Morse Lemma, the critical points of π_i are isolated, and therefore so are the ones of π, meaning that there is at most a countable number of them. Since π maps the non-critical points onto a d dimensional manifold (because it acts as an embedding) and the countable number of critical points into a countable number of points (or 0 dimensional manifolds), the proof is finished.

Proof of Lemma 2. For now we assume that M and P are without boundary. If dim M + dim P ≥ d, it is known that under arbitrarily small perturbations defined as the ones in the statement of this lemma, the two manifolds will intersect only transversally with probability 1, by the General Position Lemma. If dim M + dim P < d, we will show that with probability 1, M + η and P + η' will not intersect, thereby getting our desired result. Let us then assume dim M + dim P < d. Note that M̃ ∩ P̃ ≠ ∅ if and only if there are x ∈ M, y ∈ P such that x + η = y + η', or equivalently x − y = η' − η. Therefore, M̃ and P̃ intersect if and only if η' − η ∈ M − P. Since η, η' are independent continuous random variables, the difference is also continuous. If M − P has measure 0 in ℝ^d then P(η' − η ∈ M − P) = 0, concluding the proof. We will therefore show that M − P has measure 0. Let f : M × P → ℝ^d be f(x, y) = x − y. If m and p are the dimensions of M and P, then f is a smooth function between an (m + p)-dimensional manifold and a d dimensional one. Clearly, the image of f is M − P. Therefore,

M − P = f({z ∈ M × P | rank(d_z f) < m + p}) ∪ f({z ∈ M × P | rank(d_z f) = m + p})

The first set is the image of the critical points, namely the critical values. By Sard's Lemma, this set has measure 0. Let's call A = {z ∈ M × P | rank(d_z f) = m + p}. Let z be an element of A. By the inverse function theorem, there is a neighbourhood U_z ⊆ M × P of z such that f|_{U_z} is an embedding. Since every manifold has a countable topological basis, we can cover A by countably many sets U_{z_n}, where n ∈ ℕ. We will just note them by U_n. Since f|_{U_n} is an embedding, f(U_n) is an (m + p)-dimensional manifold, and since m + p < d, this set has measure 0 in ℝ^d. Now, f(A) = ⋃_{n∈ℕ} f(U_n), which therefore has measure 0 in ℝ^d, finishing the proof of the boundary free case.

Now we consider the case where M and P are manifolds with boundary. By a simple union bound,

P_{η,η'}(M̃ perfectly aligns with P̃) ≤ P_{η,η'}(Int M̃ perfectly aligns with Int P̃)
+ P_{η,η'}(Int M̃ perfectly aligns with ∂P̃)
+ P_{η,η'}(∂M̃ perfectly aligns with Int P̃)
+ P_{η,η'}(∂M̃ perfectly aligns with ∂P̃) = 0

where the last equality arises when combining the facts that Int M̃ = η + Int M = Int(η + M) (and analogously for the boundary and P), that the boundaries and interiors of M and P are boundary free regular submanifolds of ℝ^d without full dimension, and then applying the boundary free case of the proof.

Proof of Lemma 3. Let m = dim M and p = dim P. We again consider first the case where M and P are manifolds without boundary. If m + p < d, then L = ∅, so the statement is obviously true. If m + p ≥ d, then M and P intersect transversally. This implies that L is a manifold of dimension m + p − d < m, p. Since L is a submanifold of both M and P that has lower dimension, it has measure 0 on both of them.

We now tackle the case where M and P have boundaries. Let us remember that M = Int M ∪ ∂M and the union is disjoint (and analogously for P). By using elementary properties of sets, we can trivially see that

L = M ∩ P = (Int M ∩ Int P) ∪ (Int M ∩ ∂P) ∪ (∂M ∩ Int P) ∪ (∂M ∩ ∂P)

where the unions are disjoint. This is the disjoint union of 4 strictly lower dimensional manifolds by using the first part of the proof. Since each one of these intersections has measure 0 on either the interior or boundary of M (again, by the first part of the proof), and interior and boundary are contained in M, each one of the four intersections has measure 0 in M. Analogously, they have measure 0 in P, and by a simple union bound we see that L has measure 0 in M and P, finishing the remaining case of the proof.
Proof of Theorem 3.1. We first need to show that P_{X+ε} is absolutely continuous. Let A be a Borel set with Lebesgue measure 0. Then, by the fact that ε and X are independent, we know by Fubini that

P_{X+ε}(A) = ∫_{ℝ^d} P_ε(A − x) dP_X(x) = ∫_{ℝ^d} 0 dP_X(x) = 0

where we used the fact that if A has Lebesgue measure zero, then so does A − x, and since P_ε is absolutely continuous, P_ε(A − x) = 0.

Now we calculate the density of P_{X+ε}. Again, by using the independence of X and ε, for any Borel set B we know

P_{X+ε}(B) = ∫_{ℝ^d} P_ε(B − y) dP_X(y)
= E_{y∼P_X}[ P_ε(B − y) ]
= E_{y∼P_X}[ ∫_{B−y} P_ε(x) dx ]
= E_{y∼P_X}[ ∫_B P_ε(x − y) dx ]
= ∫_B E_{y∼P_X}[ P_ε(x − y) ] dx

Therefore, P_{X+ε}(B) = ∫_B P_{X+ε}(x) dx for our proposed P_{X+ε} and all Borel sets B. By the uniqueness of the Radon-Nikodym theorem, this implies the proposed P_{X+ε} is the density of P_{X+ε}. The equivalence of the formula changing the expectation for an integral ∫_M over the support is trivial by the definition of expectation and the fact that the support of P_X lies on M.

B FURTHER CLARIFICATIONS

In this appendix we further explain some of the terms and ideas mentioned in the paper, which due to space constraints, and to keep the flow of the paper, couldn't be extensively developed in the main text. Some of these have to do with notation, others with technical elements of the proofs. In the latter case, we try to convey more intuition than we previously could. We present these clarifications in a very informal fashion in the following item list.

- There are two different but very related properties a random variable can have. A random variable X is said to be continuous if P(X = x) = 0 for all single points x ∈ X. Note that a random variable concentrated on a low dimensional manifold such as a plane can have this property. However, an absolutely continuous random variable has the following property: if a set A has Lebesgue measure 0, then P(X ∈ A) = 0. Since points have measure 0 with the Lebesgue measure, absolute continuity implies continuity. A random variable that's supported on a low dimensional manifold therefore will not be absolutely continuous: let M, a low dimensional manifold, be the support of X. Since a low dimensional manifold has 0 Lebesgue measure, this would imply P(X ∈ M) = 0, which is an absurd since M was the support of X. The property of X being absolutely continuous can be shown to be equivalent to X having a density: the existence of a function f : X → ℝ such that P(X ∈ A) = ∫_A f(x) dx (this is a consequence of the Radon-Nikodym theorem). The annoying part is that in everyday paper writing, when we talk about continuous random variables, we omit the "absolutely" word to keep the text concise and actually talk about absolutely continuous random variables (ones that have a density); this is done throughout almost all sciences and mathematics as well, annoying as it is. However, we made the clarification here since it's relevant to our paper not to mistake the two terms.

- The notation P_r[D(x) = 1] = 1 is the abbreviation of P_r[{x ∈ X : D(x) = 1}] = 1 for a measure P_r.
Another way of expressing this more formally is P_r[D⁻¹(1)] = 1.

- In the proof of Theorem 2.1, the distance between sets d(A, B) is defined as the usual distance between sets in a metric space,

d(A, B) = inf_{x∈A, y∈B} d(x, y)

where d(x, y) is the distance between points (in our case the Euclidean distance).

- Note that not everything that's outside of the support of P_r has to be a generated image. Generated images are only things that lie in the support of P_g, and there are things that don't need to be in the support of either P_r or P_g (these could be places where 0 < D < 1, for example). This is because the discriminator is not trained to discriminate P_r from all things that are not P_r, but to distinguish P_r from P_g. Points that don't lie in the support of P_r or P_g are not important to the performance of the discriminator (as is easily evidenced in its cost).

- Why we define accuracy 1 as is done in the text is to avoid the identification of a single 'tight' support, since this typically leads to problems (if I take a measure 0 set from any support, it still is the support of the distribution). In the end, what we aim for is:
  - We want D(x) = 1 with probability 1 when x ∼ P_r.
  - We want D(x) = 0 with probability 1 when x ∼ P_g.
  - Whatever happens elsewhere is irrelevant (as it is also reflected by the cost of the discriminator).

- We say that a discriminator D* is optimal for g_θ (or its corresponding P_g) if for all measurable functions D : X → [0, 1] we have

L(D*, g_θ) ≤ L(D, g_θ)

for L defined as in equation (1).
NEAR-DATA PROCESSING FOR MACHINE LEARNING

Hyeokjun Choe, Seil Lee, Hyunha Nam, Seongsik Park, Seijoon Kim

1 INTRODUCTION

Recent successes in deep learning can be accredited to the availability of big data that has made the training of large deep neural networks possible. In the conventional memory hierarchy, the training data stored at the low level (e.g., hard disks) need to be moved upward all the way to the CPU registers. As larger and larger data are being used for training large-scale models such as deep networks (LeCun et al., 2015), the overhead incurred by the data movement in the hierarchy becomes more salient, critically affecting the overall computational efficiency and power consumption.

* To whom correspondence should be addressed

ABSTRACT

In computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient descent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML.

The idea of near-data processing (NDP) (Balasubramonian et al., 2014) is to equip the memory or storage with intelligence (i.e., processors) and let it process the data stored therein firsthand. A successful NDP implementation would reduce the data transfers and power consumption, not to mention offloading the computational burden of CPUs. The types of NDP realizations include processing in memory (PIM) (Gokhale et al., 1995) and in-storage processing (ISP) (Acharya et al.,
1998; Kim et al., 2016c; Lee et al., 2016; Choi & Kee, 2015). Despite the potential of NDP, it has not been considered significantly for commercial systems. For PIM, there has been a wide performance gap between the separate processes used to manufacture logic and memory chips. For ISP, commercial hard disk drives (HDDs), the mainstream storage devices for a long time, normally have limited processing capabilities due to tight selling prices.

Recently, we have seen a resurrection of NDP with renewed interest, which has been triggered by two major factors, one in the application side and the other in the technology side. First, computing- and data-intensive deep learning is rapidly becoming the method of choice for various machine learning tasks. To train deep neural networks, a large volume of data is typically needed to ensure performance. Although GPUs and multicore CPUs often provide an effective means for the massive computation required by deep learning, it remains inevitable to store big training data in the storage and then transfer them to the CPU/GPU level for computation. Second, NAND flash-based solid-state drives (SSDs) are becoming popular, gradually replacing HDDs in various computing sectors. To interface SSDs with the host seamlessly, replacing HDDs, SSDs require various software running inside, e.g., for address translation and garbage collection (Kim et al., 2002; Gupta et al., 2009). To suit such needs, SSDs are often equipped with multicore processors, which provide far more processing capabilities than those in HDDs. Usually, there exists plenty of idle time in the processors in SSDs that can be exploited for other purposes than SSD housekeeping (Kim et al., 2010; 2016b).

Motivated by these changes and opportunities, we propose a new SSD platform that allows us to simulate in-storage processing (ISP) of machine learning workloads and evaluate the potential of NDP for machine learning in ISP. Our platform named ISP-ML is a full-fledged system-level simulator of a realistic multi-channel SSD that can execute various machine learning algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on describing our implementation of a specific algorithm in this paper: the stochastic gradient descent (SGD) algorithm, which is the de facto standard for training differentiable learning machines including deep neural networks. Specifically, we implement three types of parallel SGD: synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), and elastic averaging SGD (EASGD) (Zhang et al., 2015). We compare the performance of these implementations of parallel SGD using a 10 times amplified version of MNIST (LeCun et al., 1998). Furthermore, to evaluate the effectiveness of ISP-based optimization by SGD, we compare the performance of ISP-based and the conventional in-host processing (IHP)-based optimization.

To the best of the authors' knowledge, this work is one of the first attempts to apply NDP to a multi-channel SSD for accelerating SGD-based optimization for training differentiable learning machines. Our specific contributions can be stated as follows:

- We created a full-fledged ISP-supporting SSD platform called ISP-ML, which required multi-year team efforts. ISP-ML is versatile and can simulate not only storage-related functionalities of a multi-channel SSD but also NDP-related functionalities in a realistic manner.
ISP-ML can execute various machine learning algorithms using the data stored in the SSD while supporting the simulation of multi-channel NAND flash SSDs to exploit data-level parallelism.
- We thoroughly tested the effectiveness of our platform by implementing and comparing multiple versions of parallel SGD, which is widely used for training various machine learning algorithms including deep learning. We also devised a methodology that can carefully and fairly compare the performance of IHP-based and ISP-based optimization.
- We identified intriguing future research opportunities in terms of exploiting the parallelism provided by the multiple NAND channels inside SSDs. As in high-performance computing, there exist multiple "nodes" (i.e., NAND channel controllers) for sharing workloads, but the communication cost is negligible (due to negligible-latency on-chip communication), unlike in conventional parallel computing. Using our platform, we envision new designs of parallel optimization and training algorithms that can exploit this characteristic, producing enhanced results.

Various types of machine learning algorithms exist (Murphy, 2012; Goodfellow et al., 2016), and their core concept can often be explained using the following equations:

F(D, θ) = L(D, θ) + r(θ)    (1)
θ_{t+1} = θ_t + Δθ(D)    (2)
Δθ(D) = −η ∇F(D, θ)    (3)

where D and θ denote the input data and model parameters, respectively, and a loss function L(D, θ) reflects the difference between the optimal and current hypotheses. A regularizer to handle overfitting is denoted by r(θ), and the objective function F(D, θ) is the sum of the loss and regularizer terms. The main purpose of supervised machine learning can then be formulated as finding the optimal θ that minimizes F(D, θ). Gradient descent is a first-order iterative optimization algorithm that finds the minimum value of F(D, θ) by updating θ on every iteration t in the direction of the negative gradient of F(D, θ), where η is the learning rate. SGD computes the gradient of the parameters and updates them using a single training sample per iteration. Minibatch (stochastic) gradient descent uses multiple (but far less than the whole) samples per iteration. As will be explained shortly, we employ minibatch SGD in our framework, setting the size of a minibatch to the number of training samples in a NAND flash page, which is named 'page-minibatch' (see Figure 2).
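A minimal sketch of equations (1)-(3) for L2-regularized logistic regression, with the minibatch size set to 10 to mimic the 'page-minibatch' (10 samples per 8KB NAND page). The data, labels and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6000, 784))                   # stand-in training data D
y = (X[:, 0] > 0).astype(float)                    # stand-in binary labels
theta = np.zeros(784)
eta, lam, page_minibatch = 0.1, 1e-4, 10

for t in range(0, len(X) - page_minibatch, page_minibatch):
    Xb, yb = X[t:t + page_minibatch], y[t:t + page_minibatch]
    p = 1.0 / (1.0 + np.exp(-Xb @ theta))
    grad_L = Xb.T @ (p - yb) / page_minibatch      # gradient of the loss L
    grad_F = grad_L + lam * theta                  # add the regularizer r(theta)
    theta = theta - eta * grad_F                   # Eqs. (2)-(3)
```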
2.2 PARALLEL AND DISTRIBUTED SGD

Zinkevich et al. (2010) proposed an algorithm that implements parallel SGD in a distributed computing setup. This algorithm often suffers from excessive latency caused by the need for synchronization of all slave nodes. To overcome this weakness, Recht et al. (2011) proposed the lock-free Hogwild! algorithm that can update parameters asynchronously. Hogwild! is normally implemented in a single machine with a multicore processor. Dean et al. (2012) proposed Downpour SGD for distributed computing systems by extending the Hogwild! algorithm. While they successfully implemented asynchronous SGD in a distributed computing system, it often fails to overcome communication bottlenecks and shows inefficient bandwidth usage, caused by substantial data movements between computing nodes. The recently proposed EASGD (Zhang et al., 2015) attempted to minimize communication overhead by reducing the frequency of parameter updates. Many EASGD-based approaches have reported its effectiveness in distributed environments.

SSDs have emerged as a type of next-generation storage device using NAND flash memory (Kim et al., 2010). As shown in the right image in Figure 1(a), a typical SSD consists of an SSD controller, a DRAM buffer, and a NAND flash array. The SSD controller is typically composed of an embedded processor, a cache controller, and channel controllers. The DRAM component, controlled by the cache controller, plays the role of a cache buffer when the NAND flash array is read or written. The NAND flash array contains multiple NAND chips that can be accessed simultaneously thanks to multi-channel configurations and per-channel controllers. Every channel controller is managed by the software called the flash translation layer (FTL), which executes wear-leveling and garbage collection to improve the performance and durability of the NAND flash array.

2.4 PREVIOUS WORK ON NEAR-DATA PROCESSING

Most of the previous work on ISP focused on popular but inherently simple algorithms, such as scan, join, and query operations (Kim et al., 2016c). Lee et al. (2016) proposed to run the merge operation (frequently used by the external sort operation in Hadoop) inside an SSD to reduce IO transfers and read/write operations, also extending the lifetime of the NAND flash inside the SSD. Choi & Kee (2015) implemented algorithms for linear regression, k-means, and string match in the flash memory controller (FMC) via reconfigurable stream processors. In addition, they implemented a MapReduce application inside the embedded processor and FMC of the SSD by using partitioning and pipelining methods that could improve performance and reduce power consumption. BlueDBM (Jun et al., 2015) is an ISP system architecture for distributed computing systems with a flash memory-based embedded field programmable gate array (FPGA). The authors implemented nearest-neighbor search, graph traversal, and string search algorithms. No prior work ever implemented and evaluated SSD-based optimization of machine learning algorithms using SGD.

Figure 1: (a) Block diagram of a typical computing system equipped with an SSD and a magnified view of a usual SSD depicting its internal components and their connections. (b) Schematic of the proposed ISP-ML framework, which is implemented in SystemC using Synopsys Platform Architect (http://www.synopsys.com).

3 PROPOSED METHODOLOGY

Figure 1(a) shows the block diagram of a typical computing system, which is assumed to have an SSD as its storage device. Also shown in the figure is a magnified view of the SSD block diagram that shows the major components of an SSD and their interconnections. Starting from the baseline SSD depicted above, we can implement ISP functionalities by modifying the components marked with black boxes (i.e., ISP HW and ISP SW in the figure). Figure 1(b) shows the detailed schematic of our proposed ISP-ML platform that corresponds to the SSD block (with ISP components) shown in Figure 1(a).

In this section, we provide more details of our ISP-ML framework. In addition, we propose a performance comparison methodology that can compare the performance of ISP and the conventional IHP in a fair manner. As a specific example of the ML algorithms that can be implemented in ISP-ML, we utilize parallel SGD.
3.1 ISP-ML: ISP PLATFORM FOR MACHINE LEARNING ON SSDS

Our ISP-ML is a system-level simulator implemented in SystemC on the Synopsys Platform Architect environment (http://www.synopsys.com). ISP-ML can simulate the hardware and software ISP components marked in Figure 1(b) simultaneously. This integrative functionality is crucial for design space exploration in SSD developments. Moreover, ISP-ML allows us to execute various machine learning algorithms described in high-level languages (C or C++) directly on ISP-ML with only minor modifications.

At the conception of this research, we could not find any publicly available SSD simulator that could be modified for implementing ISP functionalities. This motivated us to implement a new simulator. There exist multiple ways of realizing the idea of ISP in an SSD. The first option would be to use the embedded core inside the SSD controller (Figure 1(a)). This option does not require designing new hardware logic and is also flexible, since the ISP capability is implemented by software. However, this option is not ideal for exploiting hardware acceleration and parallelization. The second option would be to design dedicated hardware logics (such as those boxes with black marks in Figure 1(a) and the entire Figure 1(b)) and integrate them into the SSD controller. Although significantly more efforts are needed for this option compared with the first, we chose this second option due to its long-term advantages provided by hardware acceleration and power reduction.

Specifically, we implemented two types of ISP hardware components, in addition to the software components. First, we let each channel controller not only manage read/write operations to/from its NAND flash channel (as in the usual SSDs) but also perform primitive operations on the data stored in its NAND channel. The type of primitive operation performed depends on the machine learning algorithm used (the next subsection explains more details of such operations for SGD). Additionally, each channel controller in ISP-ML (slave) communicates with the cache controller (master) in a master-slave architecture. Second, we designed the cache controller so that it can collect the outcomes from each of the channel controllers, in addition to its inherent functionality as a cache (DRAM) manager inside the SSD controller. This master-slave architecture can be interpreted as a tiny-scale version of the master-slave architecture commonly used in distributed systems. Just as with the channel controllers, the exact functionality of the cache controller can be optimized depending on the specific algorithm used. Both the channel controllers and the cache controller have internal memory, but the memory size in the latter is far greater than that in the former.

Specific parameters and considerations used in our implementation can be found in Section 4.1. There are a few points worth mentioning. Unlike existing conventional SSD simulators, the baseline SSD implemented in ISP-ML can store data in the NAND flash memory inside. In order to support reasonable simulation speed, we modeled ISP-ML at cycle-accurate transaction level while minimizing the negative impact on accuracy. We omit to describe other minor details of hardware logic implementations, as they are beyond the scope of the conference.

3.2 PARALLEL SGD IMPLEMENTATION ON ISP-ML

Using our ISP-ML platform, we implemented the three types of parallel SGD algorithms outlined in Figure 2: synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), and EASGD (Zhang et al., 2015).
For brevity, we focus on describing the implementation details of these algorithms in ISP-ML and omit the purely algorithmic details of each algorithm; we refer the interested reader to the corresponding references. Note that the size of a minibatch for the minibatch SGD in our framework is set to the number of training samples in a NAND flash page (referred to as 'page-minibatch' in Figure 2).

Synchronous SGD (i-th channel controller):
    Repeat
        Read a page from NAND
        pull θ_cache ; θ_i = θ_cache ; Δθ_i = 0
        Repeat for page-minibatch
            θ_i = θ_i − η ∇l_i(θ) ; Δθ_i = Δθ_i + η ∇l_i(θ) ; t++
        end
        push Δθ_i and wait (sync.)
        θ_cache = θ_cache − (1/n) Σ_i Δθ_i
    end

Downpour SGD (i-th channel controller):
    Repeat
        Read a page from NAND
        pull θ_cache ; θ_i = θ_cache ; Δθ_i = 0
        Repeat for page-minibatch
            θ_i = θ_i − η ∇l_i(θ) ; Δθ_i = Δθ_i + η ∇l_i(θ) ; t++
        end
        push Δθ_i
        θ_cache = θ_cache − Δθ_i
    end

EASGD (i-th channel controller):
    Repeat
        Read a page from NAND
        Repeat for page-minibatch
            θ_i = θ_i − η ∇l_i(θ) ; t++
        end
        if (τ divides t) then
            pull θ_cache
            push (θ_i − θ_cache)
            θ_i = θ_i − α(θ_i − θ_cache)
            θ_cache = θ_cache + α(θ_i − θ_cache)
        end
    end

Figure 2: Pseudo-code of the three SGD algorithms implemented in ISP-ML: synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), and EASGD (Zhang et al., 2015). The θ_cache update in each listing is the computation occurring in the cache controller (master); the other lines are executed in the channel controllers (slaves). Note that the term 'page-minibatch' refers to the minibatch SGD used in our framework, where the size of a minibatch is set to the number of training samples in a NAND flash page.

For implementing synchronous SGD, we let each of the n channel controllers synchronously compute the gradient. Firstly, each channel controller reads page-sized data from the NAND flash memory and then stores the data in the channel controller's buffer. Secondly, the channel controller pulls the cache controller's parameters (θ_cache) and stores them in the buffer. Using the data and parameters stored in the buffer, each channel controller calculates the gradient in parallel. After transferring the gradient to the cache controller, the channel controllers wait for a signal from the cache controller. The cache controller aggregates and updates the parameters and then sends the channel controllers signals to pull and replicate the parameters.

We implemented Downpour SGD in a similar way to synchronous SGD; the major difference is that each channel controller immediately begins the next iteration after transferring the gradient to the cache controller. The cache controller updates the parameters with the gradients from the channel controllers sequentially.

For EASGD, we let each of the channel controllers have its own SGD parameters, unlike synchronous SGD and Downpour SGD. Each channel controller pulls the parameters from the cache controller after computing the gradient and updating its own parameters. Each channel controller calculates the differences between its own parameters and the cache controller's parameters and then pushes the differences to the cache controller.
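For intuition, here is a minimal single-process sketch of the EASGD update rule from Figure 2, with n "channel controllers" reading 10-sample "pages" and a "cache controller" holding θ_cache. This is a sequential toy on a least-squares objective; the timing, NAND reads and master-slave signalling of ISP-ML are deliberately not modeled, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_channels, eta, alpha, tau = 8, 4, 0.05, 0.5, 4
w_true = rng.normal(size=dim)

def read_page():                            # a 10-sample "NAND page"
    X = rng.normal(size=(10, dim))
    return X, X @ w_true

theta_cache = np.zeros(dim)                 # master copy (cache controller)
theta = [np.zeros(dim) for _ in range(n_channels)]
t = 0
for step in range(200):
    i = step % n_channels                   # next channel controller to run
    X, y = read_page()                      # Read a page from NAND
    for xj, yj in zip(X, y):                # Repeat for page-minibatch
        theta[i] -= eta * xj * (xj @ theta[i] - yj)
    t += 1
    if t % tau == 0:                        # EASGD communication period
        diff = alpha * (theta[i] - theta_cache)
        theta[i] -= diff                    # elastic pull toward the master
        theta_cache += diff                 # master moves toward the worker
print("distance to w_true:", np.linalg.norm(theta_cache - w_true))
```

Swapping the communication block for a push of accumulated gradients (with or without a barrier) yields the Downpour and synchronous variants, respectively.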
Of note is that, besides its widespread use, SGD has some appealing characteristics that facilitate hardware implementations. We can implement parallel SGD on top of the master-slave architecture realized by the cache controller and the channel controllers. We can also take advantage of effective techniques developed in the distributed and parallel computation domain. Importantly, each SGD iteration is so simple that it can be implemented without incurring excessive hardware overhead.

3.3 METHODOLOGY FOR IHP-ISP PERFORMANCE COMPARISON

To evaluate the effectiveness of ISP, it is crucial to accurately and fairly compare the performances of ISP and the conventional IHP. However, performing this type of comparison is not trivial (see Section 4.3 for additional discussion). Furthermore, the accurate modeling of commercial SSDs equipped with ISP-ML is impossible due to lack of information about commercial SSDs (e.g., there is no public information on the FTL and internal architectures of any commercial SSD). Therefore, we propose a practical methodology for accurate comparison of IHP and ISP performances, as depicted in Figure 3. Note that this comparison methodology is applicable not only to the parallel SGD implementations explained above but also to other ML algorithms that can be executed in ISP-ML.

In the proposed comparison methodology, we focus on the data IO latency time of the storage (denoted as T_IO), since it is the most critical factor among those that affect the execution time of IHP. The total processing time of IHP (IHP_time or T_total) can then be divided into the data IO time and the non-data IO time (T_nonIO) as follows:

IHP_time = T_total = T_nonIO + T_IO    (4)

To calculate the expected IHP simulation time adjusted to ISP-ML, the data IO time of IHP is replaced by the data IO time of the baseline SSD in ISP-ML (T_IOsim). By using Eq. (4), the expected IHP simulation time can then be represented by

Expected IHP simulation time = T_nonIO + T_IOsim = T_total − T_IO + T_IOsim    (5)

[Figure 3 panels: (a) the real system (host and storage) measuring the total execution time T_total and the IO service time T_IO while extracting an IO trace; (b) ISP-ML (baseline and ISP-implemented) replaying the IO trace to measure the baseline SSD simulation time T_IOsim.]

Figure 3: (a) Overview of our methodology to compare the performance of in-host processing (IHP) and in-storage processing (ISP). (b) Details of our IHP-ISP comparison flow.

The overall flow of the proposed comparison methodology is depicted in Figure 3(b). First, the total processing time (T_total) and the data IO time of storage (T_IO) are measured in IHP, extracting the IO trace of storage during an application execution. The simulation IO time (T_IOsim) is then measured using the IO trace (extracted from IHP) on the baseline SSD of ISP-ML. Finally, the expected IHP simulation time is calculated by plugging the total processing time (T_total), the data IO time of storage (T_IO) and the simulation IO time (T_IOsim) into Eq. (5).
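A minimal helper implementing Eq. (5); times are in seconds and the values below are made-up placeholders, not measurements from the paper.

```python
def expected_ihp_simulation_time(t_total: float, t_io: float,
                                 t_io_sim: float) -> float:
    """Replace the host's measured IO time with the simulated SSD's IO time."""
    t_non_io = t_total - t_io          # Eq. (4): T_total = T_nonIO + T_IO
    return t_non_io + t_io_sim         # Eq. (5)

print(expected_ihp_simulation_time(t_total=12.0, t_io=5.0, t_io_sim=3.2))
```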
With the proposed method and ISP-ML, which are applicable to a variety of IHP environments regardless of the type of storage used, it is possible to quickly and easily compare the performance of various ISP implementations and IHP in a simulation environment.

4.1 EXPERIMENTAL SETUP

All the experiments presented in this section were run on a machine equipped with an 8-core Intel(R) Core i7-3770K CPU (3.50GHz) with DDR3 32GB RAM, a Samsung SSD 840 Pro, and Ubuntu 14.04 LTS (kernel version: 3.19.0-26-generic). We used an ARM 926EJ-S (400MHz) as the embedded processor inside ISP-ML and DFTL (Gupta et al., 2009) as the FTL of ISP-ML. The NAND simulation model we used was derived from a commercial product (Micron NAND MT29F8G08ABACA) and had the following specifications: page size = 8KB, t_prog = 300µs, t_read = 75µs, and t_block_erase = 5ms.¹ Each channel controller had 24KB of memory [8KB (page size) for data and 16KB for ISP] and a floating-point unit (FPU) with 0.5 instruction/cycle performance (with pipelining). The cache controller had (n + 1) × 8KB (page size) of memory, where n is the number of channels (n = 4, 8, 16). Depending on the algorithm running in ISP-ML, we can adjust these parameters.

Note that the main purpose of our experiments in this paper was to verify the functionality of our ISP-ML framework and to evaluate the effectiveness of ISP over the conventional IHP using SGD, even though our framework is certainly not limited to SGD. To this end, we selected logistic regression, a fundamental ML algorithm that can directly show the advantage of ISP-based optimizations over IHP-based optimizations without unnecessary complications. We thus implemented the logistic regression algorithm as a single-layer perceptron (with cross-entropy loss) in SystemC and uploaded it to ISP-ML. As stated in Section 5.3, our future work includes the implementation and testing of more complicated models (such as deep neural networks) by reflecting the improvement opportunities revealed by the experiments presented in this paper.

As test data, we utilized samples from the MNIST database (LeCun et al., 1998). To amplify the number of training samples (to show the scalability of our approach), we used elastic distortion (Simard et al., 2003), producing 10 times more data than the original MNIST (approximately 600,000 training and 10,000 test samples were used in total).
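For reference, a minimal NumPy version of the model we upload to ISP-ML (logistic regression written as a single-layer perceptron with a categorical cross-entropy loss) looks as follows; this is an illustration of the algorithm, not the SystemC implementation.

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_step(W, b, X, Y, lr=0.01):
    # One minibatch step of multinomial logistic regression.
    # X: (m, d) inputs; Y: (m, k) one-hot labels; loss: cross-entropy.
    P = softmax(X @ W + b)
    dW = X.T @ (P - Y) / len(X)
    db = (P - Y).mean(axis=0)
    return W - lr * dW, b - lr * db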
To focus on the performance evaluation of running ISP operations, we preloaded our NAND flash simulation model with the simulation data (the same condition was used for the alternatives for fairness). Based on the size of a training sample in this dataset and the size of a NAND page (8KB), we set the size of each page-minibatch to 10.

¹ These are conservative settings compared with those of the original commercial product; using the specifications of a commercial product would thus further improve the performance of ISP-ML.

[Figure 4 omitted: three panels of test accuracy versus wall-clock time, one per channel configuration, each plotting synchronous SGD, Downpour SGD, and EASGD.]

Figure 4: Test accuracy of three ISP-based SGD algorithms versus wall-clock time with a varying number of NAND flash channels: (a) 4 channels, (b) 8 channels, and (c) 16 channels.

[Figure 5 omitted: test accuracy versus wall-clock time for ISP-based EASGD with 4, 8, and 16 channels and for IHP with 2, 4, 8, 16, and 32GB of memory.]

Figure 5: Test accuracy of ISP-based EASGD in the 4, 8, and 16 channel configurations and IHP-based minibatch SGD using diverse memory sizes.

4.2 PERFORMANCE COMPARISON: ISP-BASED OPTIMIZATION

As previously explained, to identify which SGD algorithm would be best suited for use in ISP, we implemented and analyzed three types of SGD algorithms: synchronous SGD, Downpour SGD, and EASGD. For EASGD, we set the moving rate (α) and the communication period (τ) to 0.001 and 1, respectively. For a fair comparison, we chose different learning rates for the different algorithms, each giving the best performance for that algorithm. Figure 4 shows the test accuracy of the three algorithms with varying numbers of channels (4, 8, and 16) with respect to wall-clock time.

As shown in Figure 4, EASGD gave the best convergence speed in all of the cases tested. EASGD outperformed synchronous and Downpour SGD by factors of 5.24 and 1.96 on average, respectively. Synchronous SGD showed a slower convergence speed than Downpour SGD because it could not start learning on the next page-minibatch until all the channel controllers had reported their results to the cache controller; moreover, one delayed worker could halt the entire process. This result suggests that EASGD is adequate for all the channel configurations tested, in that ISP can benefit from ultra-fast on-chip communication and can employ application-specific hardware that eliminates interruptions from other processors.

4.3 PERFORMANCE COMPARISON: IHP VERSUS ISP

In large-scale machine learning, the computing systems used may suffer from memory shortage, which incurs significant data swapping overhead. In this regard, ISP can provide an effective solution that can potentially reduce the data transfer penalty by processing core operations at the storage level.

In this context, we carried out additional experiments to compare the performance of IHP-based and ISP-based EASGD. We tested the effectiveness of ISP in a memory shortage situation with 5 different configurations of IHP memory: 2GB, 4GB, 8GB, 16GB, and 32GB. We assumed that the host already loaded all of the data into the main memory for IHP.
This assumption is realistic because state-of-the-art machine learning frameworks often employ a prefetch strategy to hide the initial data transfer latency.

As depicted in Figure 5, ISP-based EASGD with 16 channels gave the best performance in our experiments. The convergence speed of the IHP-based optimization slowed down as the memory size was reduced. The results with 16GB and 32GB of memory were similar because 16GB of memory was enough to load and allocate most of the resources required by the process. As a result, ISP was more efficient when memory was insufficient, as would often be the case with large-scale datasets in practice.

4.4 CHANNEL PARALLELISM

To closely examine the effect of exploiting data-level parallelism on performance, we compared the accuracy of the three SGD algorithms while varying the number of channels (4, 8, and 16), as shown in Figure 6. All three algorithms converged faster with more channels; synchronous SGD, for instance, achieved a 1.48× speed-up when the number of channels increased from 8 to 16. From Figure 6(d), we can also note that the convergence speed-up tends to be proportional to the number of channels. These results suggest that the communication overhead in ISP is negligible, and that ISP does not suffer from the communication bottleneck that commonly occurs in distributed computing systems.

[Figure 6 omitted: test accuracy versus wall-clock time for each algorithm with 4, 8, and 16 channels, plus the training speed-up per channel count.]

Figure 6: Test accuracy of different ISP-based SGD algorithms for a varied number of channels: (a) synchronous SGD, (b) Downpour SGD, and (c) EASGD. (d) Training speed-up of the three SGD algorithms for various numbers of channels.

4.5 EFFECTS OF COMMUNICATION PERIOD IN ASYNCHRONOUS SGD

Finally, we investigated how changes in the communication period (i.e., how often data exchange occurs during distributed optimization) affect SGD performance in the ISP environment. Figure 7 shows the test accuracy of the Downpour SGD and EASGD algorithms versus wall-clock time as we varied their communication periods. As described in Zhang et al. (2015), Downpour SGD normally achieved high performance for a low communication period [τ = 1, 4] and became unstable for a high communication period [τ = 16, 64] in ISP. Interestingly, in contrast to the conventional distributed computing setting, the performance of EASGD decreased as the communication period increased in the ISP setting. This is because the on-chip communication overhead in ISP is significantly lower than that in a distributed computing system. As a result, there is no need to extend the communication period to reduce communication overhead in the ISP environment.

[Figure 7 omitted: test accuracy versus wall-clock time for τ = 1, 4, 16, 64.]

Figure 7: Test accuracy of ISP-based Downpour SGD and EASGD algorithms versus wall-clock time for different communication periods.
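As a rough illustration of how the communication period enters the loop, the Downpour-style worker step below only exchanges parameters every τ page-minibatches; it mirrors the pseudo-code of Figure 2 rather than the SystemC implementation, and grad_fn is a placeholder for any gradient routine.

import numpy as np

def downpour_worker_step(theta_cache, theta, delta, t, page, tau, lr, grad_fn):
    # One page-minibatch on a channel controller; exchange only when tau divides t.
    X, y = page
    g = grad_fn(theta, X, y)
    theta = theta - lr * g
    delta = delta + lr * g
    t += 1
    if t % tau == 0:                        # communication period reached:
        theta_cache = theta_cache - delta   # push the accumulated update to the master
        delta = np.zeros_like(delta)
        theta = theta_cache.copy()          # pull the fresh parameters
    return theta_cache, theta, delta, t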
5.1 PARALLELISM IN ISP

Given the advances in underlying hardware and semiconductor technology, ISP can provide various advantages for the data processing involved in machine learning. For example, our ISP-ML can minimize (practically eliminate) the communication overhead between parallel nodes by leveraging the ultra-fast on-chip communication inside an SSD. Minimizing communication overhead can improve various key aspects of data-processing systems, such as energy efficiency, data management, security, and reliability. By exploiting this advantage of fast on-chip communication in ISP, we envision that we will be able to devise new kinds of parallel optimization and machine learning algorithms running on ISP-based SSDs.

Our experimental results also revealed that a high degree of parallelism can be achieved by increasing the number of channels inside an SSD. Some currently available commercial SSDs have as many as 16 channels. Given that commercial ISP-supporting SSDs would (at least initially) be targeted at high-end SSD markets with many NAND flash channels, our approach is expected to add a valuable functionality to such SSDs. Unless carefully optimized, a conventional distributed system will see diminishing returns as the number of nodes increases, due to the increased communication overhead and other factors. Exploiting a hierarchy of parallelism (i.e., parallel computing nodes, each of which has ISP-based SSDs with parallelism inside) may provide an effective acceleration scheme, although a fair amount of additional research is needed before we can realize this idea.

5.2 ISP-IHP COMPARISON METHODOLOGY

To fairly compare the performance of ISP and IHP, it would be ideal to implement ISP-ML in a real semiconductor chip, or to simulate IHP in the ISP-ML framework. Either option, however, is possible but not plausible (at least in academia), because of the high cost of manufacturing a chip and the prohibitively long simulation time for simulating IHP in the Synopsys Platform Architect environment (we would have to implement many components of a modern computer system in order to simulate IHP). Another option would be to implement both ISP and IHP using FPGAs, but this would require another round of significant development effort.

To overcome these challenges (while still ensuring a fair comparison between ISP and IHP), we proposed the comparison methodology described in Section 3.3. In terms of measuring absolute running time, our methodology may not be ideal; however, in terms of highlighting the relative performance of the alternatives, it should provide a satisfactory solution.

Our comparison methodology extracts an IO trace from the storage while executing an application on the host, which is used for measuring the simulation IO time on the baseline SSD in ISP-ML. In this procedure, we assume that the non-IO time of IHP is consistent regardless of the kind of storage the host has.
The validity of this assumption is warranted by the fact that the change in non-IO time caused by the storage is usually negligible compared with the total execution time or the IO time.

5.3 OPPORTUNITIES FOR FUTURE RESEARCH

In this paper we focused on the implementation and testing of ISP-based SGD as a proof of concept; the simplicity and popularity of (parallel) SGD underlie our choice. By design, it is possible to run other algorithms in our ISP-ML framework immediately; recall that our framework includes a general-purpose ARM processor that can run executables compiled from C/C++ code. However, it would be meaningless just to have an ISP-based implementation if its performance were unsatisfactory. To unleash the full power of ISP, we need additional ISP-specific optimization efforts, as is typically the case with hardware design.

With this in mind, we have started implementing deep neural networks (with realistic numbers of layers and hyperparameters) using our ISP-ML framework. In particular, we are carefully devising a way of balancing the memory usage among the DRAM buffer, the cache controller, and the channel controllers inside ISP-ML. It would be reasonable to see an SSD with a DRAM cache of a few gigabytes, whereas it is unrealistic to design a channel controller with that much memory. Given that a large amount of memory is needed only to store the parameters of such deep models, and that IHP and ISP have different advantages and disadvantages, it would be intriguing to investigate how IHP and ISP can cooperate to enhance the overall performance. For instance, we can let ISP-based SSDs perform low-level, data-dependent tasks while assigning high-level tasks to the host, expanding the current roles of the cache controller and the channel controllers inside ISP-ML to the whole system level.

Our future work also includes the following. First, we will be able to implement adaptive optimization algorithms such as Adagrad (Duchi et al., 2011) and Adadelta (Zeiler, 2012). Second, precomputing meta-data during data writes (instead of data reads) could provide another direction of research that can bring even more speedup. Third, we will be able to implement data shuffling functionality in order to maximize the effect of data-level parallelism; currently, ISP-ML arbitrarily splits the input data across its multi-channel NAND flash array. Fourth, we may investigate the effect of NAND flash design on performance, such as the NAND flash page size. Typically, the size of a NAND flash page significantly affects the performance of SSDs, given that the page size (e.g., 8KB) is the basic unit of NAND operation (read and write). In cases where the size of a single example exceeds the page size, frequent data fragmentation is inevitable, eventually affecting the overall performance. The effectiveness of using multiple page sizes was already reported for conventional SSDs (Kim et al., 2016a), and we may borrow this idea to further optimize ISP-ML.
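For example, an Adagrad-style step, one of the adaptive rules mentioned above, only adds a per-parameter accumulator to the plain SGD update, which keeps the extra per-channel-controller state small; the sketch below is the textbook rule, not an ISP-ML design.

import numpy as np

def adagrad_step(theta, accum, grad, lr=0.01, eps=1e-8):
    # Adagrad (Duchi et al., 2011): scale each coordinate of the update by the
    # root of its accumulated squared gradients.
    accum = accum + grad ** 2
    theta = theta - lr * grad / (np.sqrt(accum) + eps)
    return theta, accum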
ACKNOWLEDGMENTS

The authors would like to thank Byunghan Lee at the Data Science Laboratory, Seoul National University, for proofreading the manuscript. This work was supported in part by BK21 Plus (Electrical and Computer Engineering, Seoul National University) in 2016, in part by a grant from SK Hynix, and in part by a grant from Samsung Electronics.

REFERENCES

Anurag Acharya, Mustafa Uysal, and Joel Saltz. Active disks: Programming model, algorithms and evaluation. In ACM SIGOPS Operating Systems Review, volume 32, pp. 81-91. ACM, 1998.

Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pp. 1223-1231, 2012.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159, 2011.

Maya Gokhale, Bill Holmes, and Ken Iobst. Processing in memory: The terasys massively parallel pim array. Computer, 28(4):23-31, 1995.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MIT Press, 2016.

Aayush Gupta, Youngjae Kim, and Bhuvan Urgaonkar. DFTL: a flash translation layer employing demand-based selective caching of page-level address mappings. Volume 44. ACM, 2009.

Sang-Woo Jun, Ming Liu, Sungjin Lee, Jamey Hicks, John Ankcorn, Myron King, Shuotao Xu, et al. Bluedbm: an appliance for big data analytics. In Computer Architecture (ISCA), 2015 ACM/IEEE 42nd Annual International Symposium on, pp. 1-13. IEEE, 2015.

Dong Kim, Kwanhu Bang, Seung-Hwan Ha, Sungroh Yoon, and Eui-Young Chung. Architecture exploration of high-performance pcs with a solid-state disk. IEEE Transactions on Computers, 59(7):878-890, 2010.

Jin-Young Kim, Tae-Hee You, Sang-Hoon Park, Hyeokjun Seo, Sungroh Yoon, and Eui-Young Chung. An effective pre-store/pre-load method exploiting intra-request idle time of nand flash-based storage devices. Under review, 2016b.

Sungchan Kim, Hyunok Oh, Chanik Park, Sangyeun Cho, Sang-Won Lee, and Bongki Moon. In-storage processing of database scans and joins. Information Sciences, 327:183-200, 2016c.

Yann LeCun, Corinna Cortes, and Christopher JC Burges. The mnist database of handwritten digits, 1998.

Kevin P Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.

Patrice Y Simard, David Steinkraus, and John C Platt. Best practices for convolutional neural networks applied to visual document analysis. In ICDAR, volume 3, pp. 958-962, 2003.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging sgd. In Advances in Neural Information Processing Systems, pp. 685-693, 2015.

Martin Zinkevich, Markus Weimer, Lihong Li, and Alex J Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pp. 2595-2603, 2010.
SJqaCVLxx

NEW LEARNING APPROACH BY GENETIC ALGORITHM IN A CONVOLUTIONAL NEURAL NETWORK FOR PATTERN RECOGNITION

Majid Mohammadi
Department of Computer Science
Shahid Bahonar University
Kerman, Iran
mohammadi@uk.ac.ir

ABSTRACT

Almost all of the published work on CNNs¹ is based on the error backpropagation algorithm and the calculation of error derivatives; our proposal instead uses TICA² filters and the NSGA-II³ genetic algorithm to train the LeNet-5 CNN. The genetic algorithm updates the weights of the LeNet-5 network through chromosome updates. In our approach, the weights of LeNet-5 are obtained in two stages: pre-training and fine-tuning. As a result, our approach improves the learning task.

1 Background

The CNN was presented by Fukushima (Fukushima, 1975; Fukushima, 1980; Fukushima, 1986; Fukushima, 1989; Imagawa, 1993) in 1975 to solve handwritten digit recognition problems; he called it the Neocognitron. The CNN was originally inspired by the work of Hubel and Wiesel (Wiese, 1962) on the neurons of the cat's visual cortex. LeCun (Y. LeCun, 1990) implemented the first CNN trained with the online backpropagation algorithm, achieving a highly accurate recognition process.

Briefly, in this paper, using TICA filters and the NSGA-II algorithm makes it possible to train a CNN of the LeNet-5 type with the NSGA-II algorithm in two stages, pre-training and fine-tuning, on a tiny pack of handwritten digits⁴ (a distinct pack of 50 samples from the MNIST dataset).

¹ Convolutional Neural Network. ² Topographic independent component analysis. ³ Non-dominated Sorting Genetic Algorithm II. ⁴ The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It contains a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.

1.1 The LeNet-5 Model

Figure 1 illustrates the principal architecture of the LeNet-5 model, which was originally trained with LeCun's online BP algorithm.

[Figure 1 omitted: the LeNet-5 architecture: input 32×32; C1: 6 feature maps of 28×28 (156 trainable weights, 122,304 connections); S2: 6 feature maps of 14×14 (12 trainable weights, 5,880 connections); C3: 16 feature maps of 10×10 (1,516 trainable weights, 151,600 connections); S4: 16 feature maps of 5×5 (32 trainable weights, 2,000 connections); C5: 120 units (48,120 weights and connections); F6: 84 units (10,164 weights and connections); output: 10 units (850 weights and connections).]

Figure 1: The architecture of LeNet-5

The CNN extracts features hierarchically and creates a spectrum of the input pattern at the output layer; the obtained spectra help to categorize the input. The learning method of the CNN concentrates on extracting the features of the pattern automatically.

LeNet-5 consists of seven layers, excluding the input layer (Duffner, 2007). The input dimension is set to 32×32 pixels. The first five layers, C1, S2, C3, S4, and C5, are convolution and subsampling layers.

The small window, called the receptive field, is a 5×5 window for the convolution layers, with a subsampling ratio factor of 2.
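The layer sizes listed above follow directly from the 5×5 receptive fields and the 2×2 subsampling; a short Python check (an illustration, not part of the original method):

def lenet5_shapes(size=32, kernel=5, pool=2):
    # Feature-map side lengths for LeNet-5 with 5x5 convolutions (no padding)
    # and 2x2 subsampling: 32 -> 28 -> 14 -> 10 -> 5 -> 1.
    c1 = size - kernel + 1      # C1: 28
    s2 = c1 // pool             # S2: 14
    c3 = s2 - kernel + 1        # C3: 10
    s4 = c3 // pool             # S4: 5
    c5 = s4 - kernel + 1        # C5: 1 (120 maps of size 1x1)
    return c1, s2, c3, s4, c5

assert lenet5_shapes() == (28, 14, 10, 5, 1)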
Figure 2 shows the two preliminary layers attached to the input domain, called the retina. In general, a subsampling layer is attached at the end of each convolution layer.

[Figure 2 omitted: a 5×5 convolution stage producing a feature map, followed by a 2×2 subsampling stage, applied to the input (retina).]

Figure 2: The convolution and subsampling maps of LeNet-5 (Duffner, 2007)

1.2 TICA Filters

Topographic independent component analysis produces a 2D map of components organized so that adjacent components extract similar features (Figure 4). The components extract visual features from tiny patches of natural images. The model minimizes the correlations of the components' energies under a pertinent objective function; it can be realized by a CNN (K. Kavukcuoglu, 2009) or by an optimization algorithm (Koray, 2008). Figure 4 shows 16×16 TICA filters obtained after 5000 iterations.

[Figure 4 omitted: a grid of learned TICA filters.]

Figure 4: The TICA filters with dimensions 16×16

1.3 NSGA-II Algorithm

One way to obtain answers to minimization or maximization problems is to use evolutionary algorithms; they can be applied instead of derivative calculations. In these algorithms, a candidate answer plays the role of a gene in a chromosome, a chromosome is called an individual, and the individuals form a population.

The population, following natural selection, competes in the search. The algorithm applies operations such as crossover, mutation, and selection to the population; the population size, mutation rate, crossover rate, and selection strategy drive the outcome.

Each run of the algorithm is called an iteration. In each iteration, the population is evaluated by a so-called fitness function. Several policies exist for stopping the algorithm: finding an appropriate answer or answers, reaching a specified number of iterations, or convergence of the individuals' evaluations.

In short, the population is generated randomly during initialization and evolved over the iterations by randomized operators. Some individuals are chosen for gene mutation and some are chosen to be mixed with others. At the end of an iteration, the population for the next iteration is selected from the individuals according to a specific policy. Researchers have implemented numerous evolutionary algorithms, among them NSGA-II.

Srinivas and Deb presented the NSGA method in 1993 (Deb, 1995). The NSGA algorithm solves multi-objective problems (MOPs) with high performance. The improved version, NSGA-II, was presented in 2002 (Amrit Pratap, 2002); it counts, for each individual, how many other individuals dominate it. These techniques help to acquire Pareto-optimal points with reduced computation, by extracting non-dominated individuals and computing the crowding distances of individuals. Figure 5 illustrates the NSGA-II algorithm.

The sorting in NSGA-II was made efficient by storing, for each individual, how often it is dominated by the others.

NSGA-II has three crucial components: fast sorting of the non-dominated individuals, computation of the crowding distance along the entire Pareto front, and the method of individual selection.

For fast sorting, all individuals are compared against one another, the non-dominated ones are extracted and assigned rank 1, and this step is repeated until every individual in the population has been assigned a rank from 1 to N.
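A compact Python rendering of this ranking step is shown below; it is a sketch of the general idea (the NSGA-II paper's version additionally stores domination counts to avoid recomparisons), not the MATLAB code cited below.

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one
    # (minimization).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(costs):
    # Rank a population by Pareto dominance; costs is a list of objective tuples.
    # Returns one rank per individual (1 = best front).
    n = len(costs)
    ranks = [0] * n
    remaining = set(range(n))
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(costs[j], costs[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks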
Figure 6 illustrates the Pareto front in three parts. Part (a) shows the situation of member C: with two perpendicular lines through C parallel to the axes, the neighbors of C fall into top-left, bottom-left, top-right, and bottom-right regions. The bottom-left members are dominated by C; if the objective functions f1 and f2 are to be maximized, the members in the top-right region dominate C. The top-left and bottom-right regions are indeterminate with respect to domination. A MATLAB implementation is available on the MathWorks website¹.

¹ https://www.mathworks.com, programmed by S. Mostapha Kalami Heris.

[Figure 5 omitted: flow chart of NSGA-II: define the algorithm parameters and the fitness function; generate the initial population randomly; sort the non-dominated individuals by front rank; compute the crowding distance of each individual; select parents by tournament, with priority given to the non-domination rank and then the crowding distance; apply crossover to generate population POPC and mutation to generate population POPM; merge all populations; sort by non-domination rank and then crowding distance; truncate to the initial population size N; repeat until the stop condition holds.]

Figure 5: Flow chart of the NSGA-II algorithm

[Figure 6 omitted: (a) domination regions around a member C in (f1, f2) space; (b) a Pareto front with an infeasible region; (c) nested Pareto fronts 1-3.]

The Pareto front also requires an additional measure inside the population ranking, called the crowding distance. The distance of each chromosome from its neighbors on the same Pareto front is computed and stored using the Euclidean distance. Together, the population rank and the crowding distance define the fitness of each member; a member with a lower rank is assigned a better fitness value.

The method of individual selection combines the non-domination rank and the crowding distance. When comparing two members of the population, the algorithm selects the member with the lower rank; in a competition between members with identical rank, the one with the larger crowding distance wins. The next generation is produced by merging the winning parents and the offspring of this competition, after which the population is truncated to its primary size.

2 The proposal

Our proposal uses genetic (heuristic) algorithms to determine the weights of LeNet-5 without backpropagation or derivative calculations. To follow the proposal more easily, its steps are listed below:

1) The F6 layer is eliminated from the LeNet-5 model; consequently, the chromosome length in the GA is reduced to 51,046 variables.
2) The training and validation functions are merged, and 50 samples covering the 10 classes (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) are selected randomly from the MNIST dataset. The average error over all 50 samples drives the learning of LeNet-5.
3) All the biases in LeNet-5 are set to zero. The population size (number of individuals or particle vectors) is set to 100.
4) We use two stages, each with its own parameters (Table 1) and techniques in the NSGA-II algorithm (Figure 7). The first stage is called pre-training and the second fine-tuning; the parameters were extracted from our experiments and from a LeNet-5 tuned with online backpropagation.
5) The RMSE¹ and MCR² are the two objectives minimized in the multi-objective problem given to NSGA-II (and to the other experimental algorithms); they are computed with Eq. (1) and Eq. (2).
6) The fitness function of the NSGA-II algorithm uses a·x² and b·y², where x denotes the RMSE and y the MCR.
7) The output of the LeNet-5 model for digit 1, the desired label for digit 1, and the table of digit labels used for supervised learning are depicted in Figure 8.

Table 1: The parameters of NSGA-II in the pre-training and fine-tuning stages

Variable              | Pre-training stage | Fine-tuning stage
Iteration(s)          | 1000               | 1000
Number of population  | 100                | 27
Chromosome length     | 51046              | 51046
Variable minimum      | -0.9               | -4.0
Variable maximum      | 0.9                | 6.0
Population crossover  | 0.2                | 0.1
Population mutation   | 0.3                | 0.4
Ratio mutation genes  | 0.5                | 0.00025

    RMSE = sqrt( (1/n) Σ_{i=1..n} (X_obs,i − X_model,i)² )    (1)

where X_obs,i denotes the model's observed output value and X_model,i the desired output value.

    MCR = (M / N) × 100    (2)

where M denotes the number of misclassified templates (samples) and N the total number of templates.
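Putting Eqs. (1)-(2) together with the weighting of step 6, a candidate's objectives can be evaluated as in the sketch below; predict_fn is a hypothetical stand-in for a forward pass of LeNet-5 with the chromosome's weights, and a = b = 1 as in our experiments.

import numpy as np

def evaluate(chromosome, X, Y, predict_fn, a=1.0, b=1.0):
    # Two objectives per Eqs. (1)-(2): RMSE over all outputs and the
    # misclassification rate (MCR, in percent); also returns a*x^2 + b*y^2.
    out = predict_fn(chromosome, X)                 # model outputs, shape (N, 10)
    rmse = np.sqrt(np.mean((out - Y) ** 2))         # Eq. (1)
    wrong = np.sum(out.argmax(1) != Y.argmax(1))    # M = misclassified templates
    mcr = 100.0 * wrong / len(Y)                    # Eq. (2)
    return rmse, mcr, a * rmse**2 + b * mcr**2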
[Figure 7 omitted: the stages of the proposal: the population is initialized with 100 chromosomes built from TICA filters; NSGA-II is run for 1000 iterations in the first stage (pre-training); the best 27 chromosomes seed a second 1000-iteration stage (fine-tuning); the best 15 chromosomes can seed an extra third 1000-iteration stage; the best 8 chromosomes are finally used in the LeNet-5 neural network.]

Figure 7: Stages of the proposal

[Figure 8 omitted: the 10 output neurons of the LeNet-5 model for digit 1 next to the desired label values for digit 1, and the table of one-hot digit labels (the row for digit d has a 1 in column d and 0 elsewhere).]

Figure 8: Top: digit 1 at the output of the LeNet-5 model and digit 1 as a label. Bottom: the table of digit labels.

2.1 Pre-training stage

The parts of the chromosome assigned to layers C1 and C3 of LeNet-5 are not modified during the first stage; at the start of the NSGA-II algorithm, C1 and C3 are initialized randomly from the 160 TICA filters of dimension 16×16 described in Section 1.2. The filters are resized to 5×5 to fit the weight matrices of the respective LeNet-5 layers (Figure 9). Figure 10 illustrates the pre-training.

Figure 9: Five of the TICA filters from Section 1.2, resized to dimension 5×5

[Figure 10 omitted: the correspondence between a chromosome of the NSGA-II algorithm and the weights of the LeNet-5 layers at pre-training initialization: the first genes hold the resized 5×5 TICA filters inserted into the C1 and C3 maps, the remaining genes hold the weights of the latent (classifier) layers, all biases are set to zero, and each chromosome's RMSE and MCR are computed from the LeNet-5 output and fed to the cost function of the genetic algorithm.]

Figure 10: Correspondence between a chromosome of the NSGA-II algorithm and the weights of the LeNet-5 layers at the initialization of the pre-training stage

The pre-training procedure is given in Algorithm 1 as MATLAB-style pseudocode.

Algorithm 1: Pre-training
1: Define the algorithm parameters according to Table 1 and the fitness function x^2 + y^2 in the cost function
2: Generate the initial population's chromosomes randomly (pop) from the 160 TICA filters of dimension 5x5
3: [TestData, TargetData] = SelectSamples(); % select 50 samples from the MNIST dataset
4: for i = 1:nPop
5:   ChromosomeToCNN(pop(i)); % copy the chromosome vector into the CNN network
6:   [RMSE, MCR] = PerformanceCNN(TestData, TargetData);
7:   pop(i).Cost = Cost(RMSE, MCR); % evaluate the cost function for the chromosome
8: end % initialize population
9: for iteration = 1:nIteration
10:   create popC for the crossover population
11:   for iCrossover = 1:nCrossover
12:     i1 = BinaryTournamentSelection(pop); % select an individual from the population
13:     i2 = BinaryTournamentSelection(pop); % select an individual from the population
14:     [popC(iCrossover,1).Position, popC(iCrossover,2).Position] = Crossover(pop(i1).Position, pop(i2).Position, VarRange);
15:     popC(iCrossover,1).Position(1:1684) = pop(i1).PrimaryPosition(1:1684); % restore the chromosome's TICA-filter genes
16:     ChromosomeToCNN(popC(iCrossover,1));
17:     [RMSE, MCR] = PerformanceCNN(TestData, TargetData);
18:     popC(iCrossover,1).Cost = Cost(RMSE, MCR); % evaluate the cost function for the chromosome
19:     popC(iCrossover,2).Position(1:1684) = pop(i2).PrimaryPosition(1:1684); % restore the chromosome's TICA-filter genes
20:     ChromosomeToCNN(popC(iCrossover,2));
21:     [RMSE, MCR] = PerformanceCNN(TestData, TargetData);
22:     popC(iCrossover,2).Cost = Cost(RMSE, MCR); % evaluate the cost function for the chromosome
23:   end % crossover
24:   create popM for the mutation population
25:   for iMutation = 1:nMutation
26:     i = BinaryTournamentSelection(pop);
27:     popM(iMutation).Position = Mutate(pop(i).Position, mu, VarRange);
28:     popM(iMutation).Position(1:1684) = pop(i).PrimaryPosition(1:1684); % restore the chromosome's TICA-filter genes
29:     ChromosomeToCNN(popM(iMutation));
30:     [RMSE, MCR] = PerformanceCNN(TestData, TargetData);
31:     popM(iMutation).Cost = Cost(RMSE, MCR); % evaluate the cost function for the chromosome
32:   end % mutation
33:   popC = popC(:); % flatten to one dimension
34:   pop = [pop popC popM]; % merge all populations
35:   [pop, F] = NonDominatedSorting(pop); % rank the population by non-dominated fronts
36:   pop = CalcCrowdingDistance(pop, F); % compute the crowding distance of each individual
37:   pop = SortPopulation(pop); % sort by non-domination rank and then crowding distance
38:   pop = pop(1:nPop); % truncate the population to the initial size N
39: end % iteration

2.2 Fine-tuning stage

The 27 best individuals from the preceding stage initialize the preliminary population. All layer weights are now allowed to change in LeNet-5; this lets the NSGA-II algorithm improve the TICA filters in layers C1 and C3 of the LeNet-5 scheme so that they extract more pertinent features. Figures 11 and 12 present the graphs of the RMSE and MCR errors.

The fine-tuning algorithm is Algorithm 1 with the TICA-filter restoration lines removed (lines 15, 19, and 28) and with the parameters initialized from the fine-tuning column of Table 1.

2.3 The results

Our experiments (the methods in Tables 3 and 4 and the respective figures) are listed in Table 2. (Note: each experiment was run five times; the average of the best results (minimum RMSE and MCR) was chosen and rounded at the specified iterations in Tables 3 and 4 and the respective figures.) It should be noted that the RMSE and MCR are not directly related; in some results they even move in opposite directions (Figure 13).
The '*' sign marks our proposal's results in Table 2. The results of the two stages (pre-training and fine-tuning) appear in columns 1 and 2 of Tables 3 and 4 for the RMSE and MCR; tracing these results shows that learning is achieved.

Table 2: Experimental list

No | Method                                                                       | Dataset
1  | *NSGA-II initialized with TICA filters, two stages (second stage of our proposal) | MNIST 50
2  | *NSGA-II initialized with TICA filters, two stages (first stage of our proposal)  | MNIST 50
3  | NSGA-II initialized with TICA filters                                        | MNIST 50
4  | Standard GA                                                                  | MNIST 50
5  | Standard PSO                                                                 | MNIST 50
6  | Standard PSO initialized with TICA filters                                   | MNIST 50
7  | NSGA-II                                                                      | MNIST 50

Note: In all the experimental algorithms, the cost (fitness) function for the MOP uses a·x² + b·y² (x denotes the RMSE and y the MCR; a = b = 1). For the SOP¹, the power term x² is applied separately.

¹ Single-objective problem.

Table 3: The selected RMSE (in percentage) values in the experiments for 50 samples

Iteration(s) | Experiments (refer to Table 2)
             | 1    | 2    | 3    | 4    | 5   | 6    | 7
100          | 1.85 | 4.45 | 5.8  | 6.8  | 3.8 | 6.8  | 7.8
300          | 1.4  | 3.4  | 5.1  | 5.9  | 3.8 | 5.8  | 6.5
500          | 1.37 | 2.87 | 4.9  | 3.72 | 3.8 | 3.77 | 5.8
700          | 1.46 | 2.46 | 4.8  | 3.72 | 3.8 | 3.77 | 5.8
950          | 1.39 | 2.19 | 3.59 | 3.72 | 3.8 | 3.77 | 4.4
1000         | 1.34 | 1.94 | 3.51 | 3.72 | 3.8 | 3.77 | 4.22

[Figure 11 omitted: RMSE error (in percentage) versus iterations for the seven experiments of Table 3.]

Figure 11: Comparison of the selected RMSE (in percentage) values from Table 3 across the experiments' iterations

In Figure 12, the results of experiments 3 through 7 show large MCR errors. Figure 13 depicts the outcomes of the second stage of our proposal using the original (unselected) values.

Finally, Table 5 compares the generalization ability of the LeNet-5 trained with our approach against a derivative-trained model, both trained on the same 50 samples. For the test phase, our assessment was carried out on 10,000 samples selected randomly from the MNIST dataset.
Our model tested with 35 percent error while the other tested with 55 percent error in the MCR measure.

Table 4: The selected MCR (in percentage) values in the experiments for 50 samples

Iteration(s) | Experiments (refer to Table 2)
             | 1  | 2  | 3  | 4  | 5  | 6  | 7
100          | 12 | 45 | 74 | 88 | 60 | 90 | 90
300          | 18 | 35 | 72 | 82 | 60 | 82 | 86
500          | 6  | 28 | 64 | 62 | 60 | 68 | 82
700          | 4  | 24 | 62 | 62 | 60 | 68 | 80
950          | 4  | 24 | 62 | 62 | 60 | 68 | 78
1000         | 4  | 22 | 60 | 62 | 60 | 68 | 70

[Figure 12 omitted: MCR error (in percentage) versus iterations for the seven experiments of Table 4.]

Figure 12: Comparison of the selected MCR (in percentage) values from Table 4 across the experiments' iterations

Table 5: The selected MCR (in percentage) values in the experiments for 10,000 samples

No | Experiment                                  | MCR error | Dataset | Samples used for training | Samples used for the final test
1  | The LeNet-5 in our approach                 | 35        | MNIST   | 50                        | 10,000
2  | LeNet-5 with the backpropagation algorithm  | 55        | MNIST   | 50                        | 10,000

Finally, the novelties of the proposal are listed below:

1) The TICA filters of dimension 16×16 were resized to dimension 5×5 without needing to relearn them for the LeNet-5 scheme. This means researchers can apply TICA filters in their convolutional neural networks by directly resizing them to the desired size, e.g., from 16×16 to 5×5.
2) Supervised learning is employed from the very beginning of our proposal, together with the TICA filters. Normally, TICA filters must first be learned by unsupervised training on natural image patches, and only afterwards is the original dataset taught to the CNN model by supervised training.
3) The weights of layers C1 and C3 are kept fixed until the end of the first stage and are only allowed to change in the second stage.

[Figure 13 omitted: (a) the candidate points of the last fine-tuning iteration in (f1, f2) space; (b) the RMSE over the 1000 fine-tuning iterations; (c) the MCR (in percentage) over the 1000 fine-tuning iterations.]

Figure 13: The second stage of our proposal, shown with the original values rather than the selected best ones: (a) the candidate points in the last iteration of the multi-objective optimization in the NSGA-II algorithm (the f1 axis is the RMSE and the f2 axis the MCR); (b) the RMSE over the 1000 fine-tuning iterations; (c) the MCR (in percentage) over the 1000 fine-tuning iterations.

3 Conclusion

In this study we presented a procedure that applies TICA filters and the NSGA-II genetic algorithm, with RMSE and MCR objective functions, to train the LeNet-5 convolutional neural network in two optimization-based stages, called pre-training and fine-tuning. The TICA filters and the NSGA-II algorithm, with simple and useful computations, helped us learn the input patterns.

Applying a GA such as NSGA-II trains the many neurons of a neural network like LeNet-5 without the heavy computation required by derivatives and the backpropagation algorithm; the approach is therefore well suited to parallel processing. The generalization test results are good considering that the models in this article were trained with only a few samples.

Furthermore, the proposal can be run with parallel processing on GPUs and in cloud computing (future work).

REFERENCES

Amrit Pratap, Sameer Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. 2002.

N. Srinivas and K. Deb. Multiobjective function optimization using nondominated sorting genetic algorithms. 1995.
Stefan Duffner. Face Image Analysis With Convolutional Neural Networks. Dissertation, 2007.

K. Fukushima. Cognitron: A self-organizing multilayered neural network. 1975.

K. Fukushima. Neocognitron: A self-organizing neural-network model for a mechanism of pattern recognition unaffected by shift in position. 1980.

K. Fukushima. A neural-network model for selective attention in visual pattern recognition. 1986.

K. Fukushima. Analysis of the process of visual pattern recognition by the neocognitron. 1989.

K. Fukushima and T. Imagawa. Recognition and segmentation of connected characters with selective attention. 1993.

D. Hubel and T. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. 1962.

K. Kavukcuoglu, M.A. Ranzato, R. Fergus, and Y. LeCun. Learning invariant features through topographic filter maps. 2009.

Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition. New York, 2008.
rk9eAFcxg

VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION

Sanjay Purushotham*
{spurusho, wcarvalh, nilanon, yanliu.cs}@usc.edu

ABSTRACT

We study the problem of learning domain-invariant representations for time-series data while transferring the complex temporal latent dependencies between domains. Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first model to capture and transfer temporal latent dependencies of multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model create domain-invariant representations, allowing it to outperform current state-of-the-art deep domain adaptation approaches.

1 INTRODUCTION

Many real-world applications require effective machine learning algorithms that can learn invariant representations across related time-series datasets, for example, precision medicine for patients of various age groups, or mobile application recommendation for users based on location. In these examples, while the domains (i.e., age group and location) may vary, there exist common predictive patterns that can aid in inferring knowledge from one domain to another. More often than not, some domains have a significantly larger number of observations than others (e.g., respiratory failure in adults vs. children).
Therefore, effective domain adaptation of time-series data is in great demand.

In this paper, we address this problem with a model that learns temporal latent dependencies (i.e., dependencies between the latent variables across timesteps) that can be transferred across domains that experience different distributions in their features. We draw inspiration from the Variational Recurrent Neural Network (Chung et al. (2016)) and use variational methods to produce a latent representation that captures underlying temporal latent dependencies. Motivated by the theory of domain adaptation (Ben-David et al. (2010)), we perform adversarial training on this representation, similarly to the Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)), to make the representations invariant across domains. We call our model the Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model. As far as we know, this is the first model capable of accomplishing unsupervised domain adaptation while transferring temporal latent dependencies for complex multivariate time-series data. Figure 1 shows an example of the domain-invariant representations learned by different deep learning models, including our VRADA model. From this figure, we can see that our model (VRADA) shows better mixing of the domain distributions than the competing models, indicating that it learns better domain-invariant representations.

Figure 1: A Story of Temporal Dependency and Domain Invariance. t-SNE projections of the latent representations of (a) DNN, (b) R-DANN, and (c) our VRADA model. We show adaptation from Adult-AHRF to Child-AHRF data; source data is represented with red circles and target data with blue circles. From left to right, one can see that domain adaptation results in mixing the source and target domain data distributions. One can also see a story of how encoding more temporal dependency into the latent representation induces more domain-invariant representations: as models capture more underlying factors of variation, post-domain-adaptation representations gradually smoothen and become evenly dispersed, indicating that temporal dependency acts synergistically with domain adaptation.

In order to prove the efficacy of our model, we perform domain adaptation using real-world healthcare time-series data. We choose healthcare data for two primary reasons. (1) Currently, a standard protocol in healthcare is to build, evaluate, and deploy machine learning models for particular datasets that may perform poorly on unseen datasets with different distributions. For example, models built around patient data from particular age groups perform poorly on other age groups because the features used to train the models have different distributions across the groups (Alemayehu & Warner (2004); Lac et al. (2004); Seshamani & Gray (2004)). Knowledge learned from one group is not transferrable to the other group. Domain adaptation seems like a natural solution to this problem, as knowledge needs to be transferred across domains that share features exhibiting different distributions. (2) Healthcare data has multiple attributes recorded per patient visit, and it is longitudinal and episodic in nature. Thus, healthcare data is a suitable platform on which to study a model that seeks to capture complex temporal representations and transfer this knowledge across domains.

The rest of the paper is structured as follows. In the following section, we briefly discuss the current state-of-the-art deep domain adaptation approaches. Afterwards, we present our model mathematically, detailing how it simultaneously learns to capture temporal latent dependencies and create domain-invariant representations. In Section 4, we compare and contrast the performance of the proposed approach with other approaches on two real-world healthcare datasets, and provide analysis of our domain-invariant representations.

2 RELATED WORK

Domain adaptation is a specific instance of transfer learning in which the feature spaces are shared but their marginal distributions differ.
A good survey of both topics can be found in several previous works (Pan & Yang (2009); Jiang (2008); Patel et al. (2015)). Domain adaptation has been thoroughly studied in computer vision (Saenko et al. (2010); Gong et al. (2012); Fernando et al. (2013)) and natural language processing (NLP) (Blitzer (2007); Foster et al. (2010)) applications. Recently, the deep learning paradigm has become popular in domain adaptation (Chen et al. (2012); Tzeng et al. (2015); Yang & Eisenstein; Long & Wang (2015)) due to its ability to learn rich, flexible, non-linear domain-invariant representations. Here, we briefly discuss two deep domain adaptation approaches which are closely related to our proposed model. The Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)) is a deep domain adaptation model which uses two core components to create domain-invariant representations: a feature extractor that produces the data's latent representation, and an adversarial domain labeler that attempts to classify that data's domain, pushing the feature extractor to produce latent representations which are domain-invariant. In Louizos et al. (2015), the authors propose the Variational Fair Autoencoder, which uses the variational autoencoding architecture (Kingma & Welling (2013)) to learn latent representations where most of the information about certain known factors of variation is purged from the representation while still retaining as much information about the data as possible. While these deep learning approaches learn domain-invariant representations, they fail to capture and transfer the underlying complex temporal latent relationships from one domain to another, as they use convolutional or feed-forward neural networks, which we claim are not suitable for multivariate time-series data.

Figure 2: Block diagram of VRADA. Blue lines show the inference process, q_θe(z_t | x_≤t, z_<t). Brown lines show the generation process, p_θg(x_t | z_≤t, x_<t). Red lines show the recurrence process, where h_t is informed by h_{t−1}, which is informed by z_{t−1} and x_{t−1}. Black lines indicate classification.

Other works such as Huang & Yates (2009) and Xiao & Guo (2013) have used distributed representations for domain adaptation in NLP sequence labeling tasks. However, they either induce hidden states as latent features using dynamic Bayesian networks (DBNs) or learn generalizable distributed representations of words using Recurrent Neural Networks (RNNs) (Socher et al. (2011)) to enable domain adaptation. These works either model the highly non-linear dynamics, as one can with an RNN, or capture the complex latent dependencies present in sequential data, as one can with DBNs, but not both. To overcome the challenges of DBNs and RNNs, the Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)) was proposed recently to capture the complex relationship between the underlying hidden factors of variation and the output variables at different time-steps. The VRNN uses Variational Autoencoders (VAEs) (Kingma & Welling (2013); Goodfellow et al. (2016)) at each time-step to learn a complex relationship between the latent hidden factors across time-steps. Like the VAE, its latent variable is parametric. Combined, these properties make it well-suited for multimodal sequential data such as multivariate time-series. In the following section, we discuss our approach, Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), which uses a VRNN to model and transfer complex domain-invariant temporal latent relationships for unsupervised domain adaptation of multivariate time-series.
3 VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION

In this section, we present our Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model for the purpose of capturing and transferring temporal latent dependencies across domains via domain-invariant representations. First, we introduce the notation used in this paper, and then we discuss the VRADA model in detail.

3.1 NOTATIONS

We denote a multivariate variable-length time-series dataset with N samples as {x^i = (x_t^i)_{t=1}^{T_i}}_{i=1}^{N}, where x_t^i ∈ R^D. (Note: in our experiments, T_i = t for all data samples, but we keep the notation general.) We assume that each source domain data sample x^i comes with L labels y^i ∈ {0, 1}^L (for example, these labels may correspond to a clinical outcome such as mortality or ICD9 diagnosis codes), while the target domain has no labeled data samples. We assign a domain label d^i ∈ {0, 1} to each data sample to indicate whether it comes from the source or the target domain; d^i will be used for adversarial training.

3.2 VRADA

The block diagram of our VRADA model is shown in Figure 2. To explicitly model the dependencies between the latent random variables across time steps, the VRADA model utilizes a Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)). The VRNN effectively contains a variational autoencoder (Kingma & Welling (2013)) at every time step, all of which are conditioned on previous autoencoders via the hidden state h_{t−1} of an RNN, such as an LSTM (Hochreiter & Schmidhuber (1997)).
Therefore, for each time-step t of x^i, we infer a latent random variable z_t via

    z_t | x_t ~ N(μ_z,t, diag(σ_z,t)), where [μ_z,t, σ_z,t] = φ_τ^enc(φ_τ^x(x_t), h_{t−1}),

with a conditional prior

    z_t ~ N(μ_0,t, diag(σ_0,t)), where [μ_0,t, σ_0,t] = φ_τ^prior(h_{t−1}),

and we generate x_t from z_t via

    x_t | z_t ~ N(μ_x,t, diag(σ_x,t)), where [μ_x,t, σ_x,t] = φ_τ^dec(φ_τ^z(z_t), h_{t−1}),

with the recurrence h_t = f_θ(φ_τ^x(x_t), φ_τ^z(z_t), h_{t−1}). The VRNN is trained by maximizing the timestep-wise variational lower bound; we write the corresponding loss for sample x^i (the negative of the lower bound) as

    L_r(x^i; θe, θg) = −E_{q_θe(z_≤T_i | x_≤T_i)} [ Σ_{t=1}^{T_i} ( −D_KL( q_θe(z_t | x_≤t, z_<t) || p(z_t | x_<t, z_<t) ) + log p_θg(x_t | z_≤t, x_<t) ) ],

where θe and θg denote the parameters of the inference (encoder) and generation (decoder) networks. Combining this reconstruction loss with the source classification loss and a domain regularizer R(θe) gives

    min_{θe, θg, θy} (1/N) Σ_{i=1}^{N} L_r(x^i; θe, θg) + (1/n) Σ_{i=1}^{n} L_y(x^i; θy, θe) + λ R(θe),    (1)

where n is the number of source domain samples. As we are interested in achieving domain adaptation via the latent representation z̃^i (i.e., making z̃^i domain-invariant), we can adversarially train the above objective function (Equation 1) by employing the domain adaptation idea proposed in Ganin et al. (2016). Let G_y(z̃^i; θy) and G_d(z̃^i; θd) represent the source label classifier (to predict the source labels y^i) and the domain label classifier (to predict the domain labels d^i), respectively, with parameters θy and θd for a given input z̃^i. Here, G_y(·) and G_d(·) can be deep neural networks. We denote their loss functions respectively as

    L_y(x^i; θy, θe) = L_B(G_y(V_e(x^i; θe); θy), y^i);    L_d(x^i; θd, θe) = L_B(G_d(V_e(x^i; θe); θd), d^i),

where L_B is a classification loss such as the binary or categorical cross-entropy loss function, and V_e(x^i; θe) is the VRNN encoder that maps an input x^i to z̃^i.

Now, for adversarial training, we consider the following domain adaptation term as the regularizer of Equation 1:

    R(θe) = max_{θd} [ −(1/n) Σ_{i=1}^{n} L_d(x^i; θd, θe) − (1/n') Σ_{i=n+1}^{N} L_d(x^i; θd, θe) ],    (2)

where n' is the number of target domain samples. As shown in Ganin et al. (2016), R is the domain regularizer, and it is derived from the empirical H-divergence between the source domain and target domain samples (Ben-David et al. (2010)).

Combining the joint optimization problems of Equations 1 and 2 leads to our VRADA model, where we minimize the source classification risk and at the same time achieve domain adaptation. Mathematically, we optimize the following complete objective function:

    E(θe, θg, θy, θd) = (1/N) Σ_{i=1}^{N} L_r(x^i; θe, θg) + (1/n) Σ_{i=1}^{n} L_y(x^i; θy, θe) − λ ( (1/n) Σ_{i=1}^{n} L_d(x^i; θd, θe) + (1/n') Σ_{i=n+1}^{N} L_d(x^i; θd, θe) ),    (3)

where λ is a trade-off between optimizing for domain-invariant representations and optimizing the source classification accuracy. Our optimization involves minimization with respect to some parameters and maximization with respect to the others, i.e., we iteratively solve

    (θ̂e, θ̂g, θ̂y) = argmin_{θe, θg, θy} E(θe, θg, θy, θ̂d),    θ̂d = argmax_{θd} E(θ̂e, θ̂g, θ̂y, θd),    (4)

with the gradient updates calculated as

    θe ← θe − η ( ∂L_r/∂θe + ∂L_y/∂θe − λ ∂L_d/∂θe ),    (5)
    θg ← θg − η ∂L_r/∂θg,    θy ← θy − η ∂L_y/∂θy,    (6)
    θd ← θd − η λ ∂L_d/∂θd,    (7)

where η is the learning rate. We can use stochastic gradient descent (SGD) to solve Equations (5)-(7). To solve Equation (4), we can use SGD together with the gradient reversal layer (GRL) (Ganin et al. (2016)). The role of the GRL is to reverse the gradient sign while performing backpropagation. This ensures that the domain classification loss is maximized, which makes the feature representations domain-invariant.

Thus, VRADA results in learning feature representations which are domain-invariant (due to the domain regularizer R) and which capture the temporal latent dependencies (due to optimizing the VRNN objective function L_r). These combine to allow the discriminative power of VRADA on the source domain to transfer to the target domain.
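As a sketch of how Equations (5)-(7) can be realized in one training iteration, consider the following framework-agnostic Python; the parameter blocks and the grads dictionary are hypothetical placeholders for whatever autodiff machinery supplies the partial derivatives, and the sign flip on the dL_d/dθe term is exactly what the gradient reversal layer implements during backpropagation.

def vrada_step(params, grads, lr, lam):
    # One iteration of Eqs. (5)-(7); params and grads hold NumPy arrays (or scalars).
    te, tg, ty, td = params['e'], params['g'], params['y'], params['d']
    te = te - lr * (grads['Lr_e'] + grads['Ly_e'] - lam * grads['Ld_e'])  # Eq. (5)
    tg = tg - lr * grads['Lr_g']                                          # Eq. (6)
    ty = ty - lr * grads['Ly_y']                                          # Eq. (6)
    td = td - lr * lam * grads['Ld_d']                                    # Eq. (7)
    return {'e': te, 'g': tg, 'y': ty, 'd': td}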
Mortality Prediction: For the Adult-AHRF and Child-AHRF datasets, we are interested in predicting mortality, i.e. whether a patient dies from AHRF during their hospital stay. 20.10% of all the patients in Child-AHRF and 13.84% of all patients in Adult-AHRF have a positive mortality label (i.e. the patients who die in hospital).

ICD9 Code Prediction: Each admission record in the MIMIC-III dataset has multiple ICD-9 diagnosis codes. We group all the occurrences of the ICD-9 codes into 20 diagnosis groups[2]. For the ICD9 dataset, we are interested in predicting these 20 ICD-9 Diagnosis Categories for each admission record. We treat this as a multi-task prediction problem.

Domain Adaptation Tasks: We study the unsupervised domain adaptation (i.e. target domain labels are unavailable during training and validation) task within age groups of the Adult-AHRF dataset, on the ICD9 dataset, and across the Adult- and Child-AHRF datasets. For the Adult-AHRF and ICD9 datasets, we created 12 source-target domain pairs using the age groups, pairing up each domain D_i with every other domain D_j (j != i); for example, the source-target pair 2-5 was used for adapting from group 2 (working-age adult) to group 5 (old elderly). We also created 4 source-target pairs for performing domain adaptation from the 4 adult age-groups to the 1 child age-group."}, {"section_index": "9", "section_name": "4.2 METHODS AND IMPLEMENTATION DETAILS", "section_text": "We categorize the methods used in our main experiments into the following groups:

Non-adaptive baseline methods: Logistic Regression (LR), Adaboost with decision regressors (Adaboost), and feed-forward deep neural networks (DNN).
Deep domain adaptation methods: Domain Adversarial Neural Networks (DANN) (Ganin et al. (2016)); DANN with an RNN (LSTM) as feature extractor (R-DANN); Variational Fair Autoencoder (VFAE) (Louizos et al. (2015)).
Our method: Variational Recurrent Adversarial Deep Domain Adaptation (VRADA)[3].

[1]: https://www.cms.gov/Research-Statistics-Data-and-Systems/
Statistics-Trends-and-Reports/NationalHealthExpendData/
[2]: http://tdrdata.com/ipd/ipd_SearchForICD9CodesAndDescriptions.aspx. The category "Conditions Originating in the Perinatal Period" is not present in the preprocessed dataset.
[3]: Codes will be publicly released soon."}, {"section_index": "10", "section_name": "4.3 QUANTITATIVE RESULTS", "section_text": "In Table 1, we compare the performance of non-domain-adaptation and domain adaptation models on the target domain test subset for the AHRF mortality prediction task. It is immediately clear that domain adaptation methods consistently outperform non-domain-adaptation methods. We see that the VRADA generally outperforms both variants of the DANN, consistently scoring ~4% higher. While the standard deviation for the VRADA was about 1%, it was about 2% for the R-DANN, further showing our model's efficacy as it converges to more stable local optima. Our VRADA model beats the state-of-the-art DANN (Ganin et al. (2016)) and VFAE (Louizos et al. (2015)) on all the source-target pair domain adaptation tasks for the Adult-AHRF dataset. For the domain adaptation from the Adult-AHRF to the Child-AHRF dataset, we observe that VRADA mostly outperforms all the competing models. This shows that our model can perform well even for smaller target domain datasets.

Table 1: AUC Comparison for AHRF Mortality Prediction task with and without Domain Adaptation

In the above table, we test classification without adaptation using Logistic Regression (LR), Adaboost with decision tree classifiers, and feed-forward Deep Neural Networks (DNN); and with adaptation using Deep Domain Adversarial Neural Networks (DANN), a DANN with an LSTM in its feature extractor (R-DANN), the Variational Fair Autoencoder (VFAE), and our Variational Recurrent Adversarial Deep Domain Adaptation model (VRADA). All results are reported on the target domain test subset dataset.

In all our experiments, we conducted unsupervised domain adaptation where target domain labels are unavailable during training and validation. For R-DANN, we used an LSTM (Hochreiter & Schmidhuber (1997)) as the feature extractor network instead of the feed-forward neural networks used in DANN. For VFAE, DANN, and all the non-domain-adaptive approaches, we flattened the time series along the time axis and treated it as the input to the model. For fairness, the classifier and feature extractors of the VRADA and R-DANN were equivalent in depth and both had the same model capacity. We also ensured that the sizes of the latent feature representations z^i are similar for the VRADA and DANN models. The model capacity of VFAE was chosen to be similar to VRADA. All the deep domain adaptation models including ours had a depth of 8 (including output classifier layers). We used the Adam optimizer (Kingma & Ba (2014)) and ran all models for 500 epochs with a learning rate of 3e-4. We set an early stopping criterion: training stops if the model does not experience a decrease in the validation loss for 20 epochs. Source domain data was split into train/validation subsets with a 70/30 ratio and target domain data into train/validation/test subsets with a 70/15/15 ratio.
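To make the adversarial update of equations (3)-(7) concrete, here is a minimal PyTorch sketch of one training step using a gradient reversal layer; for brevity it uses a plain GRU as a stand-in encoder and omits the VRNN reconstruction term L_r, and all module names and sizes are illustrative assumptions rather than the released code:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales the gradient by -lambda on the way back,
    so descending on the domain loss trains G_d while making the encoder domain-invariant."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

feat = nn.GRU(input_size=20, hidden_size=64, batch_first=True)  # stand-in encoder V_e
clf_y = nn.Linear(64, 1)                                        # source classifier G_y
clf_d = nn.Linear(64, 1)                                        # domain classifier G_d
params = list(feat.parameters()) + list(clf_y.parameters()) + list(clf_d.parameters())
opt = torch.optim.Adam(params, lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x_src, y_src, x_tgt, lam=1.0):
    _, h_src = feat(x_src)
    _, h_tgt = feat(x_tgt)
    z_src, z_tgt = h_src[-1], h_tgt[-1]                  # last hidden state as z
    loss_y = bce(clf_y(z_src).squeeze(1), y_src)         # L_y on source labels only
    z_all = GradReverse.apply(torch.cat([z_src, z_tgt]), lam)
    d = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))])
    loss_d = bce(clf_d(z_all).squeeze(1), d)             # L_d with reversed gradients
    opt.zero_grad()
    (loss_y + loss_d).backward()
    opt.step()

train_step(torch.randn(16, 4, 20), torch.randint(0, 2, (16,)).float(),
           torch.randn(16, 4, 20))
```

A single optimizer suffices here because the reversal layer already flips the sign of the domain gradient flowing into the encoder, reproducing the min-max structure of equation (4).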
In order to compare all the methods, we report AUC scores on the entire target domain set, and on the test subset of each target domain of a source-target pair.

Source-Target  LR     Adaboost  DNN    DANN   VFAE   R-DANN  VRADA
3-2            0.555  0.562     0.569  0.572  0.615  0.603   0.654
4-2            0.624  0.645     0.569  0.589  0.635  0.584   0.656
5-2            0.527  0.554     0.551  0.540  0.588  0.611   0.616
2-3            0.627  0.621     0.550  0.563  0.585  0.708   0.724
4-3            0.681  0.636     0.542  0.527  0.722  0.821   0.770
5-3            0.655  0.706     0.503  0.518  0.608  0.769   0.782
2-4            0.585  0.591     0.530  0.560  0.582  0.716   0.777
3-4            0.652  0.629     0.531  0.527  0.697  0.769   0.764
5-4            0.689  0.699     0.538  0.532  0.614  0.728   0.738
2-5            0.565  0.543     0.549  0.526  0.555  0.659   0.719
3-5            0.576  0.587     0.510  0.526  0.533  0.630   0.721
4-5            0.682  0.587     0.575  0.548  0.712  0.747   0.775
5-1            0.502  0.573     0.557  0.563  0.618  0.563   0.639
4-1            0.565  0.533     0.572  0.542  0.668  0.577   0.636
3-1            0.500  0.500     0.542  0.535  0.570  0.591   0.631
2-1            0.520  0.500     0.534  0.559  0.578  0.630   0.637

As the AHRF mortality prediction task made it clear that domain adaptation is necessary for inter-group adaptation, for the ICD9 multi-task prediction task, which involved data with time-steps of length 12, we focused strictly on domain adaptive models (i.e. the DANN, R-DANN, and VRADA). Table 2 shows the aggregated AUC scores on the entire target domain dataset and on the test data of the target domain for the 20 tasks of the ICD9 Code Prediction task. Here, we clearly see that the VRADA and R-DANN models outperform DANN (Ganin et al. (2016)) by significant margins. We also observe that VRADA outperforms R-DANN by 1.5~2% when averaged over all the source-target domain pairs.

Table 2: AUC Comparison for ICD9 Diagnosis Code Prediction task. Here, we compare results for the ICD9 Diagnosis Code Prediction task on the ICD9 dataset. For each model, the top row corresponds to the performance on the entire target domain dataset and the bottom row corresponds to performance on the test subset (15%) of the target domain dataset."}, {"section_index": "11", "section_name": "4.4 DISCUSSION", "section_text": "Figure 3 shows the temporal latent dependencies captured by our VRADA as compared to the R-DANN for the 3-4 source-target pair. While both models learn temporal latent dependencies fairly well, the VRADA outperforms the R-DANN in two ways. First, the VRADA's neurons learned stronger predictions of whether features are relevant towards modeling the data. If we look at the VRADA row, for both AHRF and ICD9 we see that the neural activation patterns are more consistent across time-steps than for R-DANN. Figure 4 shows the unrolled memory cell states (in the form Examples x (Time x Neurons)) for all the source and target domain data points. We see consistent activation firing patterns across all these data points for VRADA but not for R-DANN. Together with the stronger performance on 3-4 for AHRF and 2-5 for ICD9, this potentially indicates that VRADA is better at learning the temporal dependencies.

Second, nuanced values are consistent across time-steps for the VRADA, exhibiting a gradual transition towards stronger activation with time, whereas the temporal activation pattern of the R-DANN seems somewhat sporadic. While activation gradients across time are consistent for both the R-DANN and VRADA, the more consistent inhibitory and excitatory neuron firing patterns indicate that the VRADA better transfers knowledge. Another indication of domain adaptation was shown in Figure 1c. Looking at the t-SNE projections of feature representations of DNN, R-DANN, and VRADA, we can see that the addition of temporal latent dependencies might help in better mixing of the domain distributions, since we observe that the data is more evenly spread out.
Figure 1c and Figure 3 together indicate that the VRADA's temporal latent dependency capturing power and its ability to create domain-invariant representations act synergistically. For plots of activation patterns without domain adaptation, please see appendix section 6.2.3."}, {"section_index": "12", "section_name": "5 SUMMARY", "section_text": "Because of its diverse range of patients and its episodic and longitudinal nature, healthcare data provides a good platform to test domain adaptation techniques for temporal data. With it as our example, we showcase the Variational Recurrent Adversarial Domain Adaptation (VRADA) model's ability to learn temporal latent representations that are domain-invariant. By comparing our model's latent representations to others', we show its ability to use variational methods to capture hidden factors of variation and produce more robust domain-invariant representations. We hope this work serves as a bedrock for future work capturing and adapting temporal latent representations across domains."}, {"section_index": "13", "section_name": "ACKNOWLEDGMENTS", "section_text": "This material is based upon work supported by the NSF research grants IIS-1134990, IIS-1254206, a Samsung GRO Grant, and the NSF Graduate Research Fellowship Program under Grant No. DGE-1418060. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies. We also acknowledge Thailand's Development and Promotion of Science and Technology Talents Project for financial support. We thank Dr. Robinder Khemani for sharing the Child-AHRF dataset.

Figure 3: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and transferred to the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to activation per time-step. The left shows a single example in adapting 3-4 and the right in adapting 2-5.

Figure 4: Cell states of the memory cell for R-DANN and VRADA showing activation for all ICD9 2-5 adaptation examples. Here, we show temporal dependencies learned across (time, feature) pairs for examples in a domain. The y-axis values refer to values per data point and the x-axis shows activation at (time, feature) pairs, with the time and feature dimensions being flattened."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151-175, 2010.

John Blitzer. Domain adaptation of natural language processing systems. PhD thesis, University of Pennsylvania, 2007.

Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. Marginalized denoising autoencoders for domain adaptation. arXiv preprint arXiv:1206.4683, 2012.

Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment.
In Proceedings of the IEEE International Conference on Computer Vision, pp. 2960-2967, 2013.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. arXiv.org, May 2016.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495, 2014.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, December 2016.

Fei Huang and Alexander Yates. Distributional representations for handling sparsity in supervised sequence-labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, pp. 495-503. Association for Computational Linguistics, 2009.

Jing Jiang. A literature survey on domain adaptation of statistical classifiers. URL: http://sifaka.cs.uiuc.edu/jiang4/domainadaptation/survey, 2008.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv.org, December 2013.

Zhiqiang Lao, Dinggang Shen, Zhong Xue, Bilge Karacali, Susan M Resnick, and Christos Davatzikos. Morphological classification of brains via high-dimensional shape transformations and machine learning methods. Neuroimage, 21(1):46-57, 2004.

Jing Jiang and ChengXiang Zhai. Instance weighting for domain adaptation in NLP. In ACL, volume 7, pp. 264-271, 2007.

AEW Johnson, TJ Pollard, L Shen, L Lehman, M Feng, M Ghassemi, B Moody, P Szolovits, LA Celi, and RG Mark. MIMIC-III, a freely accessible critical care database. Scientific Data, 2016.

Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53-69, 2015.

Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous deep transfer across domains and tasks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4068-4076, 2015.

Min Xiao and Yuhong Guo. Domain adaptation for sequence labeling tasks with a probabilistic language adaptation model. In ICML (1), pp. 293-301, 2013.

Yi Yang and Jacob Eisenstein. Unsupervised multi-domain adaptation with feature embeddings.

We tested 3 variations of training VRADA: (a) training VRADA regularly as discussed in Section 3 (denoted by I); (b) loading a pretrained VRNN encoder and optimizing strictly off the classification errors (denoted by II), i.e.,

$$E(\theta_e, \theta_y, \theta_d) = \frac{1}{n}\sum_{i=1}^{n} L_y(x^i; \theta_y, \theta_e) - \lambda\Big(\frac{1}{n}\sum_{i=1}^{n} L_d(x^i; \theta_d, \theta_e) + \frac{1}{n'}\sum_{i=n+1}^{N} L_d(x^i; \theta_d, \theta_e)\Big)$$

and (c) loading a pretrained VRNN encoder and using the objective as presented in equation (3) (denoted by III). Key to note is that in method II, we do not apply variational methods towards learning the shared latent representation. This was done to test whether they were helpful or harmful towards the learned latent representation used for classification. In method III, we train VRADA as normal but load a pretrained encoder. We pretrain the encoder by training the VRNN on all source and target domain samples for a desired source-target adaptation pair.

In order to choose how many samples would be used for training, we looked at which domain had more examples and chose the larger of the two. For example, if the source domain was group 2 with 508 patients and the target domain was group 5 with 437 patients, the VRNN would see 508 samples of each domain, with group 5 being sampled with replacement after seeing all its samples. As the encoder was used for learning latent representations, we thought it worth investigating whether, if pretrained, it better captured the latent representations that were being used by the domain classifier for adversarial training. We thought beginning domain classification at a better initialization point might help VRADA avoid local minima. For each method, we fed one source domain sample to G_y and either a source or target domain sample to G_d. (For this training and all training samples, order was randomized.) We only calculated the loss L_r once for the G_d samples so as to not bias the optimization of the VRNN.
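A small numpy sketch of this balanced sampling-with-replacement scheme (an illustrative helper of our own, not the authors' code):

```python
import numpy as np

def balanced_pairs(n_src, n_tgt, rng=np.random.default_rng(0)):
    """Yield index pairs so pretraining sees max(n_src, n_tgt) samples of each domain;
    the smaller domain is re-sampled with replacement once it is exhausted."""
    n = max(n_src, n_tgt)
    src = rng.permutation(n_src) if n_src == n else rng.integers(0, n_src, size=n)
    tgt = rng.permutation(n_tgt) if n_tgt == n else rng.integers(0, n_tgt, size=n)
    return list(zip(src, tgt))

pairs = balanced_pairs(508, 437)   # e.g., group 2 (source) -> group 5 (target)
```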
Table 3 shows the results of the AHRF Mortality Prediction task for the different types of VRADA training. From these experiments, we found that jointly training VRADA (i.e., method I) usually performed better than the other, pretrained training approaches.

Table 3: AUC Comparison for AHRF Mortality Prediction task for different types of VRADA training

Training  2-3    2-4    2-5    3-2    3-4    3-5    4-2    4-3    4-5    5-2    5-3    5-4
I         0.704  0.777  0.682  0.540  0.764  0.721  0.603  0.727  0.710  0.616  0.782  0.738
II        0.724  0.656  0.719  0.627  0.748  0.683  0.656  0.770  0.755  0.595  0.736  0.732
III       0.721  0.688  0.656  0.654  0.757  0.691  0.609  0.766  0.775  0.602  0.709  0.714

A natural question is whether adversarial training at every time-step is more effective than adversarial training at the last time-step of a latent representation. If done at every time-step, the network learns to create domain-invariant representations of subsets of the input x_{<=T}. Do these domain-invariant representations help the network find more optimal domain-invariant representations of x? We empirically tested this scenario (Table 4) and found the results to be sub-optimal when compared to only performing adversarial training at the last time-step (Table 1). Below are the results for the R-DANN and VRADA models with adversarial training at every time-step.

Table 4: AUC Comparison for AHRF Mortality Prediction task with adversarial training done at every time-step

Model   2-3   2-4   2-5   3-2   3-4   3-5   4-2   4-3   4-5   5-2   5-3   5-4
R-DANN  .651  .599  .598  .557  .679  .534  .563  .768  .588  .528  .696  .669
VRADA   .681  .691  .643  .594  .733  .641  .733  .794  .675  .583  .755  .726"}, {"section_index": "15", "section_name": "6.2.2 EFFECT OF RECONSTRUCTION LOSS", "section_text": "Table 5 shows the effect of the reconstruction loss for our VRADA model. We observe that reconstructing the original data (i.e. using the decoder to reconstruct the data) helps the overall performance of our VRADA model.

Table 5: AUC Comparison of the VRADA model for the AHRF Mortality Prediction task with and without reconstruction loss

In figures 5 and 6 we show the cell state activations for the VRADA and R-DANN without domain adaptation (i.e. no adversarial training). From these figures, we see that the dependencies between source and target domains are not transferred correctly, since we do not perform adversarial training. On the other hand, as discussed in section 4.4, figure 3 shows that adversarial training helps in transferring the dependencies between source and target domains efficiently."}, {"section_index": "16", "section_name": "6.3 R-DANN MODEL INFORMATION", "section_text": "Here we provide more details on the network architectures of the R-DANN and DANN. Please refer
to Figure 7 for a diagram of the R-DANN model showing the dimensions of each layer and the connections between layers. The R-DANN and DANN were essentially identical except that, for the DANN, the first layer used a fully-connected layer instead of an RNN and took input flattened over the time-dimension. Thus the input dimensions corresponded to f and t*f for the R-DANN and DANN, respectively, where f is the number of features and t is the length of the time-dimension."}, {"section_index": "17", "section_name": "VRADA", "section_text": "Figure 5: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to activation per time-step. The figure shows a single example in adapting 3-4 for the AHRF dataset."}, {"section_index": "18", "section_name": "VRADA", "section_text": "Figure 6: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to activation per time-step. The figure shows a single example in adapting 2-5 for the ICD9 dataset.

Figure 7: Block diagram of the R-DANN showing the number of neurons used in each layer and how the layers were connected. This model had a capacity of about 46,000 parameters. (The diagram shows an RNN with output size 100, followed by three fully-connected layers of width 100, which split into two branches of three width-50 fully-connected layers, each ending in a single output unit.)"}]
BJ_MGwqlg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Recently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide array of AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannun et al. (2014), and language understanding Sutskever et al. (2014). In addition to algorithmic innovations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), a key driver behind these successes are advances in computing infrastructure that enable large-scale deep learning--the training and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al. (2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first breakthrough of deep learning for image classification Krizhevsky et al. (2012). Given the ever growing amount of data available for indexing, analysis, and training, and the increasing prevalence of ever-larger DNNs as key building blocks for AI applications, it is critical to design computing platforms to support faster, more resource-efficient DNN computation.

A set of core design decisions are common to the design of these infrastructures. One such critical choice is the numerical representation and precision used in the implementation of underlying storage and computation. Several recent works have investigated the numerical representation for DNNs Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015). One recent work found that substantially lower precision can be used for training when the correct numerical rounding method is employed Gupta et al. (2015). Their work resulted in the design of a very energy-efficient DNN platform.

This work and other previous numerical representation studies for DNNs have either limited themselves to a small subset of the customized precision design space or drew conclusions using only small neural networks. For example, the work from Gupta et al. (2015) evaluates 16-bit fixed-point and wider computational precision on LeNet-5 LeCun et al. (1998) and CIFARNET Krizhevsky & Hinton (2009). The fixed-point representation (Figure 1) is only one of many possible numeric representations. Exploring a limited customized precision design space inevitably results in designs lacking in energy efficiency and computational performance. Evaluating customized precision accuracy based on small neural networks requires the assumption that much larger, production-grade neural networks would operate comparably when subjected to the same customized precision."}, {"section_index": "1", "section_name": "RETHINKING NUMERICAL REPRESENTATIONS FOR DEEP NEURAL NETWORKS", "section_text": "Parker Hill, Babak Zamirai, Shengshuo Lu, Yu-Wei Chao, Michael Laurenzano, Mehrzad Samadi, Marios Papaefthymiou, Scott Mahlke, Thomas Wenisch, Jia Deng, Lingjia Tang, Jason Mars"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: A fixed-point representation. Hardware parameters include the total number of bits and the position of the radix point.

In this work, we explore the accuracy-efficiency trade-off made available via specialized custom-precision hardware for inference and present a method to efficiently traverse this large design space to find an optimal design. Specifically, we evaluate the impact of a wide spectrum of customized precision settings for fixed-point and floating-point representations on accuracy and computational performance.
We evaluate these customized precision configurations on large, state-of-the-art neural networks. By evaluating the full computational precision design space on a spectrum of these production-grade DNNs, we find that:

1. Precision requirements do not generalize across all neural networks. This prompts designers of future DNN infrastructures to carefully consider the applications that will be executed on their platforms, contrary to works that design for large networks and evaluate accuracy on small networks Cavigelli et al. (2015); Chen et al. (2014).

2. Many large-scale DNNs require considerably more precision for fixed-point arithmetic than previously found from small-scale evaluations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014). For example, we find that GoogLeNet requires on the order of 40 bits when implemented with fixed-point arithmetic, as opposed to less than 16 bits for LeNet-5.

3. Floating-point representations are more efficient than fixed-point representations when selecting optimal precision settings. For example, a 17-bit floating-point representation is acceptable for GoogLeNet, while over 40 bits are required for the fixed-point representation - a more expensive computation than the standard single precision floating-point format. Current platform designers should reconsider the use of floating-point representations for DNN computations instead of the commonly used fixed-point representations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015).

To make these conclusions on large-scale customized precision design readily actionable for DNN infrastructure designers, we propose and validate a novel technique to quickly search the large customized precision design space. This technique leverages the activations in the last layer to build a model to predict accuracy, based on the insight that these activations effectively capture the propagation of numerical error from computation. Using this method on deployable DNNs, including GoogLeNet Szegedy et al. (2015) and VGG Simonyan & Zisserman (2014), we find that using these recommendations to introduce customized precision into a DNN accelerator fabric results in an average speedup of 7.6x with less than 1% degradation in inference accuracy."}, {"section_index": "3", "section_name": "2.1 DESIGN SPACE", "section_text": "We consider three aspects of customized precision number representations. First, we contrast the high-level choice between fixed-point and floating-point representations. Fixed-point binary arithmetic is computationally identical to integer arithmetic, simply changing the interpretation of each bit position. Floating-point arithmetic, however, represents the sign, mantissa, and exponent of a real number separately. Floating-point calculations involve several steps absent in integer arithmetic. In particular, addition operations require aligning the mantissas of each operand. As a result, floating-point computation units are substantially larger, slower, and more complex than integer units.

In CPUs and GPUs, available sizes for both integers and floating-point calculations are fixed according to the data types supported by the hardware. Thus, the second aspect of precision customization we examine is to consider customizing the number of bits used in representing floating-point and fixed-point numbers. Third, we may vary the interpretation of fixed-point numbers and the assignment of bits to the mantissa and exponent in a floating-point value."}, {"section_index": "4", "section_name": "2.2 CUSTOMIZED PRECISION TYPES", "section_text": "In a fixed-point representation, we select the number of bits as well as the position of the radix point, which separates integer and fractional bits, as illustrated in Figure 1. A bit array x encoded in fixed point represents the value obtained by interpreting x as a signed integer and scaling it by 2^{-f}, where f is the number of fractional bits; the available precision is therefore fixed at 2^{-f}.
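As a concrete illustration of this encoding, here is a small numpy sketch of fixed-point quantization with rounding and saturation; the function name and conventions are our own:

```python
import numpy as np

def to_fixed(x, n_bits=16, n_frac=8):
    """Quantize to signed fixed point: scale by 2**n_frac, round, then saturate
    to the representable range [-2**(n_bits-1), 2**(n_bits-1) - 1]."""
    lo, hi = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
    q = np.clip(np.round(x * (1 << n_frac)), lo, hi)
    return q / (1 << n_frac)          # the real value the bit pattern represents

print(to_fixed(3.14159))   # ~3.140625, rounded to the nearest multiple of 1/256
print(to_fixed(300.0))     # ~127.996, saturated: 300 exceeds the 16-bit, 8.8 range
```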
Figure 2: A floating-point representation. Hardware parameters include the number of mantissa and exponent bits, and the bias.

An example floating-point representation is depicted in Figure 2. As shown in the figure, there are three parameters to select when designing a floating-point representation: the bit-width of the mantissa, the bit-width of the exponent, and an exponent bias. The widths of the mantissa and exponent control precision and dynamic range, respectively. The exponent bias adjusts the offset of the exponent (which is itself represented as an unsigned integer) relative to zero to facilitate positive and negative exponents. Finally, an additional bit represents the sign. Thus, a floating-point format with N_m mantissa bits, N_e exponent bits, and a bias of b encodes the value (-1)^s x 2^{e-b} x 1.m, where s is the sign bit and m and e are the mantissa and exponent, respectively. Note that the leading bit of the mantissa is assumed to be 1 and hence is not explicitly stored, eliminating redundant encodings of the same value. A single-precision value in the IEEE-754 standard (i.e. float) comprises 23 mantissa bits, 8 exponent bits, and a sign bit. IEEE-754 standardized floating-point formats include special encodings for specific values, such as zero and infinity.

Both fixed-point and floating-point representations have limitations in terms of the precision and the dynamic ranges available given particular representations, manifesting themselves computationally as rounding and saturation errors.
These errors propagate through the deep neural network in a way that is difficult to estimate holistically, prompting experimentation on the DNN itself."}, {"section_index": "5", "section_name": "2.3 HARDWARE IMPLICATIONS", "section_text": "The key hardware building block for implementing DNNs is the multiply-accumulate (MAC) operation. The MAC operation implements the sum-of-products operation that is fundamental to the activation of each neuron. We show a high-level hardware block diagram of a MAC unit in Figure 3 (a). Figure 3 (b) adds detail for the addition operation, the more complex of the two operations. As seen in the figure, floating-point addition operations involve a number of sub-components that compare exponents, align mantissas, perform the addition, and normalize the result. Nearly all of the sub-components of the MAC unit scale in speed, power, and area with the bit width.

Figure 3: Floating point multiply-accumulate (MAC) unit with various levels of detail: (a) the high-level mathematical operation, (b) the modules that form a floating point MAC, and (c) the signal propagation of the unit.

Reducing the floating-point bit width improves hardware performance in two ways. First, reduced bit width makes a computation unit faster. Binary arithmetic computations involve chains of logic operations that typically grow at least logarithmically, and sometimes linearly (e.g., the propagation of carries in an addition, see Figure 3 (c)), in the number of bits. Reducing the bit width reduces the length of these chains, allowing the logic to operate at a higher clock frequency. Second, reduced bit width makes a computation unit smaller and require less energy, typically linearly in the number of bits. The circuit delay and area are shown in Figure 4 as the mantissa bit width is varied. As shown in the figure, scaling the length of the mantissa provides substantial opportunity because it defines the size of the internal addition unit. Similar trends follow for bit-widths in other representations. When a unit is smaller, more replicas can fit within the same chip area and power budget, all of which can operate in parallel. Hence, for computations like those in DNNs, where ample parallelism is available, area reductions translate into proportional performance improvements.

Figure 4: Delay and area implications of mantissa width, normalized to a 32-bit Single Precision MAC with 23 mantissa bits.

This trend of bit width versus speed, power, and area is applicable to every computation unit in hardware DNN implementations. Thus, in designing hardware that uses customized representations, there is a trade-off between accuracy on the one hand and power, area, and speed on the other. Our goal is to use precision that delivers sufficient accuracy while attaining large improvements in power, area, and speed over standard floating-point designs."}, {"section_index": "6", "section_name": "3 METHODOLOGY", "section_text": "We describe the methodology we use to evaluate the customized precision design space, using image classification tasks of varying complexity as a proxy for computer vision applications. We evaluate DNN implementations using several metrics: classification accuracy, speedup, and energy savings relative to a baseline custom hardware design that uses single-precision floating-point representations.
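For readers who want to experiment, the following is a rough numpy approximation of rounding values into such a custom floating-point format (this is our own behavioral sketch - Section 3.1 below describes the actual methodology of truncating values inside Caffe - and it ignores subnormals and special values):

```python
import numpy as np

def quantize_float(x, n_man=7, n_exp=6):
    """Round x to a float with n_man explicit mantissa bits and n_exp exponent bits
    (IEEE-style bias 2**(n_exp - 1) - 1). Out-of-range exponents saturate."""
    bias = (1 << (n_exp - 1)) - 1
    x = np.asarray(x, dtype=np.float64)
    mag = np.abs(x)
    e = np.where(mag > 0, np.floor(np.log2(np.where(mag > 0, mag, 1.0))), 0.0)
    e = np.clip(e, -bias, bias)          # exponent saturation at both ends
    step = 2.0 ** (e - n_man)            # spacing between representable values
    q = np.round(mag / step) * step      # round the mantissa to n_man bits
    max_val = (2.0 - 2.0 ** -n_man) * 2.0 ** bias
    return np.sign(x) * np.minimum(q, max_val)

print(quantize_float(0.1), quantize_float(1e20))  # rounded value; saturated maximum
```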
Using the results of this analysis, we propose and validate a search technique to efficiently determine the correct customized precision design point."}, {"section_index": "7", "section_name": "3.1 ACCURACY", "section_text": "We evaluate accuracy by modifying the Caffe Jia et al. (2014) deep learning framework to perform calculations with arbitrary fixed-point and floating-point formats. We continue to store values as C floats in Caffe, but truncate the mantissa and exponent to the desired format after each arithmetic operation. Accuracy, using a set of test inputs disjoint from the training input set, is then measured by running the forward pass of a DNN model with the customized format and comparing the outputs with the ground truth. We use the standard accuracy metrics that accompany the dataset for each DNN. For MNIST (LeNet-5) and CIFAR-10 (CIFARNET) we use top-1 accuracy and for ImageNet (GoogLeNet, VGG, and AlexNet) we use top-5 accuracy. Top-1 accuracy denotes the percent of inputs that the DNN predicts correctly after a single prediction attempt, while top-5 accuracy represents the percent of inputs that the DNN predicts correctly after five attempts."}, {"section_index": "8", "section_name": "3.2 EFFICIENCY", "section_text": "We quantify the efficiency advantages of customized floating-point representations by designing a floating-point MAC unit in each candidate precision and determining its silicon area and delay characteristics. We then report speedup and energy savings relative to a baseline custom hardware implementation of a DNN that uses standard single-precision floating-point computations. We design each variant of the MAC unit using Synopsys Design Compiler and Synopsys PrimeTime, industry-standard ASIC design tools, targeting a commercial 28nm silicon manufacturing process. The tools report the power, delay, and area characteristics of each precision variant. As shown in Figure 5, we compute speedups and energy savings relative to the standardized IEEE-754 floating-point representation considering both the clock frequency advantage and the improved parallelism due to the area reduction of the narrower bit-width MAC units. This allows customized precision designs to yield a quadratic improvement in total system throughput."}, {"section_index": "9", "section_name": "3.3 EFFICIENT CUSTOMIZED PRECISION SEARCH", "section_text": "To exploit the benefits of customized precision, a mechanism to select the correct configuration must be introduced. There are hundreds of designs among floating-point and fixed-point formats, since designs vary by the total bit width and the allocation of those bits. This spectrum of designs strains the ability to select an optimal configuration. A straightforward approach to select the customized precision design point is to exhaustively compute the accuracy of each design with a large number of neural network inputs. This strategy requires substantial computational resources that are proportional to the size of the network and the variety of output classifications. We describe our technique that significantly reduces the time required to search for the correct configuration in order to facilitate the use of customized precision.

The key insight behind our search method is that customized precision impacts the underlying internal computation, which is hidden by evaluating only the NN final accuracy metric.
Thus, instead of comparing the final accuracy generated by networks with different precision configurations, we compare the original NN activations to the customized precision activations. This circumvents the need to evaluate the large number of inputs required to produce representative neural network accuracy. Furthermore, instead of examining all of the activations, we only analyze the last layer, since the last layer captures the usable output from the neural network as well as the propagation of lost accuracy. Our method summarizes the differences between the last layers of two configurations by calculating the linear coefficient of determination between the last-layer activations.

Figure 5: Speedup calculation with a fixed area budget. The speedup exploits the improved function delay and parallelism.

A method to translate the coefficient of determination to a more desirable metric, such as end-to-end inference accuracy, is necessary. We find that a linear model provides such a transformation. The customized precision setting with the highest speedup that meets a specified accuracy threshold is then selected. In order to account for slight inaccuracies in the model, inference accuracy for a subset of configurations is evaluated. If the configuration provided by the accuracy model results in insufficient accuracy, then an additional bit is added and the process repeats. Similarly, if the accuracy threshold is met, then a bit is removed from the customized precision format."}, {"section_index": "10", "section_name": "4 EXPERIMENTS", "section_text": "In this section, we evaluate five common neural networks spanning a range of sizes and depths in the context of customized precision hardware. We explore the trade-off between accuracy and efficiency when various customized precision representations are employed. Next, we address the sources of accuracy degradation when customized precision is utilized. Finally, we examine the characteristics of our customized precision search technique.

We evaluate the accuracy of customized precision operations on five DNNs: GoogLeNet Szegedy et al. (2015), VGG Simonyan & Zisserman (2014), AlexNet Krizhevsky et al. (2012), CIFARNET Krizhevsky & Hinton (2009), and LeNet-5 LeCun et al. (1998). The implementations and pre-trained weights for these DNNs were taken from Caffe Jia et al. (2014). The three largest DNNs (GoogLeNet, VGG, and AlexNet) represent real-world workloads, while the two smaller DNNs (CIFARNET and LeNet-5) are the largest DNNs evaluated in prior work on customized precision. For each DNN, we use the canonical benchmark validation set: ImageNet for GoogLeNet, VGG, and AlexNet; CIFAR-10 for CIFARNET; MNIST for LeNet-5. We utilize the entire validation set for all experiments, except for GoogLeNet and VGG experiments involving the entire design space. In these cases we use a randomly-selected 1% of the validation set to make the experiments tractable.

Figure 6: The inference accuracy versus speedup design space for each of the neural networks, showing substantial computational performance improvements for minimal accuracy degradation when customized precision floating-point formats are used.

To evaluate the benefits of customized precision hardware, we swept the design space for accuracy and performance characteristics.
This performance-accuracy trade-off is shown in Figure 6. The figure shows the DNN inference accuracy across the full input set versus the speedup for each of the five DNN benchmarks. The black star represents the IEEE 754 single precision representation (i.e. the original accuracy with 1x speedup), while the red circles and blue triangles represent the complete set of our customized precision floating-point and fixed-point representations, respectively.

For GoogLeNet, VGG, and AlexNet it is clear that the floating-point format is superior to the fixed-point format. In fact, the standard single precision floating-point format is faster than all fixed-point configurations that achieve above 40% accuracy. Although fixed-point computation is simpler and faster than floating-point computation when the number of bits is fixed, customized precision floating-point representations are more efficient because fewer bits are needed for similar accuracy.

Figure 8: The accumulation of weighted neuron inputs for a specific neuron with various customized precision DNNs as well as the IEEE 754 single precision floating point configuration for reference. FL and FI are used to abbreviate floating point and fixed point, respectively. The format parameters are as follows: M=mantissa, E=exponent, L=bits left of radix point, R=bits right of radix point.

By comparing the results across the five different networks in Figure 6, it is apparent that the size and structure of the network impacts the customized precision flexibility of the network. This insight suggests that hardware designers should carefully consider which neural network(s) they expect their device to execute as one of the fundamental steps in the design process. The impact of network size on accuracy is discussed in further detail in the following section.

The specific impact of bit assignments on performance and energy efficiency is illustrated in Figure 7. This figure shows the speedup and energy improvements over the single precision floating-point representation as the number of allocated bits is varied. For the floating-point representations, the number of bits allocated for the mantissa (x-axis) and exponent (y-axis) are varied. For the fixed-point representations, the number of bits allocated for the integer (x-axis) and fraction (y-axis) are
varied. We highlight a region in the plot deemed to have acceptable accuracy. In this case, we define acceptable accuracy to be 99% normalized AlexNet accuracy (i.e., no less than a 1% degradation in accuracy from the IEEE 754 single precision accuracy on classification in AlexNet).

The fastest and most energy efficient representation occurs at the bottom-left corner of the region with acceptable accuracy, since a minimal number of bits are used. The configuration with the highest performance that meets this requirement is a floating-point representation with 6 exponent bits and 7 mantissa bits, which yields a 7.2x speedup and a 3.4x savings in energy over the single precision IEEE 754 floating-point format. If a more stringent accuracy requirement is necessary, 0.3% accuracy degradation, the representation with one additional bit in the mantissa can be used, which achieves a 5.7x speedup and 3.0x energy savings."}, {"section_index": "11", "section_name": "4.3 SOURCES OF ACCUMULATION ERROR", "section_text": "In order to understand how customized precision degrades DNN accuracy among numeric representations, we examine the impact of various reduced precision computations on a neuron. Figure 8 presents the serialized accumulation of neuron inputs in the third convolution layer of AlexNet. The x-axis represents the number of inputs that have been accumulated, while the y-axis represents the current value of the running sum. The black line represents the original DNN computation, a baseline for customized precision settings to match. We find two causes of error between the customized precision fixed-point and floating-point representations: saturation and excessive rounding.

Figure 7: The speedup and energy savings as the two parameters are adjusted for the custom floating-point and fixed-point representations. The marked area denotes configurations where the total loss in AlexNet accuracy is less than 1%.

Figure 9: The linear fit from the correlation between normalized accuracy and last-layer activations of the exact and customized precision DNNs.

In the fixed-point case (green line, representing 16 bits with the radix point in the center), the central cause of error is from saturation at the extreme values. The running sum exceeds 255, the maximum representable value in this representation, after 60 inputs are accumulated, as seen in the figure.
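This saturation effect is easy to reproduce in a few lines; the following toy numpy demo uses synthetic inputs under the same 16-bit, centered-radix convention (an illustration of ours, not the paper's AlexNet trace):

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(loc=4.0, scale=2.0, size=200)     # synthetic weighted inputs

max_val = 2.0 ** 8 - 2.0 ** -8                        # 16 bits, radix point centered
acc_exact = acc_fixed = 0.0
for i, v in enumerate(inputs, start=1):
    acc_exact += v
    s = np.round((acc_fixed + v) * 256.0) / 256.0     # keep 8 fractional bits
    acc_fixed = float(np.clip(s, -max_val, max_val))  # saturating accumulate
    if acc_fixed == max_val:
        print(f"saturated after {i} inputs: exact={acc_exact:.1f}, fixed={acc_fixed:.3f}")
        break
```

Once the clamp engages, every further positive contribution is lost, which is why the saturated running sum in Figure 8 diverges permanently from the exact one.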
Figure 10: The speedup achieved by selecting the customized precision using an exhaustive search (i.e. the ideal design) and prediction using the accuracy model with accuracy evaluated for some number of configurations (model + X samples). The floating-point (FL) and fixed-point (FI) results are shown in the top and bottom rows, respectively. The model with two evaluated designs produces the same configurations, but requires <0.6% of the search time.

For the next case, the floating-point configuration with 2 bits and 14 bits for the mantissa and exponent (blue line), respectively, we find that the lack of precision for large values causes excessive rounding errors. As shown in the figure, after accumulating 120 inputs, this configuration's running sum exceeds 256, which limits the minimum adjustment in magnitude to 64 (the exponent normalizes the mantissa to 256, so the two mantissa bits represent 128 and 64). Finally, one of the customized precision types that has high performance and accuracy for AlexNet, 8 mantissa bits and 6 exponent bits (red line), is shown as well. This configuration almost perfectly matches the IEEE 754 floating-point configuration, as expected based on the final output accuracy.

The other main cause of accuracy loss is from values that are too small to be encoded as a non-zero value in the chosen customized precision configuration. These values, although not critical during addition, cause significant problems when multiplied with a large value, since the output should be encoded as a non-zero value in the specific precision setting. We found that the weighted input is minimally impacted, until the precision is reduced low enough for the weight to become zero.

While it may be intuitive based on these results to apply different customized precision settings to various stages of the neural network in order to mitigate the sudden loss in accuracy, the realizable gains of multi-precision configurations present significant challenges. The variability between units will cause certain units to be unused during specific layers of the neural network, causing gains to diminish (e.g., 11-bit units are idle when 16-bit units are required for a particular layer). Also, application-specific hardware design is already an extensive process, and multiple customized precision configurations increase the difficulty of the hardware design and verification process."}, {"section_index": "12", "section_name": "4.4 CUSTOMIZED PRECISION SEARCH", "section_text": "Now we evaluate our proposed customized precision search method. The goal of this method is to significantly reduce the required time to navigate the customized precision design space and still provide an optimal design choice in terms of speedup, limited by an accuracy constraint.

Correlation model. First, we present the linear correlation-accuracy model in Figure 9, which shows the relationship between the normalized accuracy of each setting in the design space and the correlation between its last-layer activations compared to those of the original NN. This model, although built using all of the customized precision configurations from the AlexNet, CIFARNET, and LeNet-5 neural networks, produces a good fit with a correlation of 0.96. It is important that the model matches across networks and precision design choices (e.g., floating point versus fixed point), since creating this model for each DNN individually requires as much time as exhaustive search.

Validation. To validate our search technique, Figure 10 presents the accuracy-speedup trade-off curves from our method compared to the ideal design points. We first obtain optimal results via
We first obtain optimal results via\nAfter reaching saturation, the positive values are discarded and the final output is unpredictable Although floating-point representations do not saturate as easily, the floating-point configuration. with 10 mantissa bits and 4 exponent bits (orange line) saturates after accumulating 1128 inputs. Again. the lost information from saturation causes an unpredictable final output..\n14x- 12x 10x dnpaadd 8x 6X 4x 2x GoogLeNet VGG AlexNet CIFARNET LeNet-5 GEOMEAN Neural Network\nFigure 11: The speedup resulting from searching for the fastest setting with less than 1% inferenc accuracy degradation. All selected customized precision DNNs meet this accuracy constraint.\nexhaustive search. We present our search with a variable number of refinement iterations, wher we evaluate the accuracy of the current design point and adjust the precision if necessary. To verif robustness, the accuracy models were generated using cross-validation where all configurations i the DNN being searched are excluded (e.g., we build the AlexNet model with LeNet and CIFAR NET accuracy/correlation pairs). The prediction is made using only ten randomly selected inputs a tiny subset compared that needed for classification accuracy, some of which are even incorrectl classified by the original neural network. Thus, the cost of prediction using the model is negligible\nWe observe that, in all cases, the accuracy model combined with the evaluation of just two cu. tomized precision configurations provides the same result as the exhaustive search. Evaluating tv. designs out of 340 is 170 faster than exhaustively evaluating all designs. When only one coi. figuration is evaluated instead of two (i.e. a further 50% reduction is search time), the selecte. customized precision setting never violates the target accuracy, but concedes a small amount of pe. formance. Finally, we note that our search mechanism, without evaluating inference accuracy f. any of the design points, provides a representative prediction of the optimal customized precisic. setting. Although occasionally violating the target accuracy (i.e. the cases where the speedup igher than the exhaustive search), this prediction can be used to gauge the amenability of the N. o customized precision without investing any considerable amount of time in experimentation.\nSpeedup. We present the final speedup produced by our search method in Figure |11|when the algorithm is configured for 99% target accuracy and to use two samples for refinement. In all cases, the chosen customized precision configuration meets the targeted accuracy constraint. In most cases, we find that the larger networks require more precision (DNNs are sorted from left to right in descending order based on size). VGG requires less precision than expected, but VGG also uses smaller convolution kernels than all of the other DNNs except LeNet-5."}, {"section_index": "13", "section_name": "5 RELATED WORK", "section_text": "To the best of our knowledge, our work is the first to examine the impact of numeric representations. on the accuracy-efficiency trade-offs on large-scale, deployed DNNs with over half a million neu rons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smaller. networks such as CIFARNET and LeNet-5 Cavigelli et al.(2015); Chen et al.(2014);Courbariaux et al.(2014);Du et al.(2014);Gupta et al.(2015); Muller & Indiveri(2015). 
"}, {"section_index": "13", "section_name": "5 RELATED WORK", "section_text": "To the best of our knowledge, our work is the first to examine the impact of numeric representations on the accuracy-efficiency trade-offs of large-scale, deployed DNNs with over half a million neurons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smaller networks such as CIFARNET and LeNet-5 Cavigelli et al. (2015); Chen et al. (2014); Courbariaux et al. (2014); Du et al. (2014); Gupta et al. (2015); Muller & Indiveri (2015). Many of these works focused on fixed-point computation, since fixed-point representations work well on such small-scale DNNs.

Other recent works have looked at alternative neural network implementations such as spiking neural networks for more efficient hardware implementation Conti & Benini (2015); Diehl & Cook (2014). This is a very different computational model that requires redevelopment of standard DNNs, unlike our proposed methodologies. Other works have proposed several approaches to improve performance and reduce energy consumption of deep neural networks by taking advantage of the fact that DNNs usually contain redundancies Chen et al. (2015); Figurnov et al. (2015)."}, {"section_index": "14", "section_name": "6 CONCLUSION", "section_text": "In this work, we introduced the importance of carefully considering customized precision when realizing neural networks. We show that using the IEEE 754 single-precision floating-point representation in hardware results in surrendering substantial performance. On the other hand, picking a configuration that has lower precision than optimal will result in severe accuracy loss. By reconsidering the representation from the ground up in designing custom precision hardware and using our search technique, we find an average speedup across deployable DNNs, including GoogLeNet and VGG, of 7.6x with less than 1% degradation in inference accuracy."}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "Lukas Cavigelli, David Gschwend, Christoph Mayer, Samuel Willi, Beat Muheim, and Luca Benini. Origami: A convolutional network accelerator. In 25th Edition of the Great Lakes Symposium on VLSI, pp. 199-204, 2015.

Wenlin Chen, James Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In 32nd International Conference on Machine Learning, pp. 2285-2294, 2015.

Yunji Chen, Tao Luo, Shaoli Liu, Shijin Zhang, Liqiang He, Jia Wang, Ling Li, Tianshi Chen, Zhiwei Xu, Ninghui Sun, et al. DaDianNao: A machine-learning supercomputer. In 47th International Symposium on Microarchitecture, pp. 609-622, 2014.

Francesco Conti and Luca Benini. A ultra-low-energy convolution engine for fast brain-inspired vision in multicore clusters. In Design, Automation & Test in Europe, 2015.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Low precision arithmetic for deep learning. CoRR, abs/1412.7024, 2014. URL http://arxiv.org/abs/1412.7024.

Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pp. 1223-1231, 2012.

Peter U Diehl and Matthew Cook. Efficient implementation of STDP rules on SpiNNaker neuromorphic hardware. In International Joint Conference on Neural Networks, 2014.

Zidong Du, Krishna Palem, Avinash Lingamneni, Olivier Temam, Yunji Chen, and Chengyong Wu. Leveraging the error resilience of machine-learning applications for designing highly energy efficient accelerators. In 19th Asia and South Pacific Design Automation Conference, 2014.

Clement Farabet, Camille Couprie, Laurent Najman, and Yann LeCun. Learning hierarchical features for scene labeling. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1915-1929, 2013.

Michael Figurnov, Dmitry Vetrov, and Pushmeet Kohli. PerforatedCNNs: Acceleration through elimination of redundant convolutions. arXiv preprint arXiv:1504.08362, 2015.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1737-1746, 2015.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep, 1(4):7, 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Lorenz K Muller and Giacomo Indiveri. Rounding methods for neural networks with low resolution synaptic weights. arXiv preprint arXiv:1504.05767, 2015.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In 27th International Conference on Machine Learning, 2010.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.

Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lars Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Computer Vision and Pattern Recognition (CVPR), pp. 1701-1708, 2014."}]
r1aPbsFle | [{"section_index": "0", "section_name": "TYING WORD VECTORS AND WORD CLASSIFIERS: A LOSS FRAMEWORK FOR LANGUAGE MODELING", "section_text": "Hakan Inan, Khashayar Khosravi
{inanh,khosravi}@stanford.edu"}, {"section_index": "1", "section_name": "Richard Socher", "section_text": ""}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning, both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models."}, {"section_index": "3", "section_name": "1 INTRODUCTION", "section_text": "Neural network models have recently made tremendous progress in a variety of NLP applications such as speech recognition (Irie et al., 2016), sentiment analysis (Socher et al., 2013), text summarization (Rush et al., 2015; Nallapati et al., 2016), and machine translation (Firat et al., 2016).

Despite the overwhelming success achieved by recurrent neural networks in modeling long range dependencies between words, current recurrent neural network language models (RNNLM) are based on the conventional classification framework, which has two major drawbacks: First, there is no assumed metric on the output classes, whereas there is evidence suggesting that learning is improved when one can define a natural metric on the output space (Frogner et al., 2015). In language modeling, there is a well established metric space for the outputs (words in the language) based on word embeddings, with meaningful distances between words (Mikolov et al., 2013; Pennington et al., 2014). Second, in the classical framework, inputs and outputs are considered as isolated entities with no semantic link between them. This is clearly not the case for language modeling, where inputs and outputs in fact live in identical spaces. Therefore, even for models with moderately sized vocabularies, the classical framework could be a vast source of inefficiency in terms of the number of variables in the model, and in terms of utilizing the information gathered by different parts of the model (e.g. inputs and outputs).

In this work, we introduce a novel loss framework for language modeling to remedy the above two problems. Our framework is comprised of two closely linked improvements. First, we augment the classical cross-entropy loss with an additional term which minimizes the KL-divergence between the model's prediction and an estimated target distribution based on the word embedding space. This estimated distribution uses knowledge of word vector similarity. We then theoretically analyze this loss, and this leads to a second and synergistic improvement: tying together two large matrices by reusing the input word embedding matrix as the output classification matrix. We empirically validate our theory in a practical setting, with much milder assumptions than those in theory. We also find empirically that for large networks, most of the improvement could be achieved by only reusing the word embeddings.
We test our framework by performing extensive experiments on the Penn Treebank corpus, a dataset widely used for benchmarking language models (Mikolov et al., 2010; Merity et al., 2016). We demonstrate that models trained using our proposed framework significantly outperform models trained using the conventional framework. We also perform experiments on the newly introduced Wikitext-2 dataset (Merity et al., 2016), and verify that the empirical performance of our proposed framework is consistent across different datasets.

In any variant of recurrent neural network language model (RNNLM), the goal is to predict the next word indexed by t in a sequence of one-hot word tokens (y*_1, ..., y*_N) as follows:

x_t = L y*_{t-1},            (2.1)
h_t = f(x_t, h_{t-1}),       (2.2)
y_t = softmax(W h_t + b).    (2.3)

The matrix L ∈ R^{d_x×|V|} is the word embedding matrix, where d_x is the word embedding dimension and |V| is the size of the vocabulary. The function f(·,·) represents the recurrent neural network, which takes in the current input and the previous hidden state and produces the next hidden state. W ∈ R^{|V|×d_h} and b ∈ R^{|V|} are the output projection matrix and the bias, respectively, and d_h is the size of the RNN hidden state. The |V|-dimensional y_t models the discrete probability distribution for the next word.

Note that the above formulation does not make any assumptions about the specifics of the recurrent neural units, and f could be replaced with a standard recurrent unit, a gated recurrent unit (GRU) (Cho et al., 2014), a long-short term memory (LSTM) unit (Hochreiter & Schmidhuber, 1997), etc. For our experiments, we use LSTM units with two layers.

Given y_t for the tth example, a loss is calculated for that example. The loss used in RNNLMs is almost exclusively the cross-entropy between y_t and the observed one-hot word token y*_t:

J_t = CE(y*_t || y_t) = - Σ_{i∈|V|} y*_{t,i} log y_{t,i}.    (2.4)

We shall refer to y_t as the model prediction distribution for the tth example, and y*_t as the empirical target distribution (both are in fact conditional distributions given the history). Since cross-entropy and Kullback-Leibler divergence are equivalent when the target distribution is one-hot, we can rewrite the loss for the tth example as

J_t = D_KL(y*_t || y_t).    (2.5)

Therefore, we can think of the optimization of the conventional loss in an RNNLM as trying to minimize the distance¹ between the model prediction distribution (y) and the empirical target distribution (y*), which, with many training examples, will get close to minimizing distance to the actual target distribution. In the framework which we will introduce, we utilize Kullback-Leibler divergence as opposed to cross-entropy due to its intuitive interpretation as a distance between distributions, although the two are not equivalent in our framework.

¹We note, however, that Kullback-Leibler divergence is not a valid distance metric.
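For concreteness, a minimal PyTorch sketch of the conventional model in (2.1)-(2.3) and the loss in (2.4) follows; the vocabulary size, dimensions, and batch shapes are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNLM(nn.Module):
    """Minimal 2-layer LSTM language model: x_t = L y*_{t-1},
    h_t = f(x_t, h_{t-1}), logits = W h_t + b (softmax applied in the loss)."""
    def __init__(self, vocab_size, d_emb, d_hidden):
        super().__init__()
        self.L = nn.Embedding(vocab_size, d_emb)        # input embedding matrix L
        self.f = nn.LSTM(d_emb, d_hidden, num_layers=2, batch_first=True)
        self.W = nn.Linear(d_hidden, vocab_size)        # output projection W and bias b

    def forward(self, tokens):                          # tokens: (batch, time)
        x = self.L(tokens)
        h, _ = self.f(x)
        return self.W(h)                                # logits: (batch, time, |V|)

# Cross-entropy against one-hot targets, as in (2.4).
model = RNNLM(vocab_size=10000, d_emb=650, d_hidden=650)
tokens = torch.randint(0, 10000, (32, 35))
targets = torch.randint(0, 10000, (32, 35))
loss = F.cross_entropy(model(tokens).reshape(-1, 10000), targets.reshape(-1))
```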
We propose to augment the conventional cross-entropy loss with an additional loss term as follows:

ĵ_t = softmax(W h_t / τ),          (3.1)
J^aug_t = D_KL(ŷ_t || ĵ_t),        (3.2)
J^tot_t = J_t + α J^aug_t.         (3.3)

In the above, α is a hyperparameter to be adjusted, and ĵ_t is almost identical to the regular model prediction distribution y_t, with the exception that the logits are divided by a temperature parameter τ. We define ŷ_t as some probability distribution that estimates the true data distribution (conditioned on the word history) which satisfies E[ŷ_t] = E[y*_t]. The goal of this framework is to minimize the distance between the model prediction distribution and this more accurate estimate of the true data distribution.

To understand the effect of optimizing in this setting, let's focus on an ideal case in which we are given the true data distribution, so that ŷ_t = E[y*_t], and we only use the augmented loss, J^aug. We will carry out our investigation through stochastic gradient descent, which is the technique dominantly used for training neural networks. The gradient of J^aug with respect to the logits W h_t is

∇_{W h_t} J^aug_t = (1/τ)(ĵ_t - ŷ_t).    (3.4)

Let's denote by e_j ∈ R^{|V|} the vector whose jth entry is 1, and others are zero. We can then rewrite (3.4) as

∇_{W h_t} J^aug_t = (1/τ) Σ_{i∈V} ŷ_{t,i} (ĵ_t - e_i).    (3.5)

The implication of (3.5) is the following: Every time the optimizer sees one training example, it takes a step not only on account of the label seen, but it proceeds taking into account all the class labels for which the conditional probability is not zero, and the relative step size for each step is given by the conditional probability for that label, ŷ_{t,i}. Furthermore, this is a much less noisy update since the target distribution is exact and deterministic. Therefore, unless all the examples exclusively belong to a specific class with probability 1, the optimization will act much differently and train with greatly improved supervision.

The idea proposed in the recent work by Hinton et al. (2015) might be considered as an application of this framework, where they try to obtain a good set of ŷ's by training very large models and using the model prediction distributions of those.

Although finding a good ŷ in general is rather nontrivial, in the context of language modeling we can hope to achieve this by exploiting the inherent metric space of classes encoded into the model, namely the space of word embeddings. Specifically, we propose the following for ŷ:

ũ_t = L y*_t,
ŷ_t = softmax(L^T ũ_t / τ).    (3.6)

In words, we first find the target word vector which corresponds to the target word token (resulting in ũ_t), and then take the inner product of the target word vector with all the other word vectors to get an unnormalized probability distribution. We adjust this with the same temperature parameter τ used for obtaining ĵ_t and apply softmax. The target distribution estimate, ŷ, therefore measures the similarity between the word vectors and assigns similar probability masses to words that the language model deems close. Note that the estimation of ŷ with this procedure is iterative, and the estimates of ŷ in the initial phase of the training are not necessarily informative. However, as training proceeds, we expect ŷ to capture the word statistics better and yield a consistently more accurate estimate of the true data distribution."}, {"section_index": "4", "section_name": "THEORETICALLY DRIVEN REUSE OF WORD EMBEDDINGS", "section_text": "We now theoretically motivate and introduce a second modification to improve learning in the language model. We do this by analyzing the proposed augmented loss in a particular setting, and observe an implicit core mechanism of this loss. We then make our proposition by making this mechanism explicit.

We start by introducing our setting for the analysis. We restrict our attention to the case where the input embedding dimension is equal to the dimension of the RNN hidden state, i.e. d = d_x = d_h. We also set b = 0 in (2.3) so that the logits are W h_t. We only use the augmented loss, i.e. J^tot = J^aug, and we assume that we can achieve zero training loss. Finally, we set the temperature parameter τ to be large.

We first show that when the temperature parameter τ is high enough, J^aug acts to match the logits of the prediction distribution to the logits of the more informative labels ŷ. We proceed in the same way as was done in Hinton et al. (2015) to make an identical argument. Particularly, we consider the derivative of J^aug with respect to the entries of the logits produced by the neural network. Let's denote by l_i the ith column of L. Using the first order approximation of the exponential function around zero (exp(x) ≈ 1 + x), we can approximate ŷ_t at high temperatures as follows:

ŷ_{t,i} = exp(⟨ũ_t, l_i⟩/τ) / Σ_{j∈V} exp(⟨ũ_t, l_j⟩/τ) ≈ (1 + ⟨ũ_t, l_i⟩/τ) / (|V| + Σ_{j∈V} ⟨ũ_t, l_j⟩/τ),    (4.1)

and the same holds for ĵ_t with ũ_t replaced by h_t:

ĵ_{t,i} ≈ (1 + ⟨h_t, l_i⟩/τ) / (|V| + Σ_{j∈V} ⟨h_t, l_j⟩/τ).    (4.2)

Using these approximations in (3.4), and further assuming zero-mean logits for each example (as in Hinton et al. (2015)), the derivative with respect to each logit satisfies

∂J^aug_t / ∂(W h_t)_i → (1/(τ²|V|)) ((W h_t)_i - (L^T ũ_t)_i)   as τ → ∞,    (4.3)

which is the desired result that the augmented loss tries to match the logits of the model to the logits of the ŷ's. Since the training loss is zero by assumption, we necessarily have

W h_t = L^T ũ_t    (4.4)

for each training example, i.e., the gradient contributed by each example is zero. Provided that W and L are full rank matrices and there are more linearly independent examples of h_t's than the embedding dimension d, we get that the space spanned by the columns of L^T is equivalent to that spanned by the columns of W. Let's now introduce a square matrix A such that W = L^T A. (We know A exists since L^T and W span the same column space.) In this case, we can rewrite

W h_t = L^T A h_t = L^T h̃_t,   where h̃_t = A h_t.    (4.5)

In other words, by reusing the embedding matrix in the output projection layer (with a transpose) and letting the neural network do the necessary linear mapping h → A h, we get the same result as we would have in the first place.

Although the above scenario could be difficult to exactly replicate in practice, it uncovers a mechanism through which our proposed loss augmentation acts, which is trying to constrain the output (unnormalized) probability space to a small subspace governed by the embedding matrix. This suggests that we can make this mechanism explicit and constrain W = L^T during training while setting the output bias, b, to zero. Doing so would not only eliminate a big matrix which dominates the network size for models with even moderately sized vocabularies, but it would also be optimal in our setting of loss augmentation as it would eliminate much work to be done by the augmented loss."}, {"section_index": "5", "section_name": "5 RELATED WORK", "section_text": "Since their introduction in Mikolov et al. (2010), many improvements have been proposed for RNNLMs, including different dropout methods (Zaremba et al., 2014; Gal, 2015), novel recurrent units (Zilly et al., 2016), and use of pointer networks to complement the recurrent neural network (Merity et al., 2016). However, none of the improvements dealt with the loss structure, and to the best of our knowledge, our work is the first to offer a new loss framework.

Our technique is closely related to the one in Hinton et al. (2015), where they also try to estimate a more informed data distribution and augment the conventional loss with the KL divergence between the model prediction distribution and the estimated data distribution. However, they estimate their data distribution by training large networks on the data and then use it to improve learning in smaller networks. This is fundamentally different from our approach, where we improve learning by transferring knowledge between different parts of the same network, in a self contained manner.

The work we present in this paper is based on a report which was made public in Inan & Khosravi (2016). We have recently come across a concurrent preprint (Press & Wolf, 2016) where the authors reuse the word embedding matrix in the output projection to improve language modeling. However, their work is purely empirical, and they do not provide any theoretical justification for their approach. Finally, we would like to note that the idea of using the same representation for input and output words has been explored in the past, and there exist language models which could be interpreted as simple neural networks with shared input and output embeddings (Bengio et al., 2001; Mnih & Hinton, 2007). However, shared input and output representations were implicitly built into these models, rather than proposed as a supplement to a baseline. Consequently, the possibility of improvement was not particularly pursued by sharing input and output representations.

In our experiments, we use the Penn Treebank corpus (PTB) (Marcus et al., 1993) and the Wikitext-2 dataset (Merity et al., 2016). PTB has been a standard dataset used for benchmarking language models. It consists of 923k training, 73k validation, and 82k test words. The version of this dataset which we use is the one processed in Mikolov et al. (2010), with the most frequent 10k words selected to be in the vocabulary and the rest replaced with an <unk> token². Wikitext-2 is a dataset released recently as an alternative to PTB³. It contains 2,088k training, 217k validation, and 245k test tokens, and has a vocabulary of 33,278 words; therefore, in comparison to PTB, it is roughly 2 times larger in dataset size, and 3 times larger in vocabulary.

²PTB can be downloaded at http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz
³Wikitext-2 can be downloaded at https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-"}, {"section_index": "6", "section_name": "6.1 MODEL AND TRAINING HIGHLIGHTS", "section_text": "We closely follow the LSTM based language model proposed in Zaremba et al. (2014) for constructing our baseline model. Specifically, we use a 2-layer LSTM with the same number of hidden units in each layer, and we use 3 different network sizes: small (200 units), medium (650 units), and large (1500 units). We train our models using stochastic gradient descent, and we use a variant of the dropout method proposed in Gal (2015). We defer further details regarding training the models to section A of the appendix. We refer to our baseline network as variational dropout LSTM, or VD-LSTM in short."}, {"section_index": "7", "section_name": "6.2 EMPIRICAL VALIDATION FOR THE THEORY OF REUSING WORD EMBEDDINGS", "section_text": "In Section 4, we showed that the particular loss augmentation scheme we chose constrains the output projection matrix to be close to the input embedding matrix, without explicitly doing so by reusing the input embedding matrix. As a first experiment, we set out to validate this theoretical result. To do this, we try to simulate the setting in Section 4 by doing the following: We select a randomly chosen 20,000 contiguous word sequence in the PTB training set, and train a 2-layer LSTM language model with 300 units in each layer with loss augmentation by minimizing the following loss:

J^tot = β J^aug τ²|V| + (1 - β) J.    (6.1)

Here, β is the proportion of the augmented loss used in the total loss, and J^aug is scaled by τ²|V| to approximately match the magnitudes of the derivatives of J and J^aug (see (4.3)). Since we aim to achieve the minimum training loss possible, and the goal is to show a particular result rather than to achieve good generalization, we do not use any kind of regularization in the neural network (e.g. weight decay, dropout). For this set of experiments, we also constrain each row of the input embedding matrix to have a norm of 1, because training becomes difficult without this constraint when only the augmented loss is used. After training, we compute a metric that measures the distance between the subspace spanned by the rows of the input embedding matrix, L, and that spanned by the columns of the output projection matrix, W. For this, we use a common metric based on the relative residual norm from the projection of one matrix onto another (Bjorck & Golub, 1973). The computed distance between the subspaces is 1 when they are orthogonal, and 0 when they are the same. The interested reader may refer to section B in the appendix for the details of this metric.

Figure 1 shows the results from two tests. In one (panel a), we test the effect of using the augmented loss by sweeping β in (6.1) from 0 to 1 at a reasonably high temperature (τ = 10). With no loss augmentation (β = 0), the distance is almost 1, and as more and more augmented loss is used the distance decreases rapidly, and eventually reaches around 0.06 when only the augmented loss is used. In the second test (panel b), we set β = 1, and try to see the effect of the temperature on the subspace distance (remember the theory predicts low distance when τ → ∞). Notably, the augmented loss causes W to approach L^T sufficiently even at temperatures as low as 2, although higher temperatures still lead to smaller subspace distances.

[Figure 1: two panels; (a) subspace distance at τ = 10 for different proportions of J^aug (falling from about 1.0 at β = 0 to about 0.06 at β = 1); (b) subspace distance at different temperatures when only J^aug is used.]

Figure 1: Subspace distance between L^T and W for different experiment conditions for the validation experiments. Results are averaged over 10 independent runs. These results validate our theory under practical conditions.

These results confirm the mechanism through which our proposed loss pushes W to learn the same column space as L^T, and it suggests that reusing the input embedding matrix by explicitly constraining W = L^T is not simply a kind of regularization, but is in fact an optimal choice in our framework. What can be achieved separately with each of the two proposed improvements, as well as with the two of them combined, is a question of empirical nature, which we investigate in the next section.
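A compact sketch of the augmented objective is given below, combining the estimate in (3.6) with the mixture in (6.1). Here `L_weight` is the |V| x d embedding weight (i.e., L^T in the paper's notation), and the values of `tau` and `beta` are placeholders rather than tuned settings.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, targets, L_weight, tau=10.0, beta=0.5):
    """Sketch of J^tot = beta * tau^2 |V| * J^aug + (1 - beta) * J, as in (6.1)."""
    J = F.cross_entropy(logits, targets)                 # conventional loss J
    u = L_weight[targets]                                # target word vectors u_t = L y*_t
    y_hat = F.softmax(u @ L_weight.t() / tau, dim=1)     # estimated target distribution (3.6)
    log_j_hat = F.log_softmax(logits / tau, dim=1)       # tempered model prediction
    J_aug = F.kl_div(log_j_hat, y_hat, reduction='batchmean')  # KL(y_hat || j_hat)
    V = logits.size(1)
    return beta * (tau ** 2) * V * J_aug + (1 - beta) * J
```

For the RE variant, the output projection would additionally share its parameter tensor with `L_weight` (with the output bias removed), which is a one-line change in most frameworks.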
In order to investigate the extent to which each of our proposed improvements helps with learning, we train 4 different models for each network size: (1) 2-Layer LSTM with variational dropout (VD-LSTM), (2) 2-Layer LSTM with variational dropout and augmented loss (VD-LSTM +AL), (3) 2-Layer LSTM with variational dropout and reused embeddings (VD-LSTM +RE), (4) 2-Layer LSTM with variational dropout and both RE and AL (VD-LSTM +REAL).

Figure 2 shows the validation perplexities of the four models during training on the PTB corpus for small (panel a) and large (panel b) networks. All of the AL, RE, and REAL networks significantly outperform the baseline in both cases. Table 1 compares the final validation and test perplexities of the four models on both PTB and Wikitext-2 for each network size. In both datasets, both AL and RE improve upon the baseline individually, and using RE and AL together leads to the best performance. Based on performance comparisons, we make the following notes on the two proposed improvements:

- AL provides better performance gains for smaller networks. This is not surprising given the fact that small models are rather inflexible, and one would expect to see improved learning by training against a more informative data distribution (contributed by the augmented loss) (see Hinton et al. (2015)). For the smaller PTB dataset, performance with AL surpasses that with RE. In comparison, for the larger Wikitext-2 dataset, the improvement by AL is more limited. This is expected given that larger training sets better represent the true data distribution, mitigating the supervision problem. In fact, we set out to validate this reasoning in a direct manner, and additionally train the small networks separately on the first and second halves of the Wikitext-2 training set. This results in two distinct datasets which are each about the same size as PTB (1044K vs 929K). As can be seen in Table 2, AL has significantly improved competitive performance against RE and REAL despite the fact that the embedding size is 3 times larger compared to PTB. These results support our argument that the proposed augmented loss term acts to improve the amount of information gathered from the dataset.

- RE significantly outperforms AL for larger networks. This indicates that, for large models, the more effective mechanism of our proposed framework is the one which enforces proximity between the output projection space and the input embedding space. From a model complexity perspective, the nontrivial gains offered by RE for all network sizes and for both datasets could be largely attributed to its explicit function to reduce the model size while preserving the representational power according to our framework.

[Figure 2: validation perplexity curves over training epochs for the baseline, +AL, +RE, and +REAL models; (a) small network, roughly 85-110 perplexity over 60 epochs; (b) large network, roughly 70-100 perplexity over 75 epochs.]

Figure 2: Progress of validation perplexities during training for the 4 different models for two (small (200) and large (1500)) network sizes.

Table 1: Comparison of the final word level perplexities on the validation and test set for the 4 different models.

                                        PTB                Wikitext-2
Network           Model            Valid    Test        Valid    Test
Small⁴            VD-LSTM           92.6    87.3        112.2   105.9
(200 units)       VD-LSTM+AL        86.3    82.9        110.3   103.8
                  VD-LSTM+RE        89.9    85.1        106.1   100.5
                  VD-LSTM+REAL      86.3    82.7        105.6    98.9
Medium            VD-LSTM           82.0    77.7        100.2    95.3
(650 units)       VD-LSTM+AL        77.4    74.7         98.8    93.1
                  VD-LSTM+RE        77.1    73.9         92.3    87.7
                  VD-LSTM+REAL      75.7    73.2         91.5    87.0
Large⁵            VD-LSTM           76.8    72.6          -        -
(1500 units)      VD-LSTM+AL        74.5    71.2          -        -
                  VD-LSTM+RE        72.5    69.0          -        -
                  VD-LSTM+REAL      71.1    68.5          -        -

Table 2: Performance of the four different small models trained on the equally sized two partitions of the Wikitext-2 training set. These results are consistent with those on PTB (see Table 1), which has a training set of similar size to each of these partitions, although its word embedding dimension is three times smaller.

                                   Wikitext-2, Partition 1    Wikitext-2, Partition 2
Network           Model            Valid      Test            Valid      Test
Small             VD-LSTM          159.1     148.0            163.19    148.6
(200 units)       VD-LSTM+AL       153.0     142.5            156.4     143.7
                  VD-LSTM+RE       152.4     141.9            152.5     140.9
                  VD-LSTM+REAL     149.3     140.6            150.5     138.4

We list in Table 3 the comparison of models with and without our proposed modifications on the Penn Treebank corpus. The best LSTM model (VD-LSTM+REAL) outperforms all previous work which uses the conventional framework, including large ensembles. The recently proposed recurrent highway networks (Zilly et al., 2016), when trained with reused embeddings (VD-RHN +RE), achieve the best overall performance, improving on VD-RHN by a perplexity of 2.5.

Table 3: Comparison of our work to previous state of the art on word-level validation and test perplexities on the Penn Treebank corpus. Models using our framework significantly outperform other models.

Model                                                   Parameters   Validation   Test
RNN (Mikolov & Zweig)                                   6M           -            124.7
RNN+LDA (Mikolov & Zweig)                               7M           -            113.7
RNN+LDA+KN-5+Cache (Mikolov & Zweig)                    9M           -            92.0
Deep RNN (Pascanu et al., 2013a)                        6M           -            107.5
Sum-Prod Net (Cheng et al., 2014)                       5M           -            100.0
LSTM (medium) (Zaremba et al., 2014)                    20M          86.2         82.7
CharCNN (Kim et al., 2015)                              19M          -            78.9
LSTM (large) (Zaremba et al., 2014)                     66M          82.2         78.4
VD-LSTM (large, untied, MC) (Gal, 2015)                 66M          -            73.4 ± 0.0
Pointer Sentinel-LSTM (medium) (Merity et al., 2016)    21M          72.4         70.9
38 Large LSTMs (Zaremba et al., 2014)                   2.51B        71.9         68.7
10 Large VD-LSTMs (Gal, 2015)                           660M         -            68.7
VD-RHN (Zilly et al., 2016)                             32M          71.2         68.5
VD-LSTM +REAL (large)                                   51M          71.1         68.5
VD-RHN +RE (Zilly et al., 2016)⁶                        24M          68.1         66.0

⁴For PTB, small models were re-trained by initializing to their final configuration from the first training session. This did not change the final perplexity for the baseline, but led to improvements for the other models.
⁵Large network results on Wikitext-2 are not reported since computational resources were insufficient to run some of the configurations.
⁶This model was developed following Inan & Khosravi (2016)."}, {"section_index": "8", "section_name": "6.4 QUALITATIVE RESULTS", "section_text": "One important feature of our framework that leads to better word predictions is the explicit mechanism to assign probabilities to words not merely according to the observed output statistics, but also considering the metric similarity between words. We observe direct consequences of this mechanism qualitatively in the Penn Treebank in different ways: First, we notice that the probability of generating the <unk> token with our proposed network (VD-LSTM +REAL) is significantly lower compared to the baseline network (VD-LSTM) across many words. This could be explained by noting the fact that the <unk> token is an aggregated token rather than a specific word, and it is often not expected to be close to specific words in the word embedding space. We observe the same behavior with very frequent words such as \"a\", \"an\", and \"the\", owing to the same fact that they are not correlated with particular words. Second, we not only observe better probability assignments for the target words, but we also observe relatively higher probability weights associated with the words close to the targets. Sometimes this happens in the form of predicting words semantically close together, which are plausible even when the target word is not successfully captured by the model. We provide a few examples from the PTB test set which compare the prediction performance of the 1500 unit VD-LSTM and the 1500 unit VD-LSTM +REAL in Table 4. We would like to note that the prediction performance of VD-LSTM +RE is similar to that of VD-LSTM +REAL for the large network.

Table 4: Prediction for the next word by the baseline (VD-LSTM) and proposed (VD-LSTM +REAL) networks for a few example phrases in the PTB test set. Top 10 word predictions are sorted in descending probability.

Phrase + next word(s): "information international said it believes that the complaints filed in + federal court"
  VD-LSTM:       the 0.27, a 0.13, federal 0.13, N 0.09, (unk) 0.05, an 0.03, august 0.01, new 0.01, response 0.01, connection 0.01
  VD-LSTM +REAL: federal 0.22, the 0.10, a 0.08, N 0.06, state 0.04, connection 0.03, august 0.03, july 0.03, an 0.03, september 0.03

Phrase + next word(s): "oil company refineries ran flat out to prepare for a robust holiday driving season in july and + august"
  VD-LSTM:       the 0.09, N 0.08, a 0.07, (unk) 0.07, was 0.04, in 0.03, has 0.03, is 0.02, will 0.02, its 0.02
  VD-LSTM +REAL: august 0.08, N 0.05, early 0.05, september 0.05, the 0.03, a 0.03, in 0.03, that 0.02, ended 0.02, its 0.02

Phrase + next word(s): "southmark said it plans to (unk) its (unk) to provide financial results as soon as its audit is + completed"
  VD-LSTM:       the 0.06, (unk) 0.05, a 0.05, in 0.04, n't 0.04, to 0.03, likely 0.03, expected 0.03, scheduled 0.01, completed 0.01
  VD-LSTM +REAL: expected 0.10, completed 0.04, (unk) 0.03, the 0.03, in 0.03, a 0.03, scheduled 0.03, n't 0.03, due 0.02, to 0.01

Phrase + next word(s): "merieux said the government's minister of industry science and + technology"
  VD-LSTM:       (unk) 0.33, the 0.06, a 0.01, other 0.01, others 0.01, industry 0.01, commerce 0.01, planning 0.01, management 0.01, mail 0.01
  VD-LSTM +REAL: (unk) 0.09, health 0.08, development 0.04, the 0.04, a 0.03, industry 0.03, business 0.02, telecomm. 0.02, human 0.02, other 0.01"}, {"section_index": "9", "section_name": "7 CONCLUSION", "section_text": "In this work, we introduced a novel loss framework for language modeling. Particularly, we showed that the metric encoded into the space of word embeddings could be used to generate a more informed data distribution than the one-hot targets, and that additionally training against this distribution improves learning. We also showed theoretically that this approach lends itself to a second improvement, which is simply reusing the input embedding matrix in the output projection layer. This has an additional benefit of reducing the number of trainable variables in the model. We empirically validated the theoretical link, and verified that both proposed changes do in fact belong to the same framework. In our experiments on the Penn Treebank corpus and Wikitext-2, we showed that our framework outperforms the conventional one, and that even the simple modification of reusing the word embedding in the output projection layer is sufficient for large networks.

The improvements achieved by our framework are not unique to vanilla language modeling, and are readily applicable to other tasks which utilize language models, such as neural machine translation, speech recognition, and text summarization. This could lead to significant improvements in such models, especially with large vocabularies, with the additional benefit of greatly reducing the number of parameters to be trained."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Rejean Ducharme, and Pascal Vincent. A neural probabilistic language model. 2001. URL http://www.iro.umontreal.ca/~lisa/pointeurs/nips00_lm.ps.

Wei-Chen Cheng, Stanley Kok, Hoai Vu Pham, Hai Leong Chieu, and Kian Ming Adam Chai. Language modeling with sum-product networks. 2014.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Kazuki Irie, Zoltan Tuske, Tamer Alkhouli, Ralf Schluter, and Hermann Ney. LSTM, GRU, highway and a bit of attention: an empirical overview for language modeling in speech recognition. In Interspeech, San Francisco, CA, USA, 2016.

Camille Jordan. Essai sur la geometrie a n dimensions. Bulletin de la Societe mathematique de France, 3:103-174, 1875.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, 2012.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013.

Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. CoRR, abs/1312.6026, 2013a.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318, 2013b.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-43, 2014.

Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016."}, {"section_index": "11", "section_name": "APPENDIX", "section_text": "We begin training with a learning rate of 1 and start decaying it at a constant rate after a certain epoch.
This is epoch 5, 10, and 1 for the small, medium, and large networks, respectively. The decay rate is 0.9 for the small and medium networks, and 0.97 for the large network.

For both the PTB and Wikitext-2 datasets, we unroll the network for 35 steps for backpropagation.

We use gradient clipping (Pascanu et al., 2013b); i.e. we rescale the gradients using the global norm if it exceeds a certain value. For both datasets, this is 5 for the small and the medium network, and 6 for the large network.

We use the dropout method introduced in Gal (2015); particularly, we use the same dropout mask for each example through the unrolled network. Differently from what was proposed in Gal (2015), we tie the dropout weights for hidden states further, and we use the same mask when they are propagated as states in the current layer and when they are used as inputs for the next layer. We don't use dropout in the input embedding layer, and we use the same dropout probability for inputs and hidden states. For PTB, dropout probabilities are 0.7, 0.5 and 0.35 for the small, medium and large networks, respectively. For Wikitext-2, probabilities are 0.8 for the small and 0.6 for the medium networks.

When training the networks with the augmented loss (AL), we use a temperature τ = 20. We have empirically observed that setting α, the weight of the augmented loss, according to α = γτ for all the networks works satisfactorily. We set γ to values between 0.5 and 0.8 for the PTB dataset, and between 1.0 and 1.5 for the Wikitext-2 dataset. We would like to note that we have not observed sudden deteriorations in the performance with respect to moderate variations in either τ or α."}, {"section_index": "12", "section_name": "METRIC FOR CALCULATING SUBSPACE DISTANCES", "section_text": "In this section, we detail the metric used for computing the subspace distance between two matrices. The computed metric is closely related to the principal angles between subspaces, first defined in Jordan (1875).

Our aim is to compute a metric distance between two given matrices, X and Y. We do this in three steps:

(1) Obtain two matrices with orthonormal columns, U and V, such that span(U) = span(X) and span(V) = span(Y). U and V can be obtained with a QR decomposition.
(2) Calculate the projection of either one of U and V onto the other; e.g. S = U U^T V, where S is the projection of V onto U. Then calculate the residual matrix as R = V - S.
(3) Let ||·||_Fr denote the Frobenius norm, and let C be the number of columns of R. Then the distance metric is found as d, where d² = (1/C) ||R||²_Fr = (1/C) Trace(R^T R).

We note that d as calculated above is a valid metric up to the equivalence set of matrices which span the same column space, although we are not going to show it. Instead, we will mention some metric properties of d, and relate it to the principal angles between the subspaces. We first work out an expression for d:

C d² = Trace(R^T R)
     = Trace((V - U U^T V)^T (V - U U^T V))
     = Trace(V^T (I - U U^T)(I - U U^T) V)
     = Trace(V^T (I - U U^T) V)
     = Trace((I - U U^T) V V^T)
     = Trace(V^T V) - Trace(U U^T V V^T)
     = C - Trace((U^T V)^T (U^T V))
     = C - ||U^T V||²_Fr
     = C - Σ_{i=1}^{C} ρ_i²,

where ρ_i is the ith singular value of U^T V, commonly referred to as the cosine of the ith principal angle θ_i between the subspaces of X and Y. In the above, we used the cyclic permutation property of the trace in the third and the fourth lines.

Since C d² is Trace(R^T R), it is always nonnegative, and it is only zero when the residual is zero, which is the case when span(X) = span(Y). Further, it is symmetric between U and V, since ||U^T V||_Fr = ||V^T U||_Fr. Finally, dividing by C gives d² = 1 - (1/C) Σ_{i=1}^{C} ρ_i² = (1/C) Σ_{i=1}^{C} sin²(θ_i), namely the average of the squared sines of the principal angles, which is a quantity between 0 and 1."}]
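The three steps above translate directly into a few lines of NumPy; the following is a minimal sketch under the assumption that X and Y have full column rank.

```python
import numpy as np

def subspace_distance(X, Y):
    """Distance d between span(X) and span(Y): 0 when the spans coincide,
    1 when they are orthogonal. Follows steps (1)-(3) above."""
    U, _ = np.linalg.qr(X)                    # (1) orthonormal bases for the spans
    V, _ = np.linalg.qr(Y)
    R = V - U @ (U.T @ V)                     # (2) residual of projecting V onto span(U)
    C = V.shape[1]
    d2 = np.linalg.norm(R, 'fro') ** 2 / C    # (3) average squared sine of principal angles
    return np.sqrt(d2)
```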
BJwFrvOeg | [{"section_index": "0", "section_name": "A NEURAL KNOWLEDGE LANGUAGE MODEL", "section_text": "1,3,4Universite de Montreal, 2Handong Global University, 4CIFAR Senior Fellow
{1sjn.ahn, 2heeyoul, 3tanel.parnamaa}@gmail.com, {4yoshua.bengio}@umontreal.ca"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Current language models have significant limitations in their ability to encode and decode factual knowledge. This is mainly because they acquire such knowledge based on statistical co-occurrences, even if most of the knowledge words are rarely observed named entities. In this paper, we propose a Neural Knowledge Language Model (NKLM) which combines symbolic knowledge provided by a knowledge graph with the RNN language model. The model predicts whether the word to generate has an underlying fact or not. Then, a word is either generated from the vocabulary or copied from the description of the predicted fact. We train and test the model on a new dataset, WikiFacts. In experiments, we show that the NKLM significantly improves the perplexity while generating a much smaller number of unknown words. In addition, we demonstrate that the sampled descriptions include named entities which used to be the unknown words in RNN language models."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Kanye West, a famous <unknown> and the husband of <unknown>, released his latest album <unknown> in <unknown>.

A core purpose of language is to communicate knowledge. Thus, for human-level language understanding, it is important for a language model to take advantage of knowledge. Although traditional language models are good at capturing statistical co-occurrences of entities as long as they are observed frequently in a corpus (e.g., words like verbs, pronouns, and prepositions), they are in general limited in their ability to encode or decode knowledge, which is often represented by named entities such as person names, place names, years, etc. (as shown in the above example sentence about Kanye West). When trained with a very large corpus, traditional language models have demonstrated to some extent the ability to encode/decode knowledge (Vinyals & Le, 2015; Serban et al., 2015). However, we claim that simply feeding a larger corpus into a bigger model hardly results in a good knowledge language model.

The primary reason for this is the difficulty in learning good representations for rare or unknown words, because these are a majority of the knowledge-related words. In particular, for applications such as question answering (Iyyer et al., 2014; Weston et al., 2016; Bordes et al., 2015) and dialogue modeling (Vinyals & Le, 2015; Serban et al., 2015), these words are of our main interest. Specifically, in the recurrent neural network language model (RNNLM) (Mikolov et al., 2010), the computational complexity is linearly dependent on the number of vocabulary words. Thus, including all words of a language is computationally prohibitive. Instead, we typically fill our vocabulary with a limited number of frequent words and regard all the other words as the unknown (UNK) word. Even if we can include a large number of words in the vocabulary, according to Zipf's law, a large portion of the words will be rarely observed in the corpus, and thus learning good representations for these words remains a problem.

*This work was done while HC was in Samsung Advanced Institute of Technology.
remains a problem.\n*This work was done while HC was in Samsung Advanced Institute of Technology"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The fact that languages and knowledge can change over time also makes it difficult to simply rely on a large corpus. Media produce an endless stream of new knowledge every day (e.g., the results of baseball games played yesterday) that is even changing over time (e.g., \"the current president of the\nUnited States is ---\"). Furthermore, a good language model should exercise some level of reasoning. For example, it may be possible to observe several occurrences of Barack Obama's year of birth in a large corpus and thus the model may be able to predict it. However, after seeing mentions of his year of birth, presented with a simple reformulation of that piece of knowledge into a sentence such as \"Barack Obama's age is ---\", one would not expect current language models to handle the required. amount of reasoning in order to predict the next word (i.e. the age) easily. However, a good model. should be able to reason the answer from this context'..\nIn this paper, we propose a Neural Knowledge Language Model (NKLM) as a step towards addressing the limitations of traditional language modeling when it comes to exploiting factual knowledge In particular, we incorporate symbolic knowledge provided by a knowledge graph (Nickel et al. 2015) into the RNNLM. A knowledge graph (KG) is a collection of facts which have a form of (subject, relationship, object). We observe particularly the following properties of KGs that make the connection to the language model sensible. First, facts in KGs are mostly about rare words in text corpora. KGs are managed and updated in a similar way that Wikipedia pages are managed to date The KG embedding methods (Bordes et al., 2011; 2013) provide distributed representations for the entities in the KG. The graph can be traversed for reasoning (Gu et al., 2015). Finally, facts come along with textual representations which we call the fact description and take advantage of here\nTraining the above model in a supervised way requires to align words with facts. To this end, we. introduce a new dataset. called WikiFacts. For each topic in the dataset. a set of facts from the Freebas. KG (Bollacker et al., 2008) and a Wikipedia description of the same topic is provided along with the. alignment information. This alignment is done automatically by performing string matching between. the fact description and the Wikipedia description.\nThere have been remarkable advances in language modeling research based on neural networks (Ben gio et al., 2003; Mikolov et al., 2010). In particular, the RNNLMs are interesting for their ability to. take advantage of longer-term temporal dependencies without a strong conditional independence assumption. It is especially noteworthy that the RNNLM using the Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) has recently advanced to the level of outperforming. carefully-tuned traditional n-gram based language models (Jozefowicz et al., 2016)..\n1We do not investigate the reasoning ability in this paper but highlight this example because the explic representation of facts would help to handle such examples.\nThere are a few differences between the NKLM and the traditional RNNLM. First, we assume that a word generation is either based on a fact or not. Thus, at each time step, before predicting a word, we. oredict whether the word to generate has an underlying fact or not. 
As a result, our model provides. he predictions over facts in a topic in addition to the word predictions. Similarly to how contex1. nformation of previous words flows through the hidden states in the RNNLM, in the NKLM the. orevious information on both facts and words flow through an RNN and provide richer context Second, the model has two ways to generate the next word. One option is to generate a \"vocabulary. word\"' from the vocabulary softmax as is in the RNNLM. The other option is to generate a \"knowledge. word\"' by copying a word contained in the description of the predicted fact. Considering that the. fact description is often short and consists of out-of-vocabulary words, we predict the position of the word to copy within the fact description. This knowledge-copy mechanism makes it possible to. generate words which are not in the predefined vocabulary. Thus, it does not require to learn explicil. embeddings of the words to generate, and consequently resolves the rare/unknown word problem Lastly, the NKLM can immediately adapt to adding or modifying knowledge because the model. earns to predict facts, which can easily be modified without having to retrain the model..\nThere have been many efforts to speed up the language models so that they can cover a larger. vocabulary. These methods approximate the softmax output using hierarchical softmax (Morin & Bengio, 2005; Mnih & Hinton, 2009), importance sampling (Jean et al., 2015), noise contrastive. estimation (Mnih & Teh, 2012), etc. Although helpful to mitigate the computational problem, these. approaches still suffer from the statistical problem due to rare or unknown words. Having the UNK. word as the output of a generative language model is also inconvenient (e.g, dialogue system)..\nTo help deal with the rare/unknown word problem, the pointer networks (Vinyals et al., 2015) have been adopted to implement the copy mechanism (Gulcehre et al., 2016; Gu et al., 2016) and applied to machine translation and text summarization. With this approach, the (unknown) word to copy from the context sentence is inferred from neighboring words. However, because in our case the context. can be very short and often contains no known relevant words (e.g., person names), we cannot use. the existing approach directly.\nOur knowledge memory is also related to the recent literature on neural networks with external memory (Bahdanau et al., 2014; Weston et al., 2015; Graves et al., 2014). In Weston et al. (2015) given simple sentences as facts which are stored in the external memory, the question answering task. is studied. In fact, the tasks that the knowledge-based language model aims to solve (i.e. predict. the next word) can be considered as a fill-in-the-blank type of question answering. The idea of. jointly using Wikipedia and knowledge graphs has also been used in the context of enriching word. embedding (Celikyilmaz et al., 2015; Long et al., 2016)."}, {"section_index": "3", "section_name": "3.1 PRELIMINARY", "section_text": "A topic2 k in a set of entities E is associated with topic knowledge Fe (e.g., from Freebase) and topic description Wk (e.g., from Wikipedia). Topic knowledge Fk is a set of facts {ak,1, ak,2,. , ak,|Fk|) where each fact a is a triple of subject E , relationship, and object E , e.g., (Barack Obama describing the topic (e.g., a description of a topic in Wikipedia). Because the subject entities in Fk. are all equal to the topic entity k' and the words describing relationships can easily be found in the. 
"}, {"section_index": "4", "section_name": "3.1 PRELIMINARY", "section_text": "A topic k in a set of entities E is associated with topic knowledge F_k (e.g., from Freebase) and a topic description W_k (e.g., from Wikipedia). Topic knowledge F_k is a set of facts {a_{k,1}, a_{k,2}, ..., a_{k,|F_k|}}, where each fact a is a triple of (subject ∈ E, relationship, object ∈ E), e.g., (Barack_Obama, Spouse, Michelle_Obama). Topic description W_k is a sequence of words describing the topic (e.g., a description of a topic in Wikipedia). Because the subject entities in F_k are all equal to the topic entity k, and the words describing relationships can easily be found in the vocabulary, we use the description of the object entity (e.g., Michelle Obama) as our fact description.

Given F_k and W_k, we perform simple string matching between words in W_k and words in the fact descriptions in F_k, and thereby build a sequence of augmented observations Y_k = {y_t = (w_t, a_t, z_t)}_{t=1:|W_k|}. Here, w_t ∈ W_k is an observed word, a_t ∈ F_k a fact on which the generated word w_t is based, and z_t a binary variable indicating whether w_t is in the vocabulary V (including UNK) or not. Because not all words are based on a fact (e.g., words like is, a, the, have), we introduce a special type of fact, called Not-a-Fact (NaF), and assign NaF to such words.

For example, a description "Rogers was born in Latrobe, Pennsylvania in 1928" from the topic Fred Rogers in Wikipedia is augmented to Y = {(w="Rogers", a=0, z=0), ("was", NaF, 1), ("born", NaF, 1), ("in", NaF, 1), ("Latrobe", 42, 0), ("Pennsylvania", 42, 1), ("in", NaF, 1), ("1928", 83, 0)}. Here, we use facts on Fred Rogers, a_42 = (Fred_Rogers, Place_of_Birth, Latrobe_Pennsylvania), a_83 = (Fred_Rogers, Year_of_Birth, 1928), and a special fact a_0 = (Fred_Rogers, Topic_Itself, Fred_Rogers), which we define in order to refer to the topic string itself. We also assume here that the words Rogers, Latrobe, and 1928 are not in the vocabulary.

During the inference and training of topic k, we assume that the topic knowledge F_k is loaded into the knowledge memory in the form of a matrix F_k ∈ R^{D_a×|F_k|}, where the i-th column is a fact embedding a_{k,i} ∈ R^{D_a}. The fact embedding is the concatenation of the subject, relationship, and object embeddings. We obtain these entity embeddings from a preliminary run of a knowledge graph embedding method such as TransE (Bordes et al., 2013). Note that we fix the fact embeddings during the training of our model to help the model predict new facts at test time, but we learn the embedding of Topic_Itself. For notation, to denote the vector representation of any object of our interest, we use bold lowercase characters. For example, the embedding of a word w_t is represented by w_t = W[w_t], where W ∈ R^{D_w×|V|} is the word embedding matrix, and W[w_t] denotes the w_t-th column of W.
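The augmentation above reduces to a simple labeling loop; the following toy sketch illustrates it, with the caveat that the matching heuristics used to build WikiFacts are more involved than this word-level check, and all names here are illustrative.

```python
def build_augmented_observations(words, fact_descriptions, vocab, naf="NaF"):
    """Produce y_t = (w_t, a_t, z_t) for each word of a topic description.
    `fact_descriptions` maps a fact id to the words of its object description."""
    observations = []
    for w in words:
        a = naf                                      # default: Not-a-Fact
        for fact_id, desc in fact_descriptions.items():
            if w in desc:                            # naive string match
                a = fact_id
                break
        z = 1 if w in vocab else 0                   # z = 1 iff w is in the vocabulary
        observations.append((w, a, z))
    return observations

# The Fred Rogers example from the text ("Rogers", "Latrobe", "1928" out of vocabulary):
facts = {0: ["Fred", "Rogers"], 42: ["Latrobe", "Pennsylvania"], 83: ["1928"]}
vocab = {"was", "born", "in", "Pennsylvania"}
Y = build_augmented_observations(
    ["Rogers", "was", "born", "in", "Latrobe", "Pennsylvania", "in", "1928"],
    facts, vocab)
```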
At each time step, the NKLM follows four sub-steps. First, using both the word and fact outputs from the previous time step as the input of the current time step, we update the LSTM controller. Second, given the output of the LSTM, the NKLM predicts a fact (including NaF) and extracts the corresponding fact embedding from the knowledge memory. Thirdly, with the extracted fact and the state of the LSTM controller, the NKLM makes a binary decision to choose the source of word generation. Finally, a word is generated according to the chosen source. A model diagram is depicted in Fig. 1. In the following, we describe these four steps in more detail.

Figure 1: The NKLM model. The input, consisting of a word (either $w^v_{t-1}$ or $w^o_{t-1}$) and a fact ($a_{t-1}$), goes into the LSTM. The LSTM's output $h_t$ together with the knowledge context $e_k$ generates the fact key $k_t$. Using the fact key, the fact embedding $a_t$ is retrieved from the topic knowledge memory. Using $a_t$ and $h_t$, the knowledge-copy switch $z_t$ is determined, which in turn determines the next word generation source, $w^v_t$ or $w^o_t$. The copied word $w^o_t$ is a symbol taken from the fact description $O_{a_t}$.

1) Input Representation and LSTM Controller. As shown in Fig. 1, the input at time step $t$ is the concatenation of three embedding vectors corresponding to a fact $a_{t-1}$, a vocabulary word $w^v_{t-1}$, and a copied word $w^o_{t-1}$, all predicted in the previous time step. However, because at a given time step the predicted word comes only either from the vocabulary or by copying from the fact description, we set either $w^v_{t-1}$ or $w^o_{t-1}$ to a zero vector when it is not selected in the previous step. As we shall see, we use position embeddings to represent the copied words by their position within the fact description. And, because the dimensions of the vocabulary word embedding and the position embedding for copied words are different, we use such a concatenation of $w^v_{t-1}$ and $w^o_{t-1}$ to represent the word input. The resulting input representation $x_t = f_{concat}(a_{t-1}, w^v_{t-1}, w^o_{t-1})$ is then fed into the LSTM controller to obtain the output states $(h_t, c_t) = f_{LSTM}(x_t, h_{t-1})$. Note that $a_{t-1}$ and $w^o_{t-1}$ (e.g., corresponding to the $n$-th position) together can deliver the information that the symbol in the $n$-th position of the description of fact $a_{t-1}$ was used in the previous time step.

2) Fact Extraction. Then, we predict a relevant fact $a_t$ on which the word $w_t$ will be based. If the word $w_t$ is supposed to be irrelevant to any fact, the NaF type is predicted. Unlike the fact embeddings, we learn the NaF embedding during training.

Predicting a fact is done in two steps. First, a fact key $k_{fact} \in \mathbb{R}^{D_a}$ is generated by $k_{fact} = f_{factkey}(h_t, e_k)$. Here, $e_k \in \mathbb{R}^{D_a}$ is the topic context embedding (or a subgraph embedding of the topic) which encodes information about what facts are available in the knowledge memory, so that the key generator adapts to changes in the knowledge memory. For example, if we remove a fact from the memory, without retraining, the fact-key generator should be aware of the absence of that information and thus should not generate a key vector for the removed fact. Although, in the experiments, we use mean-pooling (the average of all fact embeddings in the knowledge memory) to obtain $e_k$, one can also consider using the soft-attention mechanism (Bahdanau et al., 2014). For the fact-key generator $f_{factkey}$, we use an MLP with one hidden layer of ReLU nonlinearity.

Then, using the generated fact key $k_{fact}$, we perform a key-value lookup over the knowledge memory $F_k$ to predict a fact and retrieve its embedding $a_t$:

$$P(a_t \mid h_t) = \frac{\exp(k_{fact}^\top F_k[a_t])}{\sum_{a'} \exp(k_{fact}^\top F_k[a'])}, \qquad \hat{a}_t = \arg\max_{a_t \in F_k} P(a_t \mid h_t), \qquad a_t = F_k[\hat{a}_t].$$

Note that in order to perform the copy mechanism, we need to pick a single fact from the knowledge memory instead of using a weighted average of the fact embeddings as in the soft-attention mechanism.
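The key–value lookup above amounts to a dot-product softmax over the columns of the fact-embedding matrix followed by a hard argmax. A minimal NumPy sketch (array shapes and names are illustrative, not the authors' code):

```python
import numpy as np

def fact_lookup(k_fact, F_k):
    """Key-value lookup over the knowledge memory.
    k_fact: (D_a,) fact key; F_k: (D_a, n_facts) fact-embedding matrix."""
    scores = F_k.T @ k_fact                    # dot product with every fact embedding
    scores -= scores.max()                     # numerical stability
    p = np.exp(scores) / np.exp(scores).sum()  # P(a_t | h_t)
    idx = int(np.argmax(p))                    # hard selection (needed for copying)
    return p, idx, F_k[:, idx]                 # probabilities, fact index, a_t

rng = np.random.default_rng(0)
D_a, n_facts = 8, 5
p, idx, a_t = fact_lookup(rng.normal(size=D_a), rng.normal(size=(D_a, n_facts)))
print(idx, p.round(3))
```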
3) Knowledge-Copy Switch. Given the encoding of the context $h_t$ and the embedding of the extracted fact $a_t$, the model decides the source for the next word generation: either from the vocabulary or from the fact description by copy. As $z_t = 1$ if the word $w_t$ is in the vocabulary, we define the probability of selecting copy as:

$$\hat{z}_t = P(1 - z_t \mid h_t) = \mathrm{sigmoid}(f_{copy}(h_t, a_t)).$$

Here, $f_{copy}$ is an MLP with one ReLU hidden layer and a single linear output unit. For facts about attributes such as nationality or profession, the words in the fact description (e.g., "American" or "actor") are likely to be in the vocabulary, but for facts like the year of birth or father name, the model is likely to choose to copy.

4) Word Generation. Word $w_t$ is generated from the source indicated by the copy-switch $\hat{z}_t$ as follows:

$$w_t = \begin{cases} w^v_t \in V & \text{if } \hat{z}_t < 0.5, \\ w^o_t \in O_{a_t} & \text{otherwise.} \end{cases}$$

For a vocabulary word $w^v_t \in V$, we use the softmax function where each output dimension corresponds to a word in the vocabulary including UNK:

$$P(w^v_t = w \mid h_t) = \frac{\exp(k_{voca}^\top W[w])}{\sum_{w' \in V} \exp(k_{voca}^\top W[w'])},$$

where $k_{voca} \in \mathbb{R}^{D_w}$ is obtained by $f_{voca}(h_t, a_t)$, which is an MLP with a ReLU hidden layer and linear output units of dimension $D_w$.

For a knowledge word $w^o_t \in O_{a_t}$, we predict the position of the word in the fact description and then copy the word at the predicted position to the output. This is because, unlike with the traditional copy mechanism, our context words (i.e., the fact description) often consist of all unknown words and/or are short in length. Copying allows us not to rely on the word embeddings for the knowledge words. Instead, we learn position embeddings shared among all knowledge words. This makes sense because words in the fact description usually appear one by one in increasing order. Thus, given that the first symbol $o_1$ = "Michelle" was used in the previous time step, and prior to that other words such as "President" and "US" were also observed, the model can easily predict that it is time to select the second symbol, i.e., $o_2$ = "Obama".

For this copy-by-position, we first generate the position key $k_{pos} \in \mathbb{R}^{D_o}$ by a function $f_{poskey}(h_t, a_t)$, which is again an MLP with one hidden layer and linear outputs whose dimension is equal to the maximum length of the fact descriptions $N_{max} = \max_{a \in F} |O_a|$, where $F = \cup_k F_k$. Then, the $n$-th symbol $o_n \in O_{a_t}$ is chosen by

$$P(w^o_t = o_n \mid h_t, a_t) = \frac{\exp(k_{pos}^\top P[n])}{\sum_{n'} \exp(k_{pos}^\top P[n'])},$$

with $n'$ running from 0 to $|O_{a_t}| - 1$. Here, $P \in \mathbb{R}^{D_o \times N_{max}}$ is the position embedding matrix. Note that $N_{max}$ is typically a much smaller number (e.g., 20 in our experiments) than the size of the vocabulary. The position embedding matrix $P$ is learned during training.

Although in this paper we find that the simple position prediction performs well, we note that one could also consider a more advanced encoding such as one based on a convolutional network (Kim, 2014) to model the fact description. At test time, to compute $p(w_t \mid w_{<t})$, we can obtain $\{z_{<t}, a_{<t}\}$ from $\{w_{<t}\}$ and $F_k$ using the automatic labeling script, and perform the above inference process with hard decisions taken about $z_t$ and $a_t$ based on the model's predictions."}, {"section_index": "4", "section_name": "3.3 LEARNING", "section_text": "We train the model to maximize the log-likelihood of the observed words w.r.t. the model parameter $\theta$:

$$\theta^* = \arg\max_\theta \sum_k \log P_\theta(W_k \mid F_k).$$

Because, given $W_k$ and $F_k$, a sequence $Y_k = \{y_t = (w_t, z_t, a_t)\}_{t=1:|W_k|}$ is deterministically induced for each word $w_t$, the following equality is satisfied:

$$P_\theta(W_k \mid F_k) = P_\theta(Y_k \mid F_k), \qquad \log P_\theta(Y_k \mid F_k) = \sum_{t=1}^{|Y_k|} \log P_\theta(y_t \mid y_{1:t-1}, F_k).$$

Then, after omitting $F_k$ and $k$ for simplicity, we can rewrite the single-step conditional probability as

$$P_\theta(y_t \mid y_{1:t-1}) = P_\theta(w_t, a_t, z_t \mid h_t) = P_\theta(w_t \mid a_t, z_t, h_t)\, P_\theta(a_t \mid h_t)\, P_\theta(z_t \mid h_t).$$

We maximize the above objective using stochastic gradient optimization.
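The factorization above gives a simple per-timestep training loss: the sum of three negative log-likelihood terms. A minimal sketch (the probability vectors would come from the softmax/sigmoid heads defined above; variable names are illustrative):

```python
import numpy as np

def nklm_step_loss(p_word, w_t, p_fact, a_t, z_hat, z_t):
    """-log P(w|a,z,h) - log P(a|h) - log P(z|h) for one time step.
    p_word: distribution from the selected head (vocabulary or position softmax),
    p_fact: distribution over facts, z_hat: copy-switch probability P(copy|h)."""
    eps = 1e-12
    nll_word = -np.log(p_word[w_t] + eps)
    nll_fact = -np.log(p_fact[a_t] + eps)
    p_switch = z_hat if z_t == 0 else (1.0 - z_hat)  # z_t = 1 means vocabulary word
    nll_switch = -np.log(p_switch + eps)
    return nll_word + nll_fact + nll_switch

# Toy example: a copied word (z_t = 0) at position 1 of a 3-word fact description.
print(nklm_step_loss(np.array([0.1, 0.8, 0.1]), 1,
                     np.array([0.05, 0.9, 0.05]), 1, z_hat=0.7, z_t=0))
```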
An obstacle in developing the above model is the lack of a dataset where the text corpus is aligned with facts at the word level. To this end, we produced the WikiFacts dataset by aligning Wikipedia descriptions with corresponding Freebase facts. Because many Freebase topics provide a link to their corresponding topic in Wikipedia, we choose a set of topics for which both a Freebase entity and a Wikipedia description exist. In the experiments, we used a version called WikiFacts-FilmActor-v0.1 where the domain is restricted to /Film/Actor in Freebase.

For all object entity descriptions $\{O_{a_k}\}$ associated with $F_k$, we performed string matching to the Wikipedia description $W_k$. We used the summary part (first few paragraphs) of the Wikipedia page as the text to be modeled, but discarded topics for which the number of facts is greater than 1000 or the Wikipedia description is too short (< 3 sentences). For the string matching, we also used the synonyms and aliases provided by WordNet (Miller, 1995) and Freebase.

We augmented the fact set $F_k$ with the anchor facts $A_k$ whose relationship is all set to UnknownRelation. That is, observing that an anchor (words under a hyperlink) in Wikipedia descriptions has a corresponding Freebase entity as well as being semantically closely related to the topic in which the anchor is found, we make a synthetic fact of the form (Topic, UnknownRelation, Anchor). This potentially compensates for some missing facts in Freebase. Because we extract the anchor facts from the full Wikipedia page and they all share the same relation, it is more challenging for the model to use these anchor facts than the Freebase facts. As a result, for each word $w$ in the dataset, we have a tuple $(w, z_w, a_w, k_w)$. Here, $k_w$ is the topic where $w$ appears. We provide a summary of the dataset statistics in Table 1. The dataset will be available on a public webpage (https://bitbucket.org/skaasj/wikifact_filmactor).

Table 1: Statistics of the WikiFacts-FilmActor-v0.1 Dataset"}, {"section_index": "5", "section_name": "4.2 EXPERIMENTS", "section_text": "Setup. We split the dataset into 80/10/10 for train, validation, and test. As a baseline model, we use the RNNLM. For both the NKLM and the RNNLM, two-layer LSTMs with dropout regularization (Zaremba et al., 2014) are used. We tested models with different numbers of LSTM hidden units [200, 500, 1000], and report results from the 1000 hidden-unit model. For the NKLM, we set the symbol embedding dimension to 40 and the word embedding dimension to 400. Under this setting, the number of parameters in the NKLM is slightly smaller than that of the RNNLM. We used 100-dimension TransE embeddings for Freebase entities and relations, and concatenate the relation and object embeddings to obtain fact embeddings. We averaged all fact embeddings in $F_k$ to obtain the topic context embedding $e_k$. We unrolled the LSTMs for 30 steps and used minibatch size 20. We trained the models using stochastic gradient ascent with gradient clipping range [-5, 5]. The initial learning rate was set to 0.5 for the NKLM and 1.5 for the RNNLM, and decayed after every epoch by a factor of 0.98. We trained for 50 epochs and report the results chosen by the best validation set results.

Perplexity is the standard performance measure for language modeling. This, however, has a problem in evaluating language models for a corpus containing many named entities: a model can get good perplexity by accurately predicting UNK words. As an extreme example, when all words in a sentence are unknown words, a model predicting everything as UNK will get a good perplexity. Considering that unknown words provide virtually no useful information, this is clearly a problem in tasks such as question answering, dialogue modeling and knowledge language modeling.

To this end, we introduce a new evaluation metric, called the Unknown-Penalized Perplexity (UPP), and evaluate the models on this metric as well as the standard perplexity (PPL). Because the actual word underlying the UNK should be one of the out-of-vocabulary (OOV) words, in UPP we penalize the likelihood of unknown words as follows:

$$P_{UPP}(w_{unk}) = P(w_{unk}) / |V_{total} \setminus V_{voca}|.$$

Here, $V_{total}$ is the set of all unique words in the corpus, and $V_{voca}$ is the vocabulary used in the softmax. In other words, in UPP we assume that the OOV set is equal to $V_{total} \setminus V_{voca}$ and thus assign a uniform probability to OOV words. In another version, UPP-fact, we consider the fact that the RNNLM can also use the knowledge given to the NKLM to some extent, but with limited capability (because the model is not designed for it).
For this, we assume that the OOV set is equal to the total knowledge vocabulary of a topic $k$, i.e.,

$$P_{UPP\text{-}fact}(w_{unk}) = P(w_{unk}) / |O_k|,$$

where $O_k = \cup_i O_{a_{k,i}}$. In other words, by using UPP-fact, we assume that, for an unknown word, the RNNLM can pick one of the knowledge words with uniform probability.
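As an illustration, the following sketch computes PPL and UPP from per-token model probabilities (pure Python; token handling is simplified, and the names are ours, not the authors'):

```python
import math

def perplexity(probs, is_unk, n_oov=1, penalize_unk=False):
    """probs: model probability of each target token; is_unk: UNK indicators.
    With penalize_unk=True, every UNK probability is divided by the OOV-set
    size (|V_total \\ V_voca| for UPP, |O_k| for UPP-fact)."""
    log_sum = 0.0
    for p, unk in zip(probs, is_unk):
        if penalize_unk and unk:
            p = p / n_oov
        log_sum += math.log(p)
    return math.exp(-log_sum / len(probs))

probs = [0.2, 0.9, 0.05, 0.6]        # e.g., the 0.9 is a confident UNK prediction
is_unk = [False, True, False, False]
print(perplexity(probs, is_unk))                                   # standard PPL
print(perplexity(probs, is_unk, n_oov=30000, penalize_unk=True))   # UPP
```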
We describe the detailed results and discussion of the experiments in the captions of Tables 2, 3, and 4.

Observations from the experiment results. Our observations from the experiment results are as follows. (a) The NKLM outperforms the RNNLM in all three perplexity measures. (b) The copy mechanism is the key to the significant performance improvement. Without the copy mechanism, the NKLM still performs better than the RNNLM due to its usage of the fact information, but the improvement is not as significant. (c) The NKLM results in a much smaller number of UNKs (roughly half that of the RNNLM). (d) When no knowledge is available, the NKLM performs as well as the RNNLM. (e) KG embedding using TransE is an efficient way to initialize the fact embeddings. (f) The NKLM generates named entities in the provided facts whereas the RNNLM generates many more UNKs. (g) The NKLM shows its ability to adapt immediately to a change of the knowledge. (h) The standard perplexity is significantly affected by the prediction accuracy on the unknown words.

                  Validation              Test
Model             PPL    UPP    UPP-f    PPL    UPP    UPP-f    # UNK
RNNLM             39.4   97.9   56.8     39.4   107.0  58.4     23247
NKLM              27.5   45.4   33.5     28.0   48.7   34.6     12523
no-copy           38.4   93.5   54.9     38.3   102.1  56.4     29756
no-fact-no-copy   40.5   98.8   58.0     40.3   107.4  59.3     32671
no-TransE         48.9   80.7   59.6     49.3   85.8   61.0     13903

Table 2: We compare four different versions of the NKLM to the RNNLM on three different perplexity metrics. We used a 10K vocabulary. In no-copy, we disabled the knowledge-copy functionality, and in no-fact-no-copy, using topic knowledge is additionally disabled by setting all facts to NaF. Thus, no-fact-no-copy is very similar to the RNNLM. In no-TransE, we used random vectors instead of the TransE embeddings to initialize the KG entities. As shown, the NKLM shows the best performance in all cases. The no-fact-no-copy model performs similarly to the RNNLM as expected (slightly worse, partly because it has fewer model parameters than the RNNLM). As expected, no-copy performs better than no-fact-no-copy by using additional information from the fact embedding, but without the copy mechanism. In the comparison of the NKLM and no-copy, we can see the significant gain of using the copy mechanism to predict named entities. In the last column, we can also see that, with the copy mechanism, the number of unknown predictions decreases significantly. Lastly, we can see that the TransE embedding is important.

                  Validation              Test
Model             PPL    UPP    UPP-f    PPL    UPP    UPP-f    # UNK
NKLM_5k           22.8   48.5   30.7     23.2   52.0   31.7     19557
RNNLM_5k          27.4   108.5  47.6     27.5   118.3  48.9     34994
NKLM_10k          27.5   45.4   33.5     28.0   48.7   34.6     12523
RNNLM_10k         39.4   97.9   56.8     39.4   107.0  58.4     23247
NKLM_20k          33.4   45.9   37.9     34.7   49.2   39.7     9677
RNNLM_20k         57.9   99.5   72.1     59.3   108.3  75.5     13773
NKLM_40k          41.4   49.0   44.4     43.6   52.7   47.1     5809
RNNLM_40k         82.4   107.9  92.3     86.4   116.9  97.9     9009

Table 3: The NKLM and the RNNLM are compared for vocabularies of four different sizes [5K, 10K, 20K, 40K]. As shown, in all cases the NKLM significantly outperforms the RNNLM. Interestingly, for the standard perplexity (PPL), the gap between the two models increases as the vocabulary size increases, while for UPP the gap stays at a similar level regardless of the vocabulary size. This tells us that the standard perplexity is significantly affected by the UNK predictions, because with UPP the contribution of UNK predictions to the total perplexity is very small. Also, from the UPP value for the RNNLM, we can see that it initially improves when the vocabulary size is increased, as it can cover more words, but decreases back when the vocabulary size is largest (40K) because the rare words are added last to the vocabulary.

Table 4: Sampled descriptions. Given the warm-up phrases, we generate samples from the NKLM and the RNNLM. We denote the copied knowledge words by [word] and the UNK words by <unk>. Overall, the RNNLM generates many UNKs (we used a 10K vocabulary) while the NKLM is capable of generating named entities even if the model has not seen some of the words at all during training. In the first case, we found that the generated symbols (words in brackets) conform to the facts of the topic (Louise Allbritton) except that she actually died in Mexico, not in Oklahoma. (We found that the place_of_death fact was missing.) While she is an actress, the model generated the word [Actor]. This is because in Freebase there exists only /profession/actor but no /profession/actress. It is also noteworthy that the NKLM fails to use the gender information provided by facts; the NKLM uses "he" instead of "she" although the fact /gender/female is available. From this, we see that if a fact is not detected (i.e., NaF), the statistical co-occurrence governs the information flow. Similarly, in other samples, the NKLM generates movie titles (Un Taxi Pour Aouzou), a band name (Three Days Grace) and a place of birth (Los Angeles). In addition, to see the NKLM's ability to adapt to knowledge updates without retraining, we changed the fact /place_of_birth/Oklahoma to /place_of_birth/Chicago and found that the NKLM replaces "Oklahoma" by "Chicago" while keeping other words the same.

In this paper, we presented a novel Neural Knowledge Language Model (NKLM) that brings the symbolic knowledge from a knowledge graph into the expressive power of RNN language models.
Thus, one needs to consider the standard perplexity carefully as a metric for knowledge-related language models. The NKLM significantly outperforms the RNNLM in terms of perplexity and generates named entities which are not observed during training, as well as immediately adapting to changes in knowledge. We believe that the WikiFacts dataset introduced in this paper can be useful in other knowledge-related language tasks as well. In addition, the Unknown-Penalized Perplexity, introduced in this paper in order to resolve the limitation of the standard perplexity, can be useful in evaluating other language tasks. The task that we investigated in this paper is limited in the sense that we assume that the true topic of a given description is known. Relaxing this assumption by making the model search for proper topics on-the-fly will make the model more practical. We believe that there are many more open research challenges related to knowledge language models."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Alberto Garcia-Duran, Caglar Gulcehre, Chinnadhurai Sankar, Iulia Serban and Sarath Chandar for feedback and discussions, as well as the developers of Theano (Bastien et al., 2012); NSERC, CIFAR, Samsung and Canada Research Chairs for funding; and Compute Canada for computing resources."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. In Journal of Machine Learning Research, 2003.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pp. 1247-1250. ACM, 2008.

Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. Learning structured embeddings of knowledge bases. In AAAI 2011, 2011.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Asli Celikyilmaz, Dilek Hakkani-Tur, Panupong Pasupat, and Ruhi Sarikaya. Enriching word embeddings using knowledge graph for semantic tagging in conversational dialog systems. In 2015 AAAI Spring Symposium Series, 2015.

Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequence-to-sequence learning. CoRR, abs/1603.06393, 2016.

Kelvin Gu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. EMNLP 2015, 2015.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pp. 2787-2795, 2013.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. ACL 2015, 2015.

Yoon Kim. Convolutional neural networks for sentence classification. EMNLP 2014, 2014.

Teng Long, Ryan Lowe, Jackie Chi Kit Cheung, and Doina Precup. Leveraging lexical resources for learning entity embeddings in multi-relational data. 2016.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH 2010, volume 2, pp. 3, 2010.

George A Miller.
Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41, 1995.

Andriy Mnih and Yee Whye Teh. A fast and simple algorithm for training neural probabilistic language models. ICML 2012, 2012.

Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs: From multi-relational link prediction to automated knowledge graph construction. arXiv preprint arXiv:1503.00759, 2015.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. NIPS 2015, 2015.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. ICLR 2015, 2015.

Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. ICLR 2016, 2016.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

Andriy Mnih and Geoffrey E Hinton. A scalable hierarchical distributed language model. In Advances in neural information processing systems, pp. 1081-1088, 2009.

Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.

[Figure 2 heatmap omitted; generated words: "2008 was an american [Actor]. he was born in [Los Angeles] california"; fact rows: <NaF>, profession-Screenwriter, profession-Actor, place_of_birth-Los Angeles, profession-Film Producer, topic_itself-Rory Calhoun, unk_rel-Spellbound, unk_rel-California, unk_rel-Santa Cruz.]

Figure 2: A heatmap of an example sentence generated by the NKLM given the warm-up "Rory Calhoun ( august 8, 1922 april 28". The first row shows the probability of the knowledge-copy switch (Equation 5 in Section 3.1). The bottom heatmap shows the state of the topic memory at each time step (Equation 2 in Section 3.1). In particular, this topic has 8 facts and an additional <NaF> fact. For the first six time steps, the model retrieves <NaF> from the knowledge memory, the copy-switch is off, and the words are generated from the general vocabulary. At the next time step, the model gives higher probability to three different profession facts: "Screenwriter", "Actor" and "Film Producer". The fact "Actor" has the highest probability, the copy-switch is higher than 0.5, and therefore "Actor" is copied as the next word. Moreover, we see that the model correctly retrieves the place-of-birth fact and outputs "Los Angeles". After that, the model still predicts the place-of-birth fact, but the copy-switch decides that the next word should come from the general vocabulary, and outputs "California".

[Figure 3 heatmap omitted; generated words: "an english [Actor] he was born in [Oklahoma] and died in Oklahoma he was married to [Charles Collingwood]"; fact rows: <NaF>, education.institution-University of Oklahoma, performance.film-Son of Dracula, location.people_born_here-Oklahoma City, performance.film-The Egg and I, marriage.type_of_union-Marriage, marriage.spouse-Charles Collingwood, profession-Actor, topic_itself-Louise Allbritton, unk_rel-Universal Studios, unk_rel-Pasadena Playhouse, unk_rel-Pittsburgh, unk_rel-Sitting Pretty, unk_rel-Hollywood, unk_rel-World War II, unk_rel-United Service Organizations.]

Figure 3: An example sentence generated by the NKLM given the warm-up "Louise Allbritton ( 3 july <unk> february 1979 ) was". We see that the model correctly retrieves and outputs the profession ("Actor"), place of birth ("Oklahoma"), and spouse ("Charles Collingwood") facts. However, the model makes a mistake by retrieving the place-of-birth fact in a place where the place-of-death fact is supposed to be used.
This is probably because the place-of-death fact is missing in this topic memory, and the model then searches for a fact about location, which is somewhat encoded in the place-of-birth fact. In addition, Louise Allbritton was a woman, but the model generates the male profession "Actor" and the male pronoun "he". The "Actor" is generated because there is no "Actress" representation in Freebase."}]
rJ6DhP5xe | [{"section_index": "0", "section_name": "GENERALIZABLE FEATURES FROM UNSUPERVISED LEARNING", "section_text": "Mehdi Mirza & Aaron Courville & Yoshua Bengio
Universite de Montreal
{memirzamo, aaron.courville, yoshua.umontreal}@gmail.com"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Humans learn a predictive model of the world and use this model to reason about future events and the consequences of actions. In contrast to most machine predictors, we exhibit an impressive ability to generalize to unseen scenarios and reason intelligently in these settings. One important aspect of this ability is physical intuition (Lake et al., 2016). In this work, we explore the potential of unsupervised learning to find features that promote better generalization to settings outside the supervised training distribution. Our task is predicting the stability of towers of square blocks. We demonstrate that an unsupervised model, trained to predict future frames of a video sequence of stable and unstable block configurations, can yield features that support extrapolating stability prediction to block configurations outside the training set distribution."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Humans learn a tremendous amount of knowledge about the world with almost no supervision and can construct a predictive model of the world. We use this model of the world to interact with our environment. As also argued by Lake et al. (2016), one of the core ingredients of human intelligence is intuitive physics. Children can learn and predict some of the common physical behaviors of our world just by observing and interacting, without any direct supervision, and they form a sophisticated predictive model of the physical environment: they expect the world to behave based on their mental model and have reasonable expectations about unseen situations (Teglas et al., 2011).

Despite impressive progress in the last few years in the training of supervised models, we have not yet quite been able to achieve similar results in unsupervised learning, and it remains one of the challenging research areas in the field. The full potential of the application of unsupervised learning is yet to be realized.

In this work, we leverage unsupervised learning to train a predictive model over sequences. We use the imagined and predicted future sequence data to help a physical environment prediction model generalize better to unseen settings.

More specifically, we focus on the task of predicting whether a tower of square bricks will fall or not, as introduced by Lerer et al. (2016). They showed that a deep convolutional neural network could predict the fall of the towers with super-human accuracy. But despite the strengths of convolutional neural networks, Zhang et al. (2016) showed how deep neural networks have a hard time generalizing to novel situations in the same way humans or simulation-based models can. In this work, we show that deep neural networks are capable of generalizing to novel situations through a form of unsupervised learning. The core idea is to observe the world without any supervision and build a future predictive model of it, and at a later stage leverage and utilize the imagined future to train a better fall prediction model.

Video generation is one active area of research with many applications, and many of the recent works have been using some of the state-of-the-art neural networks for video generation.
Srivastava et al. (2015) use LSTM recurrent neural networks to train an unsupervised future predictive model for video generation, and here we use a very similar architecture, as described in Section 4.1. Mathieu et al. (2015) combine the common mean-squared-error objective function with an adversarial training cost in order to generate sharper samples. Lotter et al. (2016) introduce another form of unsupervised video prediction training scheme that manages to predict future events such as the direction of the turn of a car, which could have potential use in the training of self-driving cars.

Unsupervised pre-training (Hinton & Salakhutdinov, 2006; Bengio et al., 2007) was historically used as an initialization scheme for supervised learning. But since Krizhevsky et al. (2012), many other regularization (Srivastava et al., 2014), weight initialization (Glorot & Bengio, 2010) and normalization (Ioffe & Szegedy, 2015) techniques and architecture designs (He et al., 2015) have been introduced that diminish the effect of pre-training. Although pre-training could still be useful in data-scarce domains, there are many other ways and applications in which unsupervised learning remains very interesting, and it is a very active area of research. Just to name a few applications: semi-supervised learning (Kingma et al., 2014; Salimans et al., 2016; Dumoulin et al., 2016) and super-resolution (Sonderby et al., 2016).

Model-based reinforcement learning (RL) is an active research area that holds the promise of making RL agents less data hungry. Learning agents could explore, learn in an unsupervised way about their world, and learn even more by dreaming about future states. We believe that action-conditioned video prediction models are an important ingredient for this task. Fragkiadaki et al. (2015) learn the dynamics of billiards balls by supervised training of a neural net. Action-conditioned video prediction models have been applied to an Atari-playing agent (Oh et al., 2015) as well as robotics (Finn et al., 2016; Finn & Levine, 2016).

Recent datasets for predicting the stability of block configurations (Lerer et al., 2016; Zhang et al., 2016) only provide binary labels of stability, and exclude the video simulation of the block configuration. We, therefore, construct a new dataset, with a similar setup to Lerer et al. (2016) and Zhang et al. (2016), that includes this video sequence. We use a Javascript-based physics engine to generate the data.

We construct towers made of 3-5 square blocks. To sample a random tower configuration, we uniformly shift each block in its x and y position such that it touches the block below. Because taller towers are more unstable, this shift is smaller when we add more blocks. To simplify our learning setting, we balance the number of stable and unstable block configurations. For each tower height we create 8000, 1000 and 3000 video clips for the training, validation, and test set, respectively. The video clips are sub-sampled in time to include more noticeable changes in the block configurations. We decided to keep 39 frames, which with our sub-sampling rate was enough time for unstable towers to collapse. Each video frame is an RGB image of size 64x64. In addition to the binary stability label, we include the number of blocks that fell down.
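As an illustration of this sampling procedure, the sketch below draws random block offsets and applies a simple center-of-mass heuristic as a stand-in stability check. The paper's actual labels come from simulating the full video in a physics engine; the shift range and the heuristic here are our assumptions for illustration only:

```python
import numpy as np

def sample_tower(n_blocks, rng, block_size=1.0):
    """Stack blocks with uniform horizontal shifts; taller towers get smaller shifts."""
    max_shift = block_size / n_blocks          # assumption: shrink shift with height
    return np.cumsum(rng.uniform(-max_shift, max_shift, size=n_blocks))

def is_stable(x, block_size=1.0):
    """Heuristic: the center of mass of every upper sub-tower must stay over
    the supporting block below (the paper uses a physics simulation instead)."""
    for i in range(len(x) - 1):
        com_above = x[i + 1:].mean()
        if abs(com_above - x[i]) > block_size / 2:
            return False
    return True

rng = np.random.default_rng(0)
towers = [sample_tower(4, rng) for _ in range(1000)]
print(sum(is_stable(t) for t in towers) / len(towers))  # fraction labeled stable
```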
"}, {"section_index": "3", "section_name": "4 ARCHITECTURE", "section_text": "The core idea of this paper is to use future state predictions of a generative video model to enhance the performance of a supervised prediction model. Our architecture consists of two separate modules:

Frame predictor A generative model to predict future frames of a video sequence. This model is trained to either generate the last frame or the complete sequence of frames.

Stability predictor In the original task, stability is predicted from a static image of a block configuration. We explore whether, in addition to the initial configuration, the last-frame prediction of our unsupervised model improves the performance of the stability prediction.

In the following sections, we explore several different architectures for both modules."}, {"section_index": "4", "section_name": "4.1 FUTURE FRAME PREDICTION", "section_text": "We consider two different model architectures for this task. The first one, named ConvDeconv, only takes the first frame as input and predicts the last frame of the video sequence. The architecture consists of a block of convolution and max-pooling layers. To compensate for the dimensionality reduction of the max-pooling layers, we have a fully-connected layer following the last max-pooling layer, and finally a subsequent block of deconvolution layers with the output size the same as the model input size. All activation functions are ReLU (Nair & Hinton, 2010). See Table 1 for more details of the architecture. The objective function is the mean squared error between the generated last frame and the ground-truth frame; as a result, this training does not require any labels. We also experimented with an additional adversarial cost as in Mathieu et al. (2015) but did not observe any improvement for the stability prediction task. We hypothesize that although the adversarial objective function helps to produce sharper images, such improved sample quality does not transfer to better stability prediction. Figure 1 shows a few examples of the generated data on the test set. The mean squared error is minimized using the Adam optimizer (Kingma & Ba, 2014) and we use early stopping when the validation loss does not improve for 100 epochs.

We extend this ConvDeconv model in a second architecture, named ConvLSTMDeconv, to predict the next frame at each timestep. This model is composed of an LSTM architecture. The same convolutional and deconvolutional blocks as in ConvDeconv are utilized, respectively, to input the current frame to the LSTM transition and to output the next frame from the current LSTM state. The details of the ConvLSTMDeconv model architecture are shown in Table 2, and Figure 3 shows the diagram of both architectures. During training, at each time step the ground-truth data feeds into the model, but during test time only the initial time step gets the first frame from the data, and for subsequent time steps the frames generated in the previous time steps feed into the model. This is a similar setup to recurrent neural network language models (Mikolov, 2012), and it is necessary because during test time we only have access to the first frame. As before, the model is trained to predict the next frame at each time step by minimizing the predictive mean squared error using the Adam optimizer and early stopping. For training, we further subsample in the time dimension and reduce the sequence length to 5 time steps. Figure 2 shows some sample generated sequences from the test set.
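The train-time versus test-time difference described above (ground-truth frames in, versus feeding back the model's own outputs) can be captured in a few lines. A minimal sketch with a stand-in one-step predictor (the dummy model and names are ours):

```python
import numpy as np

def rollout(step_fn, first_frame, n_steps, ground_truth=None):
    """Generate a sequence frame by frame.
    Training uses teacher forcing (ground_truth given); at test time the
    model's own prediction from step t feeds step t+1."""
    frames, x = [], first_frame
    for t in range(n_steps):
        pred = step_fn(x)
        frames.append(pred)
        x = ground_truth[t] if ground_truth is not None else pred
    return frames

step_fn = lambda x: 0.9 * x             # dummy one-step predictor
x0 = np.ones((64, 64, 3))
train_seq = rollout(step_fn, x0, 4, ground_truth=[x0] * 4)  # teacher forcing
test_seq = rollout(step_fn, x0, 4)                          # free-running
print(train_seq[-1].mean(), test_seq[-1].mean())            # 0.9 vs 0.9**4
```

Free-running generation compounds its own errors, which is consistent with the blurrier ConvLSTMDeconv samples the paper reports later.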
Figure 1: Samples from the ConvDeconv model. The first and second rows show the first and last frames, respectively, from the test data, and the third row shows the generated last-frame samples.

Table 1: ConvDeconv model architecture. FC stands for "Fully Connected".

Layer  Type     Output channels/dimensions    Kernel/Pool size
1      Conv     64                            3 x 3
2      MaxPool  64                            4 x 4
3      Conv     128                           3 x 3
4      MaxPool  64                            3 x 3
5      Conv     64                            3 x 3
6      MaxPool  64                            3 x 3
7      FC       64 x 64 x 16 = 65536          -
8      DeConv   64                            3 x 3
9      DeConv   128                           3 x 3
10     DeConv   64                            3 x 3
11     DeConv   3                             3 x 3

Table 2: ConvLSTMDeconv model architecture. FC stands for "Fully Connected".

Layer  Type      Output channels/dimensions   Kernel/Pool size
1      Conv      64                           3 x 3
2      MaxPool   64                           4 x 4
3      Conv      128                          3 x 3
4      MaxPool   64                           3 x 3
5      Conv      64                           3 x 3
6      MaxPool   64                           3 x 3
7      FC LSTM   2000                         -
8      FC        64 x 64 x 3                  -
9      DeConv    64                           3 x 3
10     DeConv    64                           3 x 3
11     DeConv    3                            3 x 3

"}, {"section_index": "5", "section_name": "4.2 STABILITY PREDICTION", "section_text": "We have two supervised models for stability prediction. The first one is a baseline that takes as input the first frame and predicts the fall of the tower. For this model we use the 50-layer ResNet architecture from He et al. (2016). We trained the baseline model on each of the different tower heights 3, 4, 5. We call it the single model and name the experiments 3S, 4S, 5S, respectively, for the number of blocks it was trained on. The second model is the one using the generated data: it takes as input the first frame and the generated last frame. It consists of two 50-layer ResNet blocks in parallel, one for the first frame and one for the last frame, and the last hidden layers of both models are concatenated together before a logistic regression layer (or softmax in the case of non-binary labels). Both ResNet blocks share parameters. Based on whether the generated data comes from the ConvDeconv model or the ConvLSTMDeconv model, we label the experiments as 3CD, 4CD, 5CD and 3CLD, 4CLD, 5CLD, respectively.

Figure 3: Different model architectures. The first two on the left are ConvDeconv and ConvLSTMDeconv, described in Section 4.1. The two on the right are the models used for supervised fall prediction, described in Section 4.2. The single-frame predictor is the baseline model, and the double-frame predictor is the model that uses the generated data.

Figure 2: Samples from the ConvLSTMDeconv model. Each row is a different sample. The left sequence is the data and the right sequence is the generated data. Note that during generation the model only sees the first frame and for subsequent time steps uses its own output from the previous timestep.

None of the models are pre-trained, and all the weights are randomly initialized. As in Section 4.1, we use Adam, and we stopped the training when the validation accuracy had not improved for 100 epochs. All images are contrast-normalized independently, and we augment our training set using random horizontal flips of the images and by randomly changing the contrast and brightness.
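A sketch of the augmentations mentioned above (horizontal flip plus brightness/contrast jitter; the jitter ranges are our choice and are not specified in the paper):

```python
import numpy as np

def augment(img, rng):
    """img: float array in [0, 1] of shape (H, W, 3)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                  # random horizontal flip
    contrast = rng.uniform(0.8, 1.2)        # assumed jitter ranges
    brightness = rng.uniform(-0.1, 0.1)
    return np.clip(contrast * (img - 0.5) + 0.5 + brightness, 0.0, 1.0)

rng = np.random.default_rng(0)
batch = rng.random((20, 64, 64, 3))
augmented = np.stack([augment(im, rng) for im in batch])
print(augmented.shape, augmented.min(), augmented.max())
```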
"}, {"section_index": "6", "section_name": "5 RESULTS", "section_text": "Figure 4 shows the classification results for each of the 9 models described in Section 4.2, tested on 3, 4 and 5 blocks. Each test case is shown with a different color, and Table 3 shows the numerical values of all 27 test case results. In almost all cases the generated data improves the generalization performance on test cases with a different number of blocks than the model was trained on. For comparison, we have included results from Zhang et al. (2016) in Table 4. Since Zhang et al. (2016) only report results when the models are trained on towers of 4 blocks, the corresponding results are in the second block row of Table 3, models 4S, 4CD and 4CLD. Even though the datasets are not the same, it can be observed that the range of performance of the baseline 4S model is consistent with the range of performance of the AlexNet model in Table 4. It can be seen that the results of the 4CD model are significantly better than both the IPE and human performance reported in Zhang et al. (2016), while the baselines have similar performances.

One observation is that the improvements are more significant when testing on scenarios with more bricks than seen during training. The generated data also improves the reverse case, i.e., fewer bricks than during training, but the improvement is not as significant. It is worth mentioning that testing on a lower number of bricks is a much harder problem, as pointed out in Zhang et al. (2016) too. In their case, the prediction performance was almost random when going from 4 blocks to 3 blocks, which is not the case in our experiments. One possible explanation for the performance loss is that a balanced tower with fewer blocks corresponds to an unstable configuration for a tower with more blocks, e.g., a tower with 3 blocks is classified as unstable by a prediction model trained on towers of 5 blocks. One solution could be to train these models to predict how many blocks have fallen instead of a binary stability label. Because we have access to this data in our dataset, we explored the same experiments using these labels. Unfortunately, we did not observe any significant improvement. The main reason could be that the distribution of the number of fallen blocks is extremely unbalanced. It is hard to collect data with a balanced number of fallen blocks because some configurations are very unlikely, e.g., a tower of 5 blocks where only two blocks fall (the majority of the time the whole tower collapses).

Another observation is that models that use ConvDeconv-generated data performed slightly better than those that use ConvLSTMDeconv. As seen in Figure 2, the samples in the ConvLSTMDeconv case are noisier and less sharp than those in Figure 1. This could be caused by the fact that after the first time step, the model output from the previous time step is used as input for the next time step, so the samples degenerate the longer the sequence is.

Figure 4: Accuracy in percentage for each of the 9 models (3S, 3CD, 3CLD, 4S, 4CD, 4CLD, 5S, 5CD, 5CLD) tested on test sets with a different number of blocks. Each color represents the number of blocks that the model was tested on. 50% is chance.

Data augmentation was crucial to increase the generalization performance of the stability prediction, e.g., the 5CD model tested on 4 bricks achieved only 50% without data augmentation while reaching 74.5% accuracy with data augmentation.
This significant improvement from data augmentation could be partly because our dataset was relatively small.

Model  Train set  Test set  Accuracy
3S     3          3         91.87 %
3S     3          4         66.1 %
3S     3          5         63.7 %
3CD    3          3         95.5 %
3CD    3          4         92.63 %
3CD    3          5         89 %
3CLD   3          3         93.3 %
3CLD   3          4         90.33 %
3CLD   3          5         84.30 %
4S     4          3         52.5 %
4S     4          4         87 %
4S     4          5         75.53 %
4CD    4          3         80.53 %
4CD    4          4         92.5 %
4CD    4          5         89.1 %
4CLD   4          3         65.53 %
4CLD   4          4         91.20 %
4CLD   4          5         84.20 %
5S     5          3         59.26 %
5S     5          4         67.23 %
5S     5          5         86.50 %
5CD    5          3         58.27 %
5CD    5          4         74.50 %
5CD    5          5         88.53 %
5CLD   5          3         58.90 %
5CLD   5          4         74.50 %
5CLD   5          5         88.53 %

Table 3: The results from our experiments.

Model    Train set  Test set  Accuracy
AlexNet  4          3         51 %
AlexNet  4          4         95 %
AlexNet  4          5         78.5 %
IPE      N/A        3         72 %
IPE      N/A        4         64 %
IPE      N/A        5         56 %
Human    N/A        3         76.5 %
Human    N/A        4         68.5 %
Human    N/A        5         59 %

Table 4: The results reported in Zhang et al. (2016). We emphasize that these results are on a different dataset."}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "In this paper, we showed that data generated from an unsupervised model can help a supervised learner generalize to unseen scenarios. We argue that this ability of transfer learning and generalization by observing the world could be one of the ingredients for constructing a model of the world that could have applications in many tasks, such as model-based RL. We aim to extend this work in the future by looking at videos of robots manipulating objects and being able to predict their failure beforehand, which could help an RL agent explore more intelligently."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Harm de Vries and Laurent Dinh for their help and feedback in writing the paper, and also thank Adam Lerer and Jiajun Wu for sharing their dataset. We thank NSERC, CIFAR, IBM, Canada Research Chairs, Google and Samsung for funding."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. Greedy layer-wise training of deep networks. Advances in neural information processing systems, 19:153, 2007.

Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.

Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.

Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. arXiv preprint arXiv:1603.01312, 2016.

Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.

Tomas Mikolov. Statistical language models based on neural networks. Presentation at Google, Mountain View, 2nd April, 2012.

Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in atari games. In Advances in Neural Information Processing Systems, pp. 2863-2871, 2015.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Casper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortised map inference for image super-resolution.
arXiv preprint arXiv:1610.04490, 2016.

Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using lstms. CoRR, abs/1502.04681, 2, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Erno Teglas, Edward Vul, Vittorio Girotto, Michel Gonzalez, Joshua B Tenenbaum, and Luca L Bonatti. Pure reasoning in 12-month-old infants as probabilistic inference. Science, 332(6033):1054-1059, 2011.
S1vyujVye | [{"section_index": "0", "section_name": "DEEP UNSUPERVISED LEARNING THROUGH SPATIAL CONTRASTING", "section_text": "Elad Hoffer
Technion - Israel Institute of Technology, Haifa, Israel
nailon@cs.technion.ac.il"}, {"section_index": "1", "section_name": "Itay Hubara", "section_text": "Technion - Israel Institute of Technology, Haifa, Israel
itayh@tx.technion.ac.il"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Convolutional networks have marked their place over the last few years as the best performing model for various visual tasks. They are, however, most suited for supervised learning from large amounts of labeled data. Previous attempts have been made to use unlabeled data to improve model performance by applying unsupervised techniques. These attempts require different architectures and training methods. In this work we present a novel approach for unsupervised training of convolutional networks that is based on contrasting between spatial regions within images. This criterion can be employed within conventional neural networks and optimized using standard techniques such as SGD and backpropagation, thus complementing supervised methods."}, {"section_index": "3", "section_name": "1 INTRODUCTION", "section_text": "For the past few years, convolutional networks (ConvNets, CNNs) (LeCun et al., 1998) have proven themselves a successful model for vision-related tasks (Krizhevsky et al., 2012; Mnih et al., 2015; Pinheiro et al., 2015; Razavian et al., 2014). A convolutional network is composed of multiple convolutional and pooling layers, followed by fully-connected affine transformations. As with other neural network models, each layer is typically followed by a non-linearity transformation such as a rectified-linear unit (ReLU). A convolutional layer is applied by cross-correlating an image with a trainable weight filter. This stems from the assumption of stationarity in natural images, which means that parameters learned for one local region in an image can be shared for other regions and images.

Deep learning models, including convolutional networks, are usually trained in a supervised manner, requiring large amounts of labeled data (ranging between thousands and millions of examples per class for classification tasks) in almost all modern applications. These models are optimized using a variant of stochastic gradient descent (SGD) over batches of images sampled from the whole training dataset together with their ground-truth labels. Gradient estimation for each one of the optimized parameters is done by back-propagating the objective error from the final layer towards the input. This is commonly known as "backpropagation" (Rumelhart et al.).

In early works, unsupervised training was used as part of a pre-training procedure to obtain an effective initial state of the model. The network was later fine-tuned in a supervised manner, as displayed by Hinton (2007). Such unsupervised pre-training procedures were later abandoned, since they provided no apparent benefit over other initialization heuristics in more careful, fully supervised training regimes. This led to the de-facto almost exclusive usage of neural networks in supervised environments.

In this work we will present a novel unsupervised learning criterion for convolutional networks based on comparison of features extracted from regions within images. Our experiments indicate that by
Our experiments indicate that by\nItay Hubara\nTechnion - Israel Institute of Technology Haifa, Israel.\nitayh@tx.technion.ac.il"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "using this criterion to pre-train networks we can improve their performance and achieve state-of the-art results.\nUsing unsupervised methods to improve performance have been the holy grail of deep learning for the last couple of years and vast research efforts have been focused on that. We hereby give a short overview of the most popular and recent methods that tried to tackle this problem..\nAutoEncoders and reconstruction loss These are probably the most popular models for unsu. pervised learning using neural networks, and ConvNets in particular. Autoencoders are NNs which aim to transform inputs into outputs with the least possible amount of distortion. An Autoencode. is constructed using an encoder G(x; w1) that maps an input to a hidden compressed representation followed by a decoder F(y; w2), that maps the representation back into the input space. Mathemat. ically, this can be written in the following general form:.\nThis allows an efficient training procedure using the aforementioned backpropagation and SGD tech. niques. Over the years autoencoders gained fundamental role in unsupervised learning and many modification to the classic architecture were made. Ng(2011) regularized the latent representatior. to be sparse, Vincent et al.[(2008) substituted the input with a noisy version thereof, requiring the model to denoise while reconstructing. Kingma et al. (2014) obtained very promising results witl variational autoencoders (VAE). A variational autoencoder model inherits typical autoencoder ar chitecture, but makes strong assumptions concerning the distribution of latent variables. They use. variational approach for latent representation learning, which results in an additional loss componen. which required a new training algorithm called Stochastic Gradient Variational Bayes (SGVB). VAI assumes that the data is generated by a directed graphical model p(x[z) and require the encoder tc learn an approximation qw1 (z[x) to the posterior distribution pw2([x) where w1 and w2 denote the. parameters of the encoder and decoder. The objective of the variational autoencoder in that case has. the following form:\nL(w1,W2,x) =-DkL(qw1(z[x)[[pw(z)) +E l0g Pw2(x|z)\nRecently, a stacked set of denoising autoencoders architectures showed promising results in both semi-supervised and unsupervised tasks. A stacked what-where autoencoder byZhao et al.[(2015] computes a set of complementary variables that enable reconstruction whenever a layer implements a many-to-one mapping. Ladder networks by Rasmus et al.[(2015) - use lateral connections and layer-wise cost functions to allow the higher levels of an autoencoder to focus on invariant abstraci features.\nExemplar Networks: The unsupervised method introduced byDosovitskiy et al.(2014) takes a different approach to this task and trains the network to discriminate between a set of pseudo-classes.. Each pseudo-class is formed by applying multiple transformations to a randomly sampled image patch. The number of pseudo-classes can be as big as the size of the input samples. This criterion ensures that different input samples would be distinguished while providing robustness to the applied. transformations. 
In this work we will explore an alternative method with a similar motivation.

Context prediction Another method for unsupervised learning by context was introduced by Doersch et al. (2015). This method uses an auxiliary criterion of predicting the location of an image patch given another from the same image. This is done by classification to 1 of 9 possible locations. Although the work of Doersch et al. (2015) and ours both use patches from an image to perform unsupervised learning, the methods are quite different. Whereas the former used a classification criterion over the spatial location of each patch within a single image, our work is concerned with comparing patches from several images to each other. We claim that this encourages discriminability between images (which we feel to be an important aspect of feature learning), and was not an explicit goal in previous work.

Adversarial Generative Models: This is a recently introduced model that can be used in an unsupervised fashion (Goodfellow et al., 2014). Adversarial Generative Models use a set of networks: one trained to discriminate between data sampled from the true underlying distribution (e.g., a set of images), and a separate generative network trained to be an adversary trying to confuse the first network. By propagating the gradient through the paired networks, the model learns to generate samples that are distributed similarly to the source data. As shown by Radford et al. (2015), this model can create useful latent representations for subsequent classification tasks.

Sampling Methods: Methods for training models to discriminate between a very large number of classes often use a noise contrasting criterion. In these methods, roughly speaking, the posterior probability $P(t|y_t)$ of the ground-truth target $t$ given the model output on an input sample from the true distribution $y_t = F(x_t)$ is maximized, while the probability $P(t|y_n)$ given a noise measurement $y_n = F(n)$ is minimized. This was successfully used in the language domain to learn unsupervised representations of words. The most noteworthy case is the word2vec model introduced by Mikolov et al. (2013). When using this setting in language applications, a natural contrasting noise is a smooth approximation of the unigram distribution. A suitable contrasting distribution is less obvious when data points are sampled from a high-dimensional continuous space, such as the case of image patches.

The majority of unsupervised optimization criteria currently used are based on variations of reconstruction losses. One limitation of this fact is that a pixel-level reconstruction is non-compliant with the idea of a discriminative objective, which is expected to be agnostic to low-level information in the input. In addition, it is evident that MSE is not best suited as a measurement to compare images; consider, for example, the possibly large square error between an image and a single-pixel-shifted copy of it.
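To make the last point concrete, the following short NumPy check (ours, for illustration) shows that a one-pixel shift of an image with sharp structure already produces a huge MSE, while an obviously noisy version of the same image scores far better:

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, ::4] = 1.0                              # thin vertical stripes (sharp edges)

shifted = np.roll(img, 1, axis=1)              # one-pixel horizontal shift
noisy = np.clip(img + rng.normal(0, 0.1, img.shape), 0, 1)

print("MSE(img, shifted):", np.mean((img - shifted) ** 2))  # 0.5: every edge misaligned
print("MSE(img, noisy):  ", np.mean((img - noisy) ** 2))    # ~0.01
```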
Another problem with recent approaches such as Rasmus et al. (2015) and Zeiler et al. (2010) is their need to extensively modify the original convolutional network model. This leads to a gap between the unsupervised methods and the state-of-the-art supervised models for classification, which can hurt future attempts to reconcile them in a unified framework, as well as to efficiently leverage unlabeled data with otherwise supervised regimes."}, {"section_index": "4", "section_name": "LEARNING BY COMPARISONS", "section_text": "The most common way to train NNs is by defining a loss function between the target values and the network output. Learning by comparison approaches the supervised task from a different angle. The main idea is to use distance comparisons between samples to learn useful representations. For example, we consider relative and qualitative examples of the form "$x_1$ is closer to $x_2$ than $x_1$ is to $x_3$". Using a comparative measure with a neural network to learn an embedding space was introduced in the "Siamese network" framework by Bromley et al. (1993) and later used in the works of Chopra et al. (2005). One use for these methods is when the number of classes is too large or expected to vary over time, as in the case of face verification, where a face contained in an image has to be compared against another image of a face. This problem was recently tackled by Schroff et al. (2015) by training a convolutional network model on triplets of examples. There, one image served as an anchor $x$, and an additional pair of images served as a positive example $x^+$ (containing an instance of the face of the same person) together with a negative example $x^-$, containing a face of a different person. The training objective was on the embedded distance of the input faces, where the distance between the anchor and the positive example is adjusted to be smaller by at least some constant $\alpha$ than the negative distance. More precisely, the loss function used in this case was defined as

$$L(x, x^+, x^-) = \max\big\{\|F(x) - F(x^+)\|_2^2 - \|F(x) - F(x^-)\|_2^2 + \alpha,\; 0\big\} \qquad (1)$$

where $F(x)$ is the embedding (the output of a convolutional neural network), and $\alpha$ is a predefined margin constant. Another similar model was used by Hoffer & Ailon (2015) with triplet comparisons for classification, where examples from the same class were trained to have a lower embedded distance than that of two images from distinct classes. This work introduced the concept of a distance-ratio loss, where the defined measure amounted to:

$$L(x, x^+, x^-) = \frac{e^{\|F(x)-F(x^+)\|_2}}{e^{\|F(x)-F(x^+)\|_2} + e^{\|F(x)-F(x^-)\|_2}} \qquad (2)$$

This loss has the flavor of a probability of a biased coin flip. By "pushing" this probability to zero, we express the objective that pairs of samples coming from distinct classes should be less similar to each other compared to pairs of samples coming from the same class. It was shown empirically by Balntas et al. (2016) to provide better feature embeddings than the margin-based distance loss (1).
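For concreteness, here is a NumPy sketch of both triplet objectives above (our own illustration; $F$ would be the network embedding):

```python
import numpy as np

def margin_loss(f, f_pos, f_neg, alpha=0.2):
    """Margin-based triplet loss of Eq. (1)."""
    d_pos = np.sum((f - f_pos) ** 2)
    d_neg = np.sum((f - f_neg) ** 2)
    return max(d_pos - d_neg + alpha, 0.0)

def ratio_loss(f, f_pos, f_neg):
    """Distance-ratio loss of Eq. (2): a 'biased coin' probability pushed to zero."""
    d_pos = np.linalg.norm(f - f_pos)
    d_neg = np.linalg.norm(f - f_neg)
    return np.exp(d_pos) / (np.exp(d_pos) + np.exp(d_neg))

rng = np.random.default_rng(0)
f, f_pos, f_neg = rng.normal(size=(3, 16))
print(margin_loss(f, f_pos, f_neg), ratio_loss(f, f_pos, f_neg))
```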
This logic - that deep features of patches from the same image tend to be close, while patches from unrelated images tend to be far apart - is commonly used in modern deep networks such as Szegedy et al. (2015); Lin et al. (2013); He et al. (2015), where a global average pooling is used to aggregate spatial features in the final layer used for classification.

Our suggestion is that this property, often observed as a side effect of supervised applications, can be used as a desired objective when learning deep representations in an unsupervised task. Later, the resulting representation can be used, as typically done, as a starting point for a supervised learning task. We call this idea, which we formalize below, Spatial contrasting. The spatial contrasting criterion is similar to noise contrastive estimation Gutmann & Hyvarinen (2010); Mnih & Kavukcuoglu (2013), in trying to train a model by maximizing the expected probability on desired inputs, while minimizing it on contrasting sampled measurements."}, {"section_index": "4", "section_name": "4.1 FORMULATION", "section_text": "We will concern ourselves with samples of image patches x^(m) taken from an image x. Our convolutional network model, denoted by F(x), extracts spatial features f so that f^(m) = F(x^(m)) for an image patch x^(m). We will also define P(f_i | f_j) as the probability for two features f_i, f_j to occur together in the same image.

This means that features from a patch taken from a specific image can effectively predict, under our model, features extracted from other patches in the same image. Conversely, we want our model to minimize P(f_i | f_j) for f_i, f_j being features of patches taken from two distinct images. Following the logic in Eq. (2) for the supervised case, to represent the probability that two feature vectors were taken from the same image, the resulting training loss for a pair of images will be defined as

L_SC(x1, x2) = −log [ e^(−||f_1^(1) − f_2^(1)||) / ( e^(−||f_1^(1) − f_2^(1)||) + e^(−||f_1^(1) − f_1^(2)||) ) ]    (3)

where f_1^(i), f_2^(i) denote features of two patches sampled from image x_i.

Figure 1: Spatial contrasting depiction.

Convolutional networks are usually trained using SGD over mini-batches of samples, therefore we can extract patches and contrasting patches without changing the network architecture. Each image serves as both anchor and positive patches, for which the corresponding features should be closer, as well as a source of contrasting samples for the other images in that batch. For a batch of N images, two samples from each image are taken, and N^2 different distance comparisons are made. The final loss is defined as the average distance ratio for all images in the batch:

L_SC({x}_{i=1}^N) = (1/N) Σ_{i=1}^N L_SC(x_i, {x}_{j≠i})    (4)

effectively minimizing a log-probability under the SoftMax measure. This formulation is portrayed in Figure 1. Since we sample our contrasting sample from the same underlying distribution, we can evaluate this loss considering the image patch as both compared patch (anchor) and contrast, symmetrically. The final loss will be the average between these estimations:

L̂_SC(x1, x2) = ( L_SC(x1, x2) + L_SC(x2, x1) ) / 2    (5)

Since the criterion is differentiable with respect to its inputs, it is fully compliant with standard methods for training convolutional networks, specifically backpropagation and gradient descent.
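A minimal numpy sketch of the criterion in Eqs. (3)-(5): feature extraction is abstracted into precomputed patch features, and, to keep the sketch short, each image is contrasted against a single other image of the batch rather than all N−1 as in Eq. (4).

import numpy as np

def sc_pair_loss(f1_a, f1_b, f2):
    # Eq. (3): f1_a, f1_b are two patch features from one image,
    # f2 a patch feature from a contrasting image
    pos = np.exp(-np.linalg.norm(f1_a - f1_b))
    neg = np.exp(-np.linalg.norm(f1_a - f2))
    return -np.log(pos / (pos + neg))

def sc_batch_loss(feats_a, feats_b):
    # feats_a[i], feats_b[i]: two patch features sampled from image i
    n = len(feats_a)
    total = 0.0
    for i in range(n):
        j = (i + 1) % n                                    # one contrasting image
        l_12 = sc_pair_loss(feats_a[i], feats_b[i], feats_a[j])  # image i as anchor
        l_21 = sc_pair_loss(feats_a[j], feats_b[j], feats_a[i])  # image j as anchor
        total += 0.5 * (l_12 + l_21)                       # symmetric average, Eq. (5)
    return total / n                                       # batch mean, Eq. (4)

rng = np.random.RandomState(0)
A, B = rng.randn(8, 64), rng.randn(8, 64)                  # 8 images, 64-d patch features
print(sc_batch_loss(A, B))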
Furthermore, SC can be applied to any layer in the network hierarchy. In fact, SC can be used at multiple layers within the same convolutional network. The spatial properties of the features mean that we can sample directly from feature space f^(m) ∈ f instead of from the original image. Therefore SC has a simple implementation which doesn't require a substantial amount of computation. The complete algorithm for batch training is described in Algorithm 1. Similar to the batch normalization (BN) layer Ioffe & Szegedy (2015), a recent usage of batch statistics in neural networks, SC also uses the batch statistics. While BN normalizes the input based on the batch statistics, SC samples from them. This can be viewed as a simple sampling from the space of possible features describing a patch of image.

Algorithm 1: Calculation of the spatial contrasting loss.
Require: a training batch of images X = {x}_{i=1}^N
1: sample two patch features f_1^(i), f_2^(i) from each image x_i
2: for each image x_i, compute L_SC(x_i, {x}_{j≠i}) of Eq. (3), contrasting against patch features of the other images in the batch
3: return the spatial contrasting loss as the mean of the distance ratios, Eq. (4)

In this section we report empirical results showing that using the SC loss as an unsupervised pretraining procedure can improve state-of-the-art performance on subsequent classification. We experimented with the MNIST, CIFAR-10 and STL10 datasets. We used modified versions of well studied networks such as those of Lin et al. (2013) and Rasmus et al. (2015). A detailed description of our architectures can be found in Table 4.

In each one of the experiments, we used the spatial contrasting criterion to train the network on the unlabeled images. In each usage of the SC criterion, patch features were sampled uniformly from the preceding layer. We note that the spatial size of sampled patches varied between datasets: while on STL10 and Cifar10 it covered about 30% of the image, MNIST required the use of larger patches covering almost the entire image. Training was done using SGD with an initial learning rate of 0.1 that was decreased by a factor of 10 whenever the measured loss stopped decreasing. After convergence, we used the trained model as an initialization for supervised training on the complete labeled dataset. The supervised training followed the same regime, only starting with a lower initial learning rate of 0.01. We used mild data augmentations, such as small translations and horizontal mirroring.

STL10 (Coates et al. (2011)). This dataset consists of 100,000 96×96 colored, unlabeled images, together with another set of 5,000 labeled training images and 8,000 test images. The label space consists of 10 object classes.

Cifar10 (Krizhevsky & Hinton (2009)). The well known CIFAR-10 is an image classification benchmark dataset containing 50,000 training images and 10,000 test images. The image sizes are 32×32 pixels, with color. The classes are airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks.

MNIST (LeCun et al. (1998)). The MNIST database of handwritten digits is one of the most studied dataset benchmarks for image classification. The dataset contains 60,000 examples of handwritten digits from 0 to 9 for training and 10,000 additional examples for testing. Each sample is a 28×28 pixel gray level image.

Table 1: State of the art results on STL-10 dataset
Model | STL-10 test accuracy
Zero-bias Convnets - Paine et al. (2014) | 70.2%
Triplet network - Hoffer & Ailon (2015) | 70.7%
Exemplar Convnets - Dosovitskiy et al. (2014) | 72.8%
Target Coding - Yang et al. (2015) | 73.15%
Stacked what-where AE - Zhao et al. (2015) | 74.33%
Spatial contrasting initialization (this work) | 81.34% (±0.1)
The same model without initialization | 72.6% (±0.1)

All experiments were conducted using the Torch7 framework by Collobert et al. (2011). Code reproducing these results will be available at https://github.com/eladhoffer/SpatialContrasting"}, {"section_index": "5", "section_name": "5.1 RESULTS ON STL10", "section_text": "Since the STL10 dataset is comprised of mostly unlabeled data, it is most suitable to highlight the benefits of the spatial contrasting criterion.
The initial training was unsupervised, as described earlier, using the entire set of 105,000 samples (the union of the original unlabeled set and the labeled training set). The representation output by this training was used to initialize supervised training on the 5,000 labeled images. Evaluation was done on a separate test set of 8,000 samples. Comparing with state of the art results, we see an improvement of 7% in test accuracy over the best model by Zhao et al. (2015), setting SC as the best model at 81.3% test classification accuracy (see Table 1). We note that the results of Dosovitskiy et al. (2014) are achieved with no fine-tuning over labeled examples, which may be unfair to that work. We also compare with the same network, but without SC initialization, which achieves a lower classification accuracy of 72.6%. This is an indication that SC indeed managed to leverage unlabeled examples to provide a better initialization point for the supervised model."}, {"section_index": "6", "section_name": "5.2 RESULTS ON CIFAR10", "section_text": "For the Cifar10 dataset, we use the same setting as Coates & Ng (2012) and Hui (2013) to test a model's ability to learn from unlabeled images. Here, only 4,000 samples out of 50,000 are used with their label annotation, and the rest of the samples can be used only in an unsupervised manner. The final test accuracy is measured on the entire 10,000 test set.

In our experiments, we trained our model using the SC criterion on the entire dataset, and then used only 400 labeled samples per class (for a total of 4,000) in a supervised regime over the initialized network. The results are compared with previous efforts in Table 2. Using the SC criterion allowed an improvement of 6.8% over a non-initialized model, and achieved a final test accuracy of 79.2%. This is a competitive result with current state-of-the-art models.

Table 2: State of the art results on Cifar10 dataset with only 4000 labeled samples
Model | Cifar10 (400 per class) test accuracy
Convolutional K-means Network - Coates & Ng (2012) | 70.7%
View-Invariant K-means - Hui (2013) | 72.6%
DCGAN - Radford et al. (2015) | 73.8%
Exemplar Convnets - Dosovitskiy et al. (2014) | 76.6%
Ladder networks - Rasmus et al. (2015) | 79.6%
Conv-CatGan - Springenberg (2016) | 80.42% (±0.58)
ImprovedGan - Salimans et al. (2016) | 81.37% (±2.32)
Spatial contrasting initialization (this work) | 79.2% (±0.3)
The same model without initialization | 72.4% (±0.1)"}, {"section_index": "7", "section_name": "5.3 RESULTS ON MNIST", "section_text": "The MNIST dataset is very different in nature from the Cifar10 and STL10 datasets we experimented with earlier. The biggest difference, relevant to this work, is that spatial regions sampled from MNIST images usually provide very little, or no, information. Thus, SC is much less suited for the MNIST dataset, and was conjectured to have little benefit. We still, however, experimented with initializing a model with the SC criterion and continuing with a fully-supervised regime over all labeled examples. We found again that this provided a benefit over training the same network without pre-initialization, improving results from 0.63% to 0.34% error on the test set. As mentioned previously, the effective compared patches of MNIST covered almost the entire image area. This can be attributed to the fact that MNIST requires global features to differentiate between digits. The results, compared
with previous attempts are included in Table 3.

In this work we presented spatial contrasting - a novel unsupervised criterion for training convolutional networks on unlabeled data. It is based on comparison between spatial features sampled from a number of images. We've shown empirically that using spatial contrasting as a pretraining technique to initialize a ConvNet can improve its performance on a subsequent supervised training. In cases where a lot of unlabeled data is available, such as the STL10 dataset, this translates to state-of-the-art classification accuracy in the final model.

Since the spatial contrasting loss is a differentiable estimation that can be computed within a network in parallel to supervised losses, in future work we plan to embed it in a semi-supervised model. This usage will allow creating models that can leverage both labeled and unlabeled data, and can be compared to similar semi-supervised models such as the ladder network Rasmus et al. (2015). It is also apparent that contrasting can occur in dimensions other than the spatial; the most straightforward is the temporal dimension. This suggests that a similar training procedure can be applied on segments of sequences to learn useful representations without explicit supervision."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Vassileios Balntas, Edward Johns, Lilian Tang, and Krystian Mikolajczyk. Pn-net: Conjoined triple deep network for learning local image descriptors. arXiv preprint arXiv:1601.05030, 2016.

Jane Bromley, James W Bentz, Leon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Sackinger, and Roopak Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.

Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pp. 539-546. IEEE, 2005.

Adam Coates and Andrew Y Ng. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, pp. 561-580. Springer, 2012.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Elad Hoffer and Nir Ailon. Deep metric learning using triplet network. In Similarity-Based Pattern Recognition, pp. 84-92. Springer, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pp. 448-456, 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton.
ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, pp. 1-9, 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013.

Ka Y Hui. Direct modeling of complex invariances for visual object features. In Proceedings of the 30th International Conference on Machine Learning, 2013.

Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In Computer Vision, 2009 IEEE 12th International Conference on, pp. 2146-2153. IEEE, 2009.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep, 2009.

Andrew Ng. Sparse autoencoder. 2011.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive modeling, 5(3):1.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815-823, 2015.

Shuo Yang, Ping Luo, Chen Change Loy, Kenneth W Shum, and Xiaoou Tang. Deep representation learning with target coding. 2015.

Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann Lecun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.

Tom Le Paine, Pooya Khorrami, Wei Han, and Thomas S Huang. An analysis of unsupervised pre-training in light of recent advances. arXiv preprint arXiv:1412.6597, 2014.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015."}, {"section_index": "9", "section_name": "7 APPENDIX", "section_text": "Table 4: Convolutional models used, based on Lin et al. (2013), Rasmus et al. (2015)

STL10 | CIFAR-10 | MNIST
Input: 96×96 RGB | Input: 32×32 RGB | Input: 28×28 monochrome
5×5 conv. 64 BN ReLU | 3×3 conv. 96 BN LeakyReLU | 5×5 conv. 32 ReLU
1×1 conv. 160 BN ReLU | 3×3 conv. 96 BN LeakyReLU | 1×1 conv. 96 BN ReLU
3×3 conv. 96 BN LeakyReLU
3×3 max-pooling, stride 2 | 2×2 max-pooling, stride 2 BN | 2×2 max-pooling, stride 2 BN
5×5 conv. 192 BN ReLU | 3×3 conv. 192 BN LeakyReLU | 3×3 conv. 64 BN ReLU
1×1 conv. 192 BN ReLU | 3×3 conv. 192 BN LeakyReLU | 3×3 conv. 64 BN ReLU
1×1 conv. 192 BN ReLU | 3×3 conv. 192 BN LeakyReLU
3×3 max-pooling, stride 2 | 2×2 max-pooling, stride 2 BN | 2×2 max-pooling, stride 2 BN
3×3 conv. 192 BN ReLU
1×1 conv. 192 BN ReLU
1×1 conv. 192 BN ReLU
Spatial contrasting criterion
3×3 conv. 256 ReLU | 3×3 conv. 192 BN LeakyReLU | 3×3 conv. 128 BN ReLU
3×3 max-pooling, stride 2 | 1×1 conv. 192 BN LeakyReLU | 1×1 conv. 10 BN ReLU
dropout, p = 0.5 | 1×1 conv. 10 BN LeakyReLU | global average pooling
3×3 conv. 128 ReLU | global average pooling
dropout, p = 0.5
fully-connected 10
10-way softmax

Figure 2: First layer convolutional filters after spatial-contrasting training"}]
H1Fk2Iqex | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Representation of bioacoustic sequences started with 'Human' speech in the 70'. Speech automatic. processing yields to the efficient Mel Filter Cepstral Coefficients (MFCC) representation. Today new. bioacoustic representation paradigms arise from environmental monitoring and species classificatior at weak Signal to Noise Ratio (SNR) and with small amount of data per species..\nSeveral neurobiological evidences suggest that auditory cortex is tuned to complex time varying acoustic features, and consists of several fields that decompose sounds in parallel (Kowalski et al. 1996] Mercado et al.2000). Therefore it is more than reasonable to investigate the Chirplet time. frequency representation from acoustic and neurophysiological points of view..\nChirps, or transient amplitude and frequency modulated waveforms, are ubiquitous in nature systems. (Flandrin (2001)), ranging from bird songs and music, to animal vocalization (frogs, whales) and. Speech. Moreover the sinusoidal models are a typical attempt to represent audio signals as a superposition of chirp-like components. Chirp signals are also commonly observed in biosonar. Systems."}, {"section_index": "1", "section_name": "FAST CHIRPLET TRANSFORM TO ENHANCE CNN MA- CHINE LISTENING - VALIDATION ON ANIMAL CALLS AND SPEECH", "section_text": "DYNI, LSIS, Machine Learning & Bioacoustics team AMU, University of Toulon, ENSAM, CNRS La Garde, France\njulien.ricard@gmail.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The Chirplet transform subsumes both Fourier analysis and wavelet analysis, providing a broad framework for mapping one-dimensional sound waveforms into a n-dimensional auditory parameter. space. It offers the processing described in different auditory fields, i.e. cortical regions with. systematically related response sensitivities. Moreover, Chirplet spaces are highly over-complete because there is an infinite number of ways to segment a time-frequency plane, the dictionary is redundant: this corresponds well with the overlapping, parallel signal processing pathways of auditory. corteX.\nThen we suggest that low level CNN layers shall be pretrained by Chirplet kernels. Thus, we define and code a Fast Chirplet Transform (FCT). We conduct validation on real recordings of whale anc birds, and on Speech (vowels subset of TIMIT). We demonstrate that CNN classification benefits from low level layers FCT pretraining. We conclude on the perspectives of tonotopic FCT machine listening and inter-species transfer learning.\n1 1(t-tc)2 2 j2(c t-tc)2+fc(t-tc) Jtc,fc,log(t),c t\nThe parameter space is basically of infinite dimension. Similarly to continuous wavelet transform however, it is possible to use some a priori knowledge in order to create a finite bank-filter. For example, wavelets are generated by knowing the number of wavelets per octave and the number of octave to decompose. As a result, we used the same motivation in order to reduce the number of possible Chirplets required. The goal here is not to compute an invertible transform, but rather provide a redundant transformation highlighting transient structures which are not the same tasks as discussed in (Coifman et al.]1992| Meyer1993| |Coifman et al.]1994). As a result, we keep the same overall framework as for wavelets with the Q and J parameters. 
For example, the parameters for bird songs in this paper are J = 6 and Q = 16 with a sampling rate (SR) of 44100 Hz, and J = 4 and Q = 16 on speech and Orca (with SR = 16 kHz). Finally, since we are interested in frequency modulations, we compute the ascendant and descendant chirp filters, one being the symmetrized version of the other. As a result, we use a more straightforward analytical formula defined with a starting frequency F0, an ending frequency F1, and the usual wavelet-like parameter being the bandwidth:

Λ = {2^(1+j/Q), j = 0, ..., J×Q − 1},    F0 = Fs / (2λ),    F1 = Fs / λ,    λ ∈ Λ

Finally, the hyperparameter p defining the polynomial order of the chirp is constant for the whole filter-bank generation. For example, the case p = 1 leads to a linear chirp, p = 2 to a quadratic chirp. The starting and ending frequencies are chosen to approximately cover one octave and are directly computed from the λ parameters which define the scales. Finally, following the scattering network inspiration from (Bruna & Mallat 2013), in order to remove unstable noisy patterns, we apply a low-pass filter (a Gaussian blurring) and thus increase the SNR of the representation."}, {"section_index": "3", "section_name": "LOW COMPLEXITY FCT ALGORITHM AND IMPLEMENTATION", "section_text": "We give here our code of the Fast Chirplet Transform (FCT), taking advantage of the a priori knowledge for the filter-bank creation and of the fast convolution algorithm. Therefore, we first create the Chirplets, with the ascendant and descendant versions generated at once (see Annexe Algo 1).

Then we generate the whole filter-bank (see Algo 2 in annexe) with the defined Λ and hyperparameters.

Finally, we use the scattering framework (Bruna & Mallat 2013; Anden & Mallat 2014): we apply a local low-pass filter to the obtained representation.
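For concreteness, a short sketch of the filter-bank parameterization defined above: each filter sweeps roughly one octave, from Fs/(2λ) to Fs/λ. The function and variable names are ours, not from the released code.

import numpy as np

def chirplet_bank_params(J, Q, Fs):
    # scales Λ = {2^(1+j/Q), j = 0..J*Q-1} and per-filter start/end frequencies
    lambdas = 2.0 ** (1 + np.arange(J * Q) / float(Q))
    F0 = Fs / (2.0 * lambdas)   # starting frequencies
    F1 = Fs / lambdas           # ending frequencies, one octave above F0
    return lambdas, F0, F1

# bird-song setting from the text: J=6, Q=16, SR=44100 Hz
lam, F0, F1 = chirplet_bank_params(6, 16, 44100)
print(len(lam), F0.min(), F1.max())   # 96 filters; highest F1 is Nyquist (22050 Hz)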
In fact, the scattering coefficients Sx result from a time-averaging of the time-frequency representation Ux, bringing local and up to global time-invariance. This time-averaging is computed through the application of a filter φ, usually a Gabor atom with specified standard deviation and such that

∫ φ(t) dt = 1.

As a result, one computes these coefficients as Sx(λ, t) = (|x ⋆ χλ| ⋆ φ)(t), where χλ is a Chirplet with parameters λ and φ is the low-pass filter. Similarly, we perform local time-averaging on the Chirplet representation in the same manner.

The third step in our FCT consists in the reduction of the convolution task. The asymptotic complexity of the Chirplet transform is O(N log(N)), with N being the size of the input signal. This is the same asymptotic complexity as for the continuous wavelet transform and the scattering network. However, it is possible to reach lower asymptotic complexity simply by a division of the convolution task. Usually the convolutions are carried out through an element-wise multiplication of the signal and the filter in the frequency domain, followed by an inverse Fourier transform, ending up with x ⋆ χλ. However, if we denote by M the length of the filter χλ, it is possible to instead perform this operation multiple times on different overlapping chunks of the signal and then concatenate the results, obtaining the same convolution result but now in O(N log(M)). Finally, a last improvement induced by this approach is to allow easy handling of signals with a length just above a power of 2, which would otherwise have to be padded in order to obtain an FFT with real O(N log(N)) complexity through the Danielson-Lanczos lemma (Press 2007). Applying this scheme allowed us to compute the convolutions 3 to 4 times faster; the variation comes from the distance between N and the closest next power of 2, depending on the desired chunk size.

We validate the efficiency of FCT on real bioacoustic recordings. We processed, on 10 medium speed CPUs of 4 years old, 100 hours of recordings of the LifeClef bird challenge (16 kHz Sampling Rate (SR), 16 bits) in 2 days. Second, we processed in 7 days the equivalent of 1 month of recordings.

Figure 1: Top: Chirplet of an Orca call with p=3, j=4, q=16, t=0.001, s=0.01, with the usual FFT spectrogram below; Sampling Rate (SR) 22 kHz, 16 bits. Waves and Chirplets of Orca are at http://sabiod.univ-tln.fr/orcalab. Bottom: same on bird calls from Amazonia (BIRD10 data set), SR 16 kHz, 16 bits.
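Returning to the chunked convolution described above, it can be realized with a standard overlap-add scheme. The sketch below is a generic illustration rather than the authors' implementation, and the chunk size is an arbitrary choice.

import numpy as np

def fft_conv_overlap_add(x, h, chunk=4096):
    # linear convolution of signal x with filter h via chunked FFTs,
    # i.e. O(N log M)-style work instead of one length-N transform
    N, M = len(x), len(h)
    out = np.zeros(N + M - 1)
    nfft = 1
    while nfft < chunk + M - 1:        # next power of 2 per block
        nfft *= 2
    H = np.fft.rfft(h, nfft)
    for start in range(0, N, chunk):
        seg = x[start:start + chunk]
        y = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
        out[start:start + len(seg) + M - 1] += y[:len(seg) + M - 1]
    return out

rng = np.random.RandomState(0)
x, h = rng.randn(100000), rng.randn(513)
assert np.allclose(fft_conv_overlap_add(x, h), np.convolve(x, h))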
Figure 2: Some FCT filters displayed in the physical domain (waveform, real and imaginary parts) and in the time-frequency domain through a spectrogram, for parameters (σ, fc, c) = (1.5, 100.0, 0.0), (0.5, 80.0, 0.0003), (0.5, 500.0, 9e-05) and (1.5, 200.0, 0.0003). The first one reduces to a wavelet since the chirp rate is 0. One can see the importance of the time duration and the chirp rate, as well as of the center frequency, depending on what one wishes to capture.

Figure 3: FCT of 4 species of amazonian birds from the LifeClef 2015 challenge, including the BIRD10 dataset available online. The call patterns are the high SNR (red) regions. The species international codes are, from top to bottom, right to left: nnbhgj, aethwv, aksucy, nipfbr."}, {"section_index": "4", "section_name": "5 ENHANCING CNN BIOACOUSTIC REPRESENTATION WITH FCT", "section_text": "A strategy for CNN fine-tuning can be to retrain a classifier on top of a CNN on a new dataset, or to fine-tune the weights of a pretrained network by continuing the backpropagation. It is possible to fine-tune all the layers of the CNN or to freeze some of the earlier, later or central layers, and to only fine-tune some portion of the network. As the features propagate deeper and deeper in the network layers, they become increasingly invariant and discriminative (Seltzer 2013). Thus usually only the higher levels are fine-tuned: the earlier features of a CNN contain more generic features that should be useful to many tasks, while later layers of the CNN become progressively more specific to the details of the classes contained in the original dataset.

In this paper we adapt our parametric Chirplet decomposition to a specific acoustic domain with a specific CNN. We compare a CNN trained on raw audio to ones trained on Mel and on Chirplet representations. The best model is the one trained on parametric Chirplets. Second, we show that the CNN can be enhanced by pretraining Chirplets in the low level layers.

The first demonstration is conducted on complex bird songs. We use the BIRD10 subset of the LifeClef 2016 bird classification challenge. It was used as the ENS Ulm data challenge 2016, and contains 3 species in a total of 15 minutes of recordings (SR 44100 Hz, 16 bits); it is available (.wav, Mel and FCT features) at http://sabiod.univ-tln.fr/workspace/BIRD10

We train 3 CNNs (LeCun & Bengio 1995) on the Lasagne Theano platform. The baseline CNN is trained from the raw audio. A second CNN, with similar topology (see annexe), is trained on a simple log of the 64-channel Mel scale of the FFT spectrum (http://pydoc.net/Python/librosa/0.2.0/librosa.feature/). We overlap the time windows by 90%. A third CNN is trained on our FCT. The parameters of the CNNs are similar, with 64 frequency bands each (we remove the top and bottom bands from the Chirplet representation to keep 64 bands only). The input layer is then 64 × 86; the conv layer has 20 filters of size 8 × 10. All activation functions are relu. We maxpool 2 × 2, follow with 20 filters of size 8 × 10, maxpooling, a dense layer (200), dropout at 10%, and a final softmax dense layer with 3 classes and the same dropout.
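Since the paper names the Lasagne/Theano platform, a sketch of the topology just described might look as follows; the exact placement of the dropout layers is our reading of the text, not code from the authors.

from lasagne.layers import (InputLayer, Conv2DLayer, MaxPool2DLayer,
                            DenseLayer, DropoutLayer)
from lasagne.nonlinearities import rectify, softmax

# input: 64 frequency bands x 86 time frames, one channel
net = InputLayer((None, 1, 64, 86))
net = Conv2DLayer(net, num_filters=20, filter_size=(8, 10), nonlinearity=rectify)
net = MaxPool2DLayer(net, pool_size=(2, 2))
net = Conv2DLayer(net, num_filters=20, filter_size=(8, 10), nonlinearity=rectify)
net = MaxPool2DLayer(net, pool_size=(2, 2))
net = DenseLayer(net, num_units=200, nonlinearity=rectify)
net = DropoutLayer(net, p=0.1)                             # dropout at 10%
net = DenseLayer(net, num_units=3, nonlinearity=softmax)   # 3 BIRD10-subset classes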
Each CNN is trained by cross-entropy with L2 regularization, with a learning rate set to 0.001.

Figure 4: The Mean Average Precision on BIRD10 of the CNNs trained on Mel features, raw audio, or FCT (train, valid and test MAP versus epoch). The training conditions are the same for the three CNNs, and they have similar size and topology (see Annexe). The CNN trained on FCT is slightly better than on Mel or raw audio, and it learns faster.

Fig. 4 gives the MAP of these three CNNs, which have similar hyperparameters. The CNN on FCT gives the best MAP, with 61.5% at epoch 280, compared to a later epoch (820) for Mel with a similar MAP of 61%. Audio is slower and weaker (58% MAP at epoch 1140)."}, {"section_index": "5", "section_name": "5.2 ENHANCING BIRDS CLASSIFICATION STACKING PRETRAINED CHIRPNET CNN", "section_text": "In order to test the efficiency of the FCT, we pretrain a CNN to encode audio to Chirplets (a.k.a. the audio2chirp CNN) and a CNN to convert parametric Chirplets to classes (a.k.a. the chirp2class CNN). The topology of these CNNs (Tab. 2, 3) is set for a reasonable time of training. We also speed up the training with a shorter time overlap of the time windows (only 30% instead of 90% in the previous experimentation). We then decrease the average MAP; however, the objective here is to compare the gain in MAP and time of convergence in stacked Chirplet deep representations.

We then simply stack the audio2chirp with the chirp2class CNN at the low level layer to build a complete audio2class CNN. We train it from random initialization, or from the pretrained CNNs. Note that the random seed in all the experimentation of this paper is fixed to allow fair comparisons. Results are reported in Tab. 1 for each of the stacked CNNs, with the epoch giving the best MAP on the dev set, and the corresponding MAP on the test set. Results demonstrate that the pretraining of low level layers by FCT enhances the CNN. More details are given in Annexe."}, {"section_index": "6", "section_name": "Model", "section_text": "Table 1: Summary of the CNN enhanced by our FCT representation, on BIRD. For each model, we detail the time of convergence on dev. and the corresponding Mean Average Precision on the test set.

In this section we run the same demonstration on the subset of speech vowels of the TIMIT acoustic-phonetic corpus JS et al. (1993): 3,696 training utterances (sampled at 16 kHz) from 462 speakers. The cross-validation set consists of 400 utterances from 50 speakers. The core test set of the 8-vowel subset was used to report the results: 192 utterances from 24 speakers, excluding the validation set. There are 61 hand-labeled phonetic symbols, but the experiments in this paper run on time windows of 310 ms centered on each of the 8 vowels of TIMIT (iy, ih, eh, ae, aa, ah, uh, uw).

Due to the similar bioacoustic voicing dynamics of the two species (near 4 Hz), we simply set the FCT parameters for vowels to the ones used for Orca presented above (p = 3, j = 4, q = 16, t = 0.001, s = 0.01).
The time windows are set to 310 ms as recommended in Palaz et al. (2013).

The results of the different training stages of the audio2chirp, the chirp2class and the stacked models are given in Tab. 2 and in the Annexe. Due to lack of time, we ran the experiment only on vowel classification, which does not really allow comparison with other papers; however, this seminal work only aims to study the relative gain between CNNs pretrained or not by FCT.

The results demonstrate that FCT pretraining improves the accuracy of the audio2class model by 2.3% of relative gain, while the training time is decreased by 26%.

In this paper we propose, for the first time to our knowledge, the definition and implementation of a Fast Chirplet Transform (FCT). Due to its low complexity, FCT can be computed as fast as FFT.

Figure 5: FCT (top) versus Fourier spectrogram (bottom) of two utterances of Speech vowels (TIMIT) (p = 3, j = 4, q = 16, t = 0.001, s = 0.01). Fast Chirplet Transform (top) versus Fast Fourier Transform (bottom) on Speech (TIMIT).

Figure 6: Training stacked CNNs. Blue: random initialization of audio2chirp(0) and chirp2class(0). Red: initialization with optimal audio2chirp(*) and chirp2class(*).

Second, we show that FCT pretraining accelerates CNNs. For the Bird10 data set, we reach in 280 epochs with FCT the MAP score that requires 820 epochs on Mel features, or 1140 epochs on raw audio. The stacked CNN with the chirpnet in the low level layer also decreases training from 530 epochs to 380 epochs, while it increases MAP by 4 points (Tab. 1). The experiment on vowels demonstrates a training of 30 epochs on FCT, versus 60 on raw audio (for the same 65% accuracy level), and an increase of 1.5 points of accuracy (Tab. 2)."}, {"section_index": "7", "section_name": "Model", "section_text": "Table 2: Summary of the CNN enhanced by our FCT representation on vowel (TIMIT): time of convergence and vowel accuracy on the TIMIT test set.

Three main perspectives are then opened. Future work will consist of a sparse Chirpnet inspired from the tonotopic net Strom (1997) and from auditory nerve and cortex topology Pironkov et al. (2015). The acoustic vibrations are transmitted to the base of the cochlea, thus each region of the basilar membrane is excited by different frequencies. The higher frequencies excite areas closer to the cochlea base, whereas lower frequencies are closer to the apex. This implies that neurons connected to a specific zone of the basilar membrane will be simultaneously stimulated, inducing a tonotopic representation.

A second perspective is to integrate Chirplet computation into the CNN training itself, as a constrained embedded layer, in a framework similar to a Wavelet Neural Network (Adeli & Jiang 2006) but with Chirplet activation functions.

We thank colleagues from the ENS Paris Data Team, with S. Mallat and P. Flandrin, for fruitful discussions on Scattering and Chirplets. We thank YLC and YB for advice on CNNs. We thank V. Tassan for cleaning the code. We used Theano, Lasagne, Librosa and PySoundFile."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Hojjat Adeli and Xiaomo Jiang.
Dynamic fuzzy wavelet neural network model for structural systen identification. Journal of Structural Engineering, 132(1):102-111, 2006.\nJoan Bruna and Stephane Mallat. Invariant scattering convolution networks. IEEE transactions on pattern analysis and machine intelligence, 35(8):1872-1886. 2013.\nThese gains may be due to the sparsity of the Chirplet, and the denoising step in the FCT. These. experiences bring to light the problem of deep learning for small and biased dataset for which a full learning strategy is sub-optimal due to local optimum convergence. As a result, FCT prior knowledge. can be used to mitigate this drawback by reducing the complexity of the deep-net architecture..\nLast, we currently work on transfer learning of Chirpnet from animal to speech (and reverse), in order to generalize a deep Chirpnet representation of the animal communication systems.\nWe thank colleagues from ENS Paris Data Team with S. Mallat, and P. Flandrin, for fruitful discussions on Scattering and Chirplet. We thank YLC and YB for advises on CNN. We thank V. Tassan for cleaning the code. We used Theano, Lasagne Librosaand Pysoundfile\nRonald R Coifman, Yves Meyer, and Victor Wickerhauser. Wavelet analysis and signal processing In In Wavelets and their Applications. Citeseer, 1992\nGarofolo JS, LF Lamel, and al. Timit acoustic-phonetic continuous speech corpus. In Linguistic date consortium, Philadelphia, 1993\nStephane Mallat. A wavelet tour of signal processing. Academic press, 1999\nSteve Mann and Simon Haykin. Adaptive chirplet transform: an adaptive generalization of the wavelet transform. Optical Engineering, 31(6):1243-1256, 1992\nYves Meyer. Wavelets-algorithms and applications. Wavelets-Algorithms and applications Society for Industrial and Applied Mathematics Translation., 142 p., 1, 1993.\nDimitri Palaz, Ronan Collobert, and Mathew Magimai-Doss. Estimating phoneme class conditiona probabilities from raw speech signal using convolutional neural networks. CoRR, abs/1304.1018. 2013. URLhttp://arxiv.org/abs/1304.1018\nGueorgui Pironkov, Stephane Dupont, and Thierry Dutoit. Investigating sparse deep neural networks for speech recognition. In IEEE ASRU Workshop, pp. 124-129, 2015.\nYann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.\nSteve Mann and Simon Haykin. The chirplet transform: A generalization of gabor's logon transform In Vision Interface, volume 91, pp. 205-212, 1991.\nEduardo Mercado, Catherine E Myers, and Mark A Gluck. Modeling auditory cortical processing as an adaptive chirplet transform. Neurocomputing, 32:913-919, 2000.\nNikko Strom. A tonotopic artificial neural network architecture for phoneme probability estimation In Automatic Speech Rec. and Understanding IEEE Wkp, pp. 156-163, 1997"}, {"section_index": "9", "section_name": "A BIRD DATASET", "section_text": "The experiment is conducted on BIRD10, an online data set http: //sabiod. univ-t1n. fr. work space/BIRD1 0/which is a subset of the training LIFEClef 2016 challenge on bird classifi. cation. BIRD10 contains 454 audio files (22050 Hz SR, 16 bits) from 10 bird classes, split in 0.5s segments. 
20% of the training set was used as the validation set.

Only segments with detected bird activity were kept, assuming a bird sound to have prominent energy and to be mostly harmonic. For a given segment, this bird detection is:

if (energy_ratio > energy_threshold and
        spectral_flatness_weighted_mean < spectral_flatness_threshold):
    bird_detected = True
else:
    bird_detected = False

where the energy and the spectral flatness are computed on 50% overlapping frames of 256 samples.

This naive algorithm performed quite well on a manually labelled dataset of bird vocalizations (precision=0.89, recall=0.57 for er=0.2 and sfw=0.3) after a quick grid search on the two parameters.

The first experiment consisted in running similar CNNs to compare the performance of using raw audio and two time-frequency representations as the input: a standard log-amplitude Mel spectrum and the Chirplet representation described in the first part of this paper. In these experiments the segments were overlapping by 90%. The topologies of the networks are given in Tab. 3. The cost function is the cross-entropy; learning rate = 0.0001 = L2 regularisation coefficient. The Mel spectrum is computed from 64 bands between 0 and 11025 Hz (= SR/2). Both the Mel spectrum and the Chirplets were normalized by Z-score.

Table 3: CNN topologies for the 3 different inputs

Audio, shape (1, 11025):
conv_1: 20 filters of shape (1, 400) (nonlinearity: relu)
pool_1: (1, 4) max pooling
conv_2: 20 filters of shape (1, 100) (nonlinearity: relu)
pool_2: (1, 4) max pooling
dense_1: 400 units (nonlinearity: relu, 10% dropout)
dense_2: 10 units (nonlinearity: softmax, 10% dropout)

Log-amplitude Mel spectrum, shape (64, 80):
conv_1: 20 filters of shape (8, 20) (nonlinearity: relu)
pool_1: (2, 2) max pooling
conv_2: 20 filters of shape (8, 20) (nonlinearity: relu)
pool_2: (2, 2) max pooling
dense_1: 200 units (nonlinearity: relu, 10% dropout)
dense_2: 10 units (nonlinearity: softmax, 10% dropout)

Chirplets (chirp2class), shape (80, 110):
conv_1: 20 filters of shape (8, 20) (nonlinearity: relu)
pool_1: (2, 2) max pooling
conv_2: 20 filters of shape (8, 20) (nonlinearity: relu)
pool_2: (2, 2) max pooling
dense_1: 200 units (nonlinearity: relu, 10% dropout)
dense_2: 10 units (nonlinearity: softmax, 10% dropout)

In all experiments, a given topology is always initialized using the same set of random parameters unless specified otherwise. The value * (resp. 0) after the name of a net refers to its pretrained (resp. randomly initialized) version.

The chirp encoder, aka audio2chirp, aims at training a net to produce a Chirplet-like representation. It is a simple CNN taking audio as input and Chirplets as output, minimizing the square error. It converges easily in 180 epochs. The topology of the audio2chirp net is given in Tab. 4.
Table 4: CNN topology of the chirp encoder (audio2chirp)

Figure 7: Training stacked random initialized CNNs: audio2chirp(0) and chirp2class(0).

Figure 8: Training stacked pretrained CNNs: audio2chirp(*) and chirp2class(*).

Figure 9: Training stacked pretrained CNNs audio2chirp(*) and chirp2class(*), but freezing chirp2class(*) (no weight update).

Figure 10: Training stacked pretrained CNNs audio2chirp(*) and chirp2class(*), but freezing audio2chirp(*) (no weight update).

Figure 11: Training stacked CNNs: pretrained audio2chirp(*) and chirp2class(0), freezing audio2chirp (no weight update).

Figure 12: Training stacked CNN from pretrained CNN: initialized with optimal audio2chirp(*) and chirp2class(0).

In all experiments, each CNN is initialized using the same random seed.
The symbol "*" refers to the optimal trained parameters of a net.

Table 5: CNN topologies for TIMIT vowel experiments

audio2chirp, shape (1, 4960):
conv_1: 40 filters of shape (1, 1001) (nonlinearity: relu)
pool_1: (1, 4) max pooling
conv_2: 40 filters of shape (1, 501) (nonlinearity: relu)
pool_2: (1, 4) max pooling
conv_3: 40 filters of shape (1, 101) (nonlinearity: relu)
pool_3: (1, 4) max pooling
dense_1: 3136 units (nonlinearity: relu, 10% dropout)
reshape_1: 3136 -> (64, 49)

chirp2class, shape (64, 49):
conv_1: 20 filters of shape (8, 10) (nonlinearity: relu)
pool_1: (2, 2) max pooling
conv_2: 20 filters of shape (8, 10) (nonlinearity: relu)
pool_2: (2, 2) max pooling
dense_1: 200 units (nonlinearity: relu, 10% dropout)
dense_2: 8 units (nonlinearity: softmax, 10% dropout)

Figure 13: Training loss of audio2chirp (TIMIT) (train, valid and test loss versus epoch, lr = 1e-3).

Algo 1: Chirplet Generation
INPUT: F0, F1, Fs, sigma, p
OUTPUT: coefficients_upward, coefficients_downward

import numpy as np

def gen_chirplet(F0, F1, Fs, sigma, p):
    # time support of the atom: not specified in the fragment;
    # one plausible choice is a duration of sigma seconds
    t = np.linspace(0, sigma, int(sigma * Fs))
    if p:
        # polynomial chirp of order p, sweeping from F0 (at t=0) to F1 (at t=sigma)
        w = np.cos(2 * np.pi * ((F1 - F0) / ((p + 1) * sigma ** p) * t ** p + F0) * t)
    else:
        # exponential (geometric) chirp
        w = np.cos(2 * np.pi * (F0 * (F1 / F0) ** (t / sigma) - F0) * sigma / np.log(F1 / F0))
    gauss = np.exp(-((t - sigma / 2.0) ** 2) / (2 * sigma ** 2))  # Gaussian window
    coefficients_upward = w * gauss
    coefficients_downward = np.flipud(coefficients_upward)       # time-reversed, descendant chirp
    return coefficients_upward, coefficients_downward

This code, under GPL licence (c) DYNI team, is on Github:

class Chirplet:
    """Chirplet filter; the smallest time bin among the chirplets
    is kept global across the bank."""
    global smallest_time_bins

    def __init__(self, samplerate, F0, F1, sigma, polynome_degree):
        ...

    def smooth_up(self, input_signal, sigma, end_smoothing):
        # size_data = len(data)
        ...
"}]
B1TTpYKgx | [{"section_index": "0", "section_name": "ON THE EXPRESSIVE POWER OF DEEP NEURAL NET- WORKS", "section_text": "Maithra Raghu\nGoogle Brain and Cornell University\nCornell University\nStanford University\nWe study the expressive power of deep neural networks before and after training. Considering neural nets after random initialization, we show that three natural measures of expressivity all display an exponential dependence on the depth of the network. We prove, theoretically and experimentally, that all of these mea- sures are in fact related to a fourth quantity, trajectory length. This quantity grows exponentially in the depth of the network, and is responsible for the depth sen- sitivity observed. These results translate to consequences for networks during. and after training. They suggest that parameters earlier in a network have greater. influence on its expressive power - in particular, given a layer, its influence on. expressivity is determined by the remaining depth of the network after that layer.. This is verified with experiments on MNIST and CIFAR-10. We also explore the effect of training on the input-output map, and find that it trades off between the. stability and expressivity of the input-output map.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "To aim for a more precise understanding, we must disentangle factors influencing their effectiveness. trainability, or how well they can be fit to data; generalizability, or how well they perform on nove. examples; and expressivity, or the set of functions they can compute..\nAll three of these properties are crucial for understanding the performance of neural networks. In-. deed, for success at a particular task, neural nets must first be effectively trained on a dataset, which. has prompted investigation into properties of objective function landscapes (Dauphin et al.]2014 Goodfellow et al.[2014] Choromanska et al.[2014), and the design of optimization procedures. specifically suited to neural networks (Martens and Grosse2015). Trained networks must also be. capable of generalizing to unseen data, and understanding generalization in neural networks is also. an active line of research: (Hardt et al.[2015) bounds generalization error in terms of stochastic gradient descent steps, (Sontag1998 Bartlett and Maass2003] Bartlett et al.]1998) study gener- alization error through VC dimension, and (Hinton et al.]2015) Iooks at developing smaller models. with better generalization.\nIn this paper, we focus on the third of these properties, expressivity - the capability of neural networks to accurately represent different kinds of functions. As the class of functions achievable by a neural network is dependent on properties of its architecture, e.g. depth, width, fully connected convolutional, etc; a better understanding of expressivity may greatly inform architectural choice and inspire more tailored training methods.\nPrior work on expressivity has yielded many fascinating results by directly examining the achiev able functions of a particular architecture. Through this, neural networks have been shown to be\nStanford University and Google Brain\nNeural network architectures have proven \"unreasonably effective\" (LeCun][2014] Karpathy]2015) on many tasks, including image classification (Krizhevsky et al.|2012), identifying particles in high energy physics (Baldi et al.] 2014), playing Go (Silver et al.2016), and modeling human student learning (Piech et al.J|2015). 
Despite their power, we have limited knowledge of how and why neural networks work, and much of this understanding is qualitative and heuristic..\nuniversal approximators (Hornik et al.1989] Cybenko 1989), and connections between boolean and threshold networks and ReLU networks developed in (Maass et al.] 1994) Pan and Srikumar 2015). The inherent expressivity due to increased depth has also been studied in (Eldan and Shamir 2015, Telgarsky]2015} Martens et al.] 2013]Bianchini and Scarselli]2014), and (Pascanu et al. 2013; Montufar et al.2014), with the latter introducing the number of linear regions as a measure of expressivity.\nThese results, while compelling, also highlight limitations of much of the existing work on ex pressivity. Much of the work examining achievable functions relies on unrealistic architectural. assumptions, such as layers being exponentially wide (in the universal approximation theorem) Furthermore, architectures are often compared via hardcoded' weight values - a specific function that can be represented efficiently by one architecture is shown to only be inefficiently approximated. by another.\nComparing architectures in such a fashion limits the generality of the conclusions, and does not. entirely address the goal of understanding expressivity - to provide characteristic properties of a typical set of networks arising from a particular architecture, and extrapolate to practical conse-. quences.\nRandom networks To address this, we begin our analysis of network expressivity on a family o networks arising in practice - the behaviour of networks after random initialization. As randon initialization is the starting point to most training methods, results on random networks provide natural baselines to compare trained networks with, and are also useful in highlighting properties of trained networks (see Section 3). The expressivity of these random networks is largely unexplored In previous work (Poole et al.2016) we studied the propagation of Riemannian curvature through random networks by developing a mean field theory approach, which quantitatively supports the conjecture that deep networks can disentangle curved manifolds in input space. Here, we take a more direct approach, exactly relating the architectural properties of the network to measures ol expressivity and exploring the consequences for trained networks\nMeasures of Expressivity In particular, we examine the effect of the depth and width of a net. work architecture on three different natural measures of functional richness: number of transitions activation patterns, and number of dichotomies.\nTransitions: Counting neuron transitions is introduced indirectly via linear regions in (Pascanu. et al.|2013), and provides a tractable method to estimate the degree of non-linearity of the computed function.\nActivation Patterns: Transitions of a single neuron can be extended to the outputs of all neurons ir all layers, leading to the (global) definition of a network activation pattern, also a measure of non. linearity. Network activation patterns directly show how the network partitions input space (intc. convex polytopes), through connections to the theory of hyperplane arrangements..\nDichotomies: We also measure the heterogeneity of a generic class of functions from a particular architecture by counting dichotomies, 'statistically dual' to sweeping input in some cases. This. 
measure reveals the importance of remaining depth in expressivity, in both simulation and practice.

Connection to Trajectory Length: All three measures display an exponential increase with depth, but not width (most strikingly in Figure 4). We discover and prove the underlying reason for this - all three measures are directly proportional to a fourth quantity, trajectory length. In Theorem 1 we show that trajectory length grows exponentially with depth (also supported by experiments, Figure 1), which explains the depth sensitivity of the other three measures.

Consequences for Trained Networks: Our empirical and theoretical results connecting transitions and dichotomies to trajectory length also suggest that parameters earlier in the network should have exponentially greater influence on parameters later in the network. In other words, the influence on expressivity of parameters, and thus layers, is directly related to the remaining depth of the network after that layer. Experiments on MNIST and CIFAR-10 support this hypothesis - training only earlier layers leads to higher accuracy than training only later layers. We also find, with experiments on MNIST, that the training process trades off between the stability of the input-output map and its expressivity.

Figure 1: The exponential growth of trajectory length with depth, in a random deep network with hard-tanh nonlinearities. A circular trajectory is chosen between two random vectors. The image of that trajectory is taken at each layer of the network, and its length measured. (a,b) The trajectory length vs. layer, in terms of the network width k and weight variance σw², both of which determine its growth rate. (c,d) The average ratio of a trajectory's length in layer d + 1 relative to its length in layer d. The solid line shows simulated data, while the dashed lines show upper and lower bounds (Theorem 1). Growth rate is a function of layer width k, and weight variance σw²."}, {"section_index": "2", "section_name": "2 GROWTH OF TRAJECTORY LENGTH AND MEASURES OF EXPRESSIVITY", "section_text": "In this section we examine random networks, proving and empirically verifying the exponential growth of trajectory length with depth. We then relate trajectory length to transitions, activation patterns and dichotomies, and show their exponential increase with depth."}, {"section_index": "3", "section_name": "2.1 NOTATION AND DEFINITIONS", "section_text": "Let FW denote a neural network. In this section, we consider architectures with input dimension m, n hidden layers all of width k, and (for convenience) a scalar readout layer. (So FW : R^m -> R.) Our results mostly examine the cases where φ is a hard-tanh (Collobert and Bengio 2004) or ReLU nonlinearity. All hard-tanh results carry over to tanh with additional technical steps.

We use z_i^(d) to denote the ith neuron in hidden layer d. We also let x = z^(0) be an input, h^(d) be the hidden representation at layer d, and φ the non-linearity. The weights and bias are called W^(d) and b^(d) respectively. So we have the relations

h^(d) = W^(d) z^(d) + b^(d),    z^(d+1) = φ(h^(d))

Definitions: Say a neuron transitions when it switches linear region in its activation function (i.e. for ReLU, switching between zero and linear regimes; for hard-tanh, switching between negative saturation, unsaturated and positive saturation). For hard-tanh, we refer to a sign transition as the neuron switching sign, and a saturation transition as the neuron switching between the saturated regions at ±1.
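As a quick illustration of this recurrence, a minimal numpy sketch of a random hard-tanh network, with Gaussian weights of variance σw²/k (the initialization used throughout this section); all sizes and variances are illustrative.

import numpy as np

def hard_tanh(h):
    return np.clip(h, -1.0, 1.0)

def forward(x, weights, biases):
    # apply z^(d+1) = phi(W^(d) z^(d) + b^(d)) layer by layer,
    # returning the hidden representation at every depth
    z, zs = x, [x]
    for W, b in zip(weights, biases):
        z = hard_tanh(W @ z + b)
        zs.append(z)
    return zs

k, n, sw2, sb2 = 64, 8, 4.0, 0.05   # width, depth, sigma_w^2, sigma_b^2
rng = np.random.RandomState(0)
weights = [rng.randn(k, k) * np.sqrt(sw2 / k) for _ in range(n)]
biases = [rng.randn(k) * np.sqrt(sb2) for _ in range(n)]
zs = forward(rng.randn(k), weights, biases)
print(len(zs), zs[-1].shape)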
For hard-tanh, we refer to a sign transition as the neuron switching sign, and a saturation transition as switching from being saturated between 1. The Activation Pattern of the entire network is defined by the output regions of every neuron. More. precisely, given an input x, we let A(Fw, x) be a vector representing the activation region of every hidden neuron in the network. So for a ReLU network Fw, we can take A(Fw,x) E {-1, 1}nk. with -1 meaning the neuron is in the zero regime, and 1 meaning it is in the linear regime. For\nWe use u ) to denote the ith neuron in hidden layer d. We also let x = z(0) be an input, h(d) be the hidden representation at layer d, and $ the non-linearity. The weights and bias are called w(d) and b(d) respectively. So we have the relations.\nhard-tanh network Fw, we can (overloading notation slightly) take A(Fw, x) E {-1, 0, 1}nk. The. use of this notation will be clear by context. Given a set of inputs S, we say a dichotomy over S is a labeling of each point in S as 1..\nWe assume the weights of our neural networks are initialized as random Gaussians, with appropriate the analysis below, we sweep through a one dimensional input trajectory x(t). The results hold for almost any such smooth x(t), provided that at any point x(t), the trajectory direction has some. non-zero magnitude perpendicular to x(t).\nWe first prove how the trajectory length grows, and relate it to neuron transitions\nWe prove (with a more exact lower bound in the Appendix):\nO w 03 + 0?)1/4 o3 + o?+ k\nThis bound is tight in the limits of large ow and k. An immediate Corollary for oy = 0, i.e. no bias iS\nCorollary 1. Bound on Growth of Trajectory Length Without Bias For Fw with zero bias, we hav\nl(x(t)\nThe theorem shows that the image of a trajectory in layer d has grown exponentially in d, with the. scaling w and width of the network k determining the base. We additionally state and prove a sim ple O(od) growth upper bound in the Appendix. Figure[1|demonstrates this behavior in simulation and compares against the bounds. Note also that if the variance of the bias is comparatively too large. i.e. , >> w, then we no longer see exponential growth. This corresponds to the phase transitior described in (Poole et al.2016).\nThe analysis is complicated by the statistical dependence on the image of the input z(d+1) (t). So we. instead form a recursion by looking at the component of the difference perpendicular to the image of the input in that layer, i.e.. . For a typical trajectory, the perpendicular component k-1 of the total trajectory length, and our derived growth rate thus provides preserves a fraction a close lower bound, as demonstrated in Figure[1(c,d).\nObservation 1. The number of sign transitions in a network Fw is directly proportional to the length of the latent image of the curve, z(n) (t)\nThe proof can be found in the Appendix. A rough outline is as follows: we look at the expected growth of the difference between a point z(d) (t) on the curve and a small perturbation z(d) (t + dt), from layer d to layer d + 1. Denoting this quantity z(d) (t), we derive a recurrence relating 8z(d+1) (t) and ||&z(d) (t)|| which can be composed to give the desired growth rate.\nWe intuit a reason for this observation as follows: note that for a network Fw with n hidden layers. the linear, one dimensional, readout layer outputs a value by computing the inner product W(n) z(n). The sign of the output is then determined by whether this quantity is > O or not. 
In particular, the decision boundary is a hyperplane, with equation W(n) z(n) = 0. So, the number of transitions the. output neuron makes as x(t) is traced is exactly the number of times z(n)(t) crosses the decision. boundary. As Fw is a random neural network, with signs of weight entries split purely randomly. between 1, it would suggest that points far enough away from each other would have independent signs, i.e. a direct proportionality between the length of z(n) (t) and the number of times it crosses. the decision boundary.\nWe can also prove this in the special case when ow is very large. Note that by Theorem|1] very larg Ow results in a trajectory growth rate of\nn (k, 0w,0b,n) = O\nTheorem 2. Number of transitions in large weight limit Given Fw, in the very large ow regime, the number of sign transitions of the network as an input x(t) is swept is of the order of g(k, O, Ob, n)\nWe can generalize the 'local' notion of expressivity of a neuron's sign transitions to a 'global. measure of activation patterns over the entire network. We can formally relate network activatior patterns to specific hyperplane arrangements, which allows proof of three exciting results.\nTheorem 3. Regions in Input Space Given a network Fw with with ReLU or hard-tanh activations, input space is partitioned into convex regions (polytopes), with Fw corresponding to a differeni linear function on each region.\nThis results in a bijection between transitions and activation patterns for 'well-behaved' trajectories see the proof of Theorem|3|and Corollary2|in Appendix.\nTransitions vs. Length 109 k=8,02 =2 108 k=64, =2 107 106 k=8,0?=8 105 k=64,0 =8 104 k=512,03 =8 103 k=8,0? =32 102 k=64, =32 101 k=512,0? =32 100 100 101 102 103 104 105 106 10' 108 109 Length / k\nFigure 2: The number of transitions is linear in trajectory length. Here we compare the empirical. number of sign changes to the length of the trajectory, for images of the same trajectory at different. layers of a hard-tanh network. We repeat this comparison for a variety of network architectures. with different network width k and weight variance o,.\nLarge ow also means that for any input (bounded away from zero), almost all neurons are saturated Furthermore, any neuron transitioning from 1 to -1 (or vice versa) does so almost instantaneously In particular, at most one neuron within a layer is transitioning for any input. We can then show that in the large ow limit the number of transitions matches the trajectory length (proof in the Appendix, via a reduction to magnitudes of independent Gaussians):\nFinally, returning to the goal of understanding expressivity, we can upper bound the expressive power of a particular architecture according to the activation patterns measure:.\nLayer 0 Layer 1 Layer 2 1 0 0 0 -1 1 -1 0 1 -1 0 1 -1 0 1 xo xo\nFigure 3: Deep networks with piecewise linear activations subdivide input space into convex poly topes. Here we plot the boundaries in input space separating unit activation and inactivation for al units in a three layer ReLU network, with four units in each layer. The left pane shows activatio. boundaries (corresponding to a hyperplane arrangement) in gray for the first layer only, partitioning the plane into regions. The center pane shows activation boundaries for the first two layers. Insid every first layer region, the second layer activation boundaries form a different hyperplane arrange ment. 
The right pane shows activation boundaries for the first three layers, with different hyperplan arrangements inside all first and second layer regions. This final set of convex regions correspond t different activation patterns of the network - i.e. different linear functions.\nDichotomies vs. Remaining Depth Dichotomies vs. Width 105 105 k =2 + k =128 + k =8 + k =512 104 104 k =32 103 103 d, =1 d, =11 102 102 d, =3 + d, =13 d, =5 + d, =15 101 101 d, =7 + d, =17 d, =9 100 1012 100 0 2 4 6 8 14 16 18 0 100 200 300 400 500 600 Remaining Depth d, Width k a) (b)\nFigure 4: The number of functions achievable in a deep hard-tanh network by sweeping a single layer's weights along a one dimensional trajectory is exponential in the remaining depth, but in- creases only slowly with network width. Here we plot the number of classification dichotomies over s = 15 input vectors achieved by sweeping the first layer weights in a hard-tanh network along a. one-dimensional great circle trajectory. We show this (a) as a function of remaining depth for several widths, and (b) as a function of width for several remaining depths. All networks were generated with weight variance o?, = 8, and bias variance o? = 0.\nTheorem 4. (Tight) Upper bound for Number of Activation Patterns Given a neural network Fw inputs in Rm, with ReLU or hard-tanh activations, and with n hidden layers of width k, the number of activation patterns grows at most like O(kmn) for ReLU, or O((2k)mn) for hard-tanh.\nA natural extension is to study a class of functions that might arise from a particular architecture One such class of functions is formed by sweeping the weights of a network instead of the input More formally, we pick random matrices, W, W', and consider the weight interpolation W cos(t) + W' sin(t), each choice of weights giving a different function. When this process is applied to just the first layer, we have a statistical duality with sweeping a circular input.\nSo far, we have looked at the effects of depth and width on the expressiveness (measured through transitions and activations) of a generic function computed by that network architecture. These measures are directly related to trajectory length, which is the underlying reason for exponential depth dependence.\nDichotomies vs. Remaining Depth 105 Layer swept = 1 Layer swept = 4 104 Layer swept = 8. Layer swept = 12 103 All dichotomies. 102 101 0 2 4 6 8 10 12 14 16 Remaining Depth d,.\nFigure 5: Expressive power depends only on remaining network depth. Here we plot the number of. dichotomies achieved by sweeping the weights in different network layers through a 1-dimensional. great circle trajectory, as a function of the remaining network depth. The number of achievable dichotomies does not depend on the total network depth, only on the number of layers above the layer swept. All networks had width k = 128, weight variance o?, = 8, number of datapoints. s = 15, and hard-tanh nonlinearities. The blue dashed line indicates all 2s possible dichotomies for this random dataset.\nTrain Accuracy Against Epoch Test Accuracy Against Epoch 1.0 lay2 lay 3 0.9 lay4 lay 5 lay 6 0.8 lay 7 lay 8 lay 9 0.7 0.6 0.5 0.4 ....- 0.3 0 100 200 300 400 50 100 200 300 400 500 Epoch Number Epoch Number\nFigure 6: Demonstration of expressive power of remaining depth on MNIST. Here we plot trair and test accuracy achieved by training exactly one layer of a fully connected neural net on MNIST The different lines are generated by varying the hidden layer chosen to train. 
All other layers are kept frozen after random initialization. We see that training lower hidden layers leads to bette. performance. The networks had width k = 100, weight variance o?, = 2, and hard-tanh nonlin- earities. Note that we only train from the second hidden layer (weights W(1)) onwards, so that the number of parameters trained remains fixed. While the theory addresses training accuracy and not generalization accuracy, the same monotonic pattern is seen for both.\nGiven this class of functions, one useful measure of expressivity is determining how heteroge neous this class is. Inspired by classification tasks we formalize it as: given a set of inputs S = {x1,.., xs} C Rm, how many of the 2s possible dichotomies does this function class pro. duce on S?\nFor non-random inputs and non-random functions, this is a well known question upper bounded by. the Sauer-Shelah lemma (Sauer. [1972). We discuss this further in Appendix |D.1 In the randon setting, the statistical duality of weight sweeping and input sweeping suggests a direct proportior. to transitions and trajectory length for a fixed input. Furthermore, if the x; E S are sufficiently uncorrelated (e.g. random) class label transitions should occur independently for each x; Indeed we show this in Figure4(more figures, e.g. dichotomies vs transitions and observations, are includec. in the Appendix).\nObservation 2. Depth and Expressivity in a Function Class. Given the function class F as above the number of dichotomies expressible by F over a set of random inputs S by sweeping the firsi layer weights along a one dimensional trajectory W(0) (t) is exponential in the network depth n.\nTrain Accuracy Against Epoch Test Accuracy Against Epoch 0.6 lay 2 lay 3 lay 4 lay 5 0.5 lay 6 lay 7 lay 8 0.4 AACuur 0.3 0.2 0 100 200 300 400 5000 100 200 300 400 500 Epoch Number Epoch Number\nhard-tanh hard-tanh\nTable 1: List and location of key theoretical and experimental results\nRemaining Depth The results from Section2] particularly those linking dichotomies to trajectory. length, suggest that earlier layers in the network might have more expressive power. In particular, the remaining depth of the network beyond the layer might directly influence its expressive power. We see that this holds in the random network case (Figure 5), and also for networks trained on. MNIST and CIFAR-10. In Figures[6l7|we randomly initialized a neural network, and froze all the layers except for one, which we trained..\nTraining trades off between input-output map stability and expressivity. We also look at the effect of training on measures of expressivity by plotting the change in trajectory length and number. of transitions (see Appendix) during the training process. We find that for a network initialized with large Ow, the training process appears to stabilize the input-output map - monotonically decreasing. trajectory length (Figure [8) except for the final few steps. Interestingly, this happens at a faster rate in the vicinity of the data than for random inputs, and is accomplished without reducing weight. 
magnitudes.\nFor a network closer to the boundary of the exponential regime o?, = 3, where trajectory length growth is still exponential but with a much smaller base, the training process increases the trajectory length, enabiling greater expressivity in the resulting input-output map, Figure[9\nFigure 7: We repeat a similar experiment in Figure[6|with a fully connected network on CIFAR-10 and mostly observe that training lower layers again leads to better performance. The networks had width k = 200, weight variance o?, = 1, and hard-tanh nonlinearities. We again only train from the. second hidden layer on so that the number of parameters remains fixed..\nTrajectory Length during training MNIST inputs Random inputs. 102 101 0 2 4 6 8 10 12 0 2 4 6 8 10 12 Layer number Layer number\nFigure 8: Training acts to stabilize the input-output map by decreasing trajectory length for Ou large. The left pane plots the growth of trajectory length as a circular interpolation between twc MNIST datapoints is propagated through the network, at different train steps. Red indicates the start of training, with purple the end of training. Interestingly, and supporting the observation or remaining depth, the first layer appears to increase trajectory length, in contrast with all later layers suggesting it is being primarily used to fit the data. The right pane shows an identical plot but for ar interpolation between random points, which also display decreasing trajectory length, but at a slower rate. Note the output layer is not plotted, due to artificial scaling of length through normalization The network is initialized with o?, = 16. A similar plot is observed for the number of transitions (see Appendix.)\nTrajectory Length during training MNIST inputs Random inputs 102 101 1 : : 1 1 / 0 2 4 6 8 10 12 0 2 4 6 8 10 12 Layer number Layer number\nFigure 9: Training increases expressivity of input-output map for , small. The left pane plots the growth of trajectory length as a circular interpolation between two MNIST datapoints is propagated. through the network, at different train steps. Red indicates the start of training, with purple the end of training. We see that the training process increases trajectory length, likely to increase the. expressivity of the input-output map to enable greater accuracy. The right pane shows an identical plot but for an interpolation between random points, which also displays increasing trajectory length but at a slower rate. Note the output layer is not plotted, due to artificial scaling of length through normalization. The network is initialized with o?, = 3..\nIn this paper, we studied the expressivity of neural networks through three measures, neuron tran. sitions, activation patterns and dichotomies, and explained the observed exponential dependence or. depth of all three measures by demonstrating the underlying link to latent trajectory length. Having. explored these results in the context of random networks, we then looked at the consequences fo. trained networks (see Table[1). We find that the remaining depth above a network layer influences. its expressive power, which might inspire new pre-training or initialization schemes. Furthermore. we see that training interpolates between expressive power and better generalization. This relation. 
between initial and final parameters might inform early stopping and warm starting rules..\nWe thank Samy Bengio, Ian Goodfellow, Laurent Dinh, and Quoc Le for extremely helpful discus sion."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Yann LeCun. The unreasonable effectiveness of deep learning. In Seminar. Johns Hopkins University, 2014 Andrej Karpathy. The unreasonable effectiveness of recurrent neural networks. In Andrej Karpathy blog, 2015 Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural\nNorbert Sauer. On the density of families of sets. Journal of Combinatorial Theory, Series A, 13(1):145-147,. 1972. D. Kershaw. Some extensions of w. gautschi's inequalities for the gamma function. Mathematics of Computa-. tion, 41(164):607-611, 1983. Andrea Laforgia and Pierpaolo Natalini. On some inequalities for the gamma function. Advances in Dynamical Systems and Applications, 8(2):261-267, 2013. Richard Stanley. Hyperplane arrangements. Enumerative Combinatorics, 2011. Vladimir Naumovich Vapnik and Vlamimir Vapnik. Statistical learning theory, volume 1. Wiley New York."}, {"section_index": "5", "section_name": "Appendix", "section_text": "Here we include the full proofs from sections in the paper\nProof of Theorem 1 We prove this result for Fw with zero bias for technical simplicity. The result also translates over to Fw with bias with a couple of technical modifications\nParallel and Perpendicular Components: Given vectors x, y, we can write y = y + yu where y. is the component of y perpendicular to x, and yy is the component parallel to x. (Strictly speaking. these components should also have a subscript x, but we suppress it as the direction with respect to. which parallel and perpendicular components are being taken will be explicitly stated.)\nThis notation can also be used with a matrix W, see Lemma|1\nBefore stating and proving the main theorem, we need a few preliminary results\nW=w+w+w++w.\nl|Wx=0 Wx=0 yTWj =0 yTW=0\nT1W=0 yTW=0\ni.e. the row space of W is decomposed to perpendicular and parallel components with respect to (subscript on right), and the column space is decomposed to perpendicular and parallel component of y (superscript on left)\nIf we define x = Vx and ~ = Uu. then we see that\nl|Wx=0 1W x=0 yT^Wj =0 yTW=0\nas x, y have only one non-zero term, which does not correspond to a non-zero term in the compo. nents of W in the equations.\nThen, defining W = UT'll W V, and the other components analogously, we get equations of the form\nlWx=UT|W Vx=UTW_x = 0\nObservation 3. Given W, x as before, and considering W, W with respect to x (wlog a unit vector) we can express them directly in terms of W as follows: Letting W(i) be the ith row of W, we have\ni.e. the projection of each row in the direction of x. And of course\nThe motivation to consider such a decomposition of W is for the resulting independence between different components, as shown in the following lemma\nIn the following two lemmas, we use the rotational invariance of Gaussians as well as the chi distri bution to prove results about the expected norm of a random Gaussian vector.\nLemma 3. Norm of a Gaussian vector Let X E Rk be a random Gaussian vector, with X, iid\nT((k+1)/2) E [x| = 0V I(k/2)\nWe will find it useful to bound ratios of the Gamma function (as appear in Lemma|3) and so introduc the following inequality, from (Kershaw|1983) that provides an extension of Gautschi's Inequality\nTheorem 5. 
An Extension of Gautschi's Inequality For 0 < s < 1, we have\n1|2 1 1-s T(x+1) 1 1 x 2 T(x+ s) 2\n7 W W = (W(k))T . x)\nW=W-W\nLemma 2. Independence of Projections Let x be a given vector (wlog of unit norm.) If W is a random matrix with Wi; ~ N(O, o2), then W and W_ with respect to x are independent random variables.\n(a) We use the rotational invariance of random Gaussian matrices, i.e. if W is a Gaussian matrix, iid entries N(0, o2), and R is a rotation, then RW is also iid Gaussian, entries N(0, o2). (This follows easily from affine transformation rules for multivariate Gaussians.) Let V be a rotation as in Lemma1 Then W = WvT is also iid Gaussian, and furthermore, W and W partition the entries of W, so are evidently independent. But then W = W VT and W = W VT are also independent. (b) From the observation note that W and W. have a centered multivariate joint Gaussian distribution (both consist of linear combinations of the entries W; in W.) So it suffices to show that W and W have covariance 0. Because both are centered Gaussians, this is equivalent to showing IE(< W, W >) = 0. We have that E(< Wj,W >) =E(W|WT) =E(W|wT)-E(W|WT\nE(< W,W>)=E(WW)=E(WW) )-E(WWI\nProof. We use the fact that if Y is a random Gaussian, and Y, ~ N(0, 1) then [Y follows a chi distribution. This means that E(X/o) = 2F((k + 1)/2)/T(k/2), the mean of a chi distribution with k degrees of freedom, and the result follows by noting that the expectation in the lemma is multiplied by the above expectation.\nLemma 4. Norm of Projections Let W be a k by k random Gaussian matrix with iid entries N(0, o2), and x, y two given vectors. Partition W into components as in Lemma|1\\and let x be a nonzero vector perpendicular to x. Then\nA more formal proof can be seen as follows: let the pdf of X be fx7 (.). Then we wish to show\nx fx(x)dx > x| fx(x)dx x\n(l|x- |I+|I-x- ll) fx(x)dx > l|2x|fx(x)dx z (l|x|I +l-x|l) fx(x)dx\nN(0, o2), and x, y two given vectors. Partition W into components as in Lemma1|and let x be a nonzero vector perpendicular to x. Then (a) I(k/2) E [|wx|]=|x|oV2 (k - 1)/2 (b) If 1A is an identity matrix with non-zeros diagonal entry i iff i E A C [k], and [A] > 2 then I(|A|/2) 1/2 E [||1AWx|] |x||oV2 I((|A| - 1)/2) Proof. (a) Let U, V, W be as in Lemma[1As U, V are rotations, W is also iid Gaussian. Furthermore for any fixed W, with a = Va, by taking inner products, and square-rooting. we see thatWa = ||W a||. So in particular E[||Wx|] =E Wx But from the definition of non-zero entries of W+, and the form of x (a zero entry in the first coordinate), it follows that - Wx has exactly k -- 1 non zero entries, each a centered Gaussian with variance (k - 1)o2 ||x||2. By Lemma[3] the expected norm is as in the statement. We then apply Theorem5|to get the lower bound. (b) First note we can view 14W = 1W. (Projecting down to a random (as W is random) subspace of fixed size [A| = m and then making perpendicular commutes with making perpendicular and then projecting everything down to the subspace.) So we can view W as a random m by k matrix, and for x, y as in Lemma[1(with y projected down onto m dimensions), we can again define U,V as k by k and m by m rotation matrices respectively, and W = UwvT, with analogous properties to Lemma 1. 
Now we can finish as in part (a), except that -W_x may have only m - 1 entries, (depending on whether y is annihilated by projecting down by1 A) each of variance (k - 1)o2 ||x+|\nT(k/2) k 3 E [wx]=||x||oV] x|V2 ((k-1)/2 2 4\nr(|A|/2) A 3 E [1s+wx]|x|oV] 1x ((|A] 1)/2 2 4\nIE -Wx+ = E Wx\nE(X - D > E(XT\nProof. The inequality can be seen intuitively geometrically: as X has diagonal covariance matrix. the contours of the pdf of |X are circular centered at 0, decreasing radially. However, the contours of the pdf of X - are shifted to be centered around , and so shifting back to 0 reduces. the norm.\nNow we can pair points x, -x, using the fact that fx(x) = fx(-x) and the triangle inequality on the integrand to get\nProof. We first prove the zero bias case, Theorem 1 To do so, it is sufficient to prove that.\nas integrating over t gives us the statement of the theorem\nrepresentation is not saturated. Letting W, denote the ith row of matrix W, we now claim that\n((Wd);8z(d)+(W} d d A iEA.(d) W\nIndeed, by Lemma2|we first split the expectation over W(d) into a tower of expectations over the two independent parts of W to get\n(d+1\nthe norm over c ) with the sum in the term on the right hand side of the claim\nk 8zj (d+1) 8z1 Ew(d) d W(d)\n(where the indicator in the right hand side zeros out coordinates not in the active set.\nTo see this, first note, by definition\nd+1) W(d) (d).14 W\nd+1 K E\nW(d) =W d d\nTill now, we have mostly focused on partitioning the matrix w(d). But we can also set Sz(d) expression in (**), we derive a recurrence as below:.\nWe then split the column space of W(d) = -w(d) + |w(d), where the split is with respect to z(d+1) have an important relation:\nd+1\nLW(d)sz(d) = W(d)sz(d) _ y(d) ^(d+1)\\^(d+1\nNow note that for any index i E A, the right hand sides of (1) and (2) are identical, and so the vectors on the left hand side agree for all i E A. In particular,.\nWe would like a recurrence in terms of only perpendicular components however, so we first drop the \"w(d), \"w(d) (which can be done without decreasing the norm as they are perpendicular to the remaining terms) and using the above claim, have.\niEA d W\nIE L - W(d) 2 iEA d W\nThe outer expectation on the right hand side only affects the term in the expectation through the size norm only if |Aw(d)| 2 (else we cannot project down a dimension), and for |Aw(d)| 2,\n2\n8z d (d\nS\n((w(d)+\"w LW(d iEA d\nEw(d) So using Lemma[5|we have d We can then apply Lemma4|to get ) W(d) iEA\n/2 ) W(d) (d) iEA (d) iEA. d) W W\nThe outer expectation on the right hand side only affects the term in the expectation through the size\n-Vkp1\nBut by using Jensen's inequality with 1//x, we ge\nwhere the last equality follows by recognising the expectation of a binomial(k - 1, p) random vari able. So putting together, we get\nd+1 W(d 0 )p\nSo this becomes\nok 2+(k-1\nFinally, we can compose this, to get\nResult for non-zero bias In fact, we can easily extend the above result to the case of non-zerc bias. The insight is to note that because &z(d+1) involves taking a difference between z(d+1) (t + dt). and z(d+1) (t), the bias term does not enter at all into the expression for &z(d+1). So the computations. above hold, and equation (a) becomes.\nk-1+0w Jou\nWe use the fact that we have the probability mass function for an (k, p) binomial random variable to bound the /j term:\n1 1 (k-1)p+1 pj-1(1-p)k-j\n<1)>P(A<1)>\nwhere the last inequality holds for o 1 and follows by Taylor expanding e-x2/2 around 0. 
Simi- larly, we can also show that p\nd+1 Vok d+ c:|8x(t)|l 2+(k-1)\nwith the constant c being the ratio of ox(t) to ox(t)]. So if our trajectory direction is almost orthogonal to x(t) (which will be the case for e.g. random circular arcs, c can be seen to be ~ 1 by splitting into components as in Lemma[1] and using Lemmas[3|[4[)\ndrawn from W(0, ?)). So equation (b) becomes\nd+1 (d) 8z 4 + k\nwhere the second step follows from (Laforgia and Natalinil 2013), and holds for k > 1"}, {"section_index": "6", "section_name": "Proof of Theorem 2", "section_text": "Proof. For op = 0\nx 2 P arctan(k TT\nX\n1 2+V27\nStatement and Proof of Upper Bound for Trajectory Growth Replace hard-tanh with a linear. = (W(d) z(d)); + b. This provides an upper bound on the norm. We also then recover a chi distribution with k terms, each with standard deviation y\n((k+1)/2) Ow IE 1 F(k/2) k2\n1 k+1 2 0w k\nAs we are in V .(d) as sensitive to v. U1 A sufficient condition for this to happen is if |W1| |j+i Wj1|. But X = Wi1 ~ N(0, o27k) and ji Wj1 = Y' ~ N(0, (k - 1)o2/k). So we want to compute P(|X| > |Y'I). For ease of computation, we instead look at P([X| > [YD), where Y ~ N(0, o2).\nBut this is the same as computing P(|X|/(Y] > 1) = P(X/Y < -1) + P(X/Y > 1). But the ratio of two centered independent normals with variances o?, o? follows a Cauchy distribution, with parameter 1/2, which in this case is 1/k. Substituting this in to the cdf of the Cauchy distribution. we get that\nneurons in the layer below. Using this, and the fact the while u. night fip very quickly fron\nLet T(d) be a random variable denoting the number of transitions in layer d. And let T(d) be a. random variable denoting the number of transitions of neuron i in layer d. Note that by linearity of\nNow, assume we have partitioned our input space into convex polytopes with hyperplanes from layers d-1. Consider v,d) and a specific polytope R,. Then the activation pattern on layers d-1 .(d) some constant term, comprising of the bias and the output of saturated units. Setting this expression to zero (for ReLUs) or to 1 (for hard-tanh) again gives a hyperplane equation, but this time, the equation is only valid in R, (as we get a different linear function of the inputs in a different region.) also constant on R;. The theorem then follows.\nCorollary 2. Transitions and Output Patterns in an Affine Trajectory For any affine one dimensional trajectory x(t) = xo +t(x1 - xo) input into a neural network Fw, we partition R 3 t into intervals every time a neuron transitions. Every interval has a unique network activation pattern on Fw.\nGeneralizing from a one dimensional trajectory, we can ask how many regions are achieved over the entire input - i.e. how many distinct activation patterns are seen? We first prove a bound on the number of regions formed by k hyperplanes in Rm (in a purely elementary fashion, unlike the proof presented in (Stanley2011)\nTheorem 6. Upper Bound on Regions in a Hyperplane Arrangement Suppose we have k hyper planes in Rm - i.e. k equations of form Q;x = i. for Q, E Rm, , E R. Let the number of regions (connected open sets bounded on some sides by the hyperplanes) be r(k, m). Then\nProof. Let the hyperplane arrangement be denoted H, and let H E H be one specific hyperplane Then the number of regions in H is precisely the number of regions in H - H plus the number of\noy the independence of these two events, E =E11,1)]ET But the firt on the right hand side is O(1/) by (c), so putting it all together, E [T(d+1) /kE. n(d)\nProof. 
We show inductively that Fw partitions the input space into convex polytopes via hyper- sidering all such hyperplanes over neurons in the first layer, we get a hyperplane arrangement in the input space, each polytope corresponding to a specific activation pattern in the first hidden layer\nThis implies that any one dimensional trajectory x(t), that does not 'double back' on itself (i.e reenter a polytope it has previously passed through), will not repeat activation patterns. In particular after seeing a transition (crossing a hyperplane to a different region in input space) we will never return to the region we left. A simple example of such a trajectory is a straight line:\nm k r(k,m) < i=0\nregions in H H. (This follows from the fact that H subdivides into two regions exactly all of the regions in H H, and does not affect any of the other regions.)\nIn particular, we have the recursive formula\nr(k,m)=r(k-1,m)+r(k-1,m-1)\nwhere the last equality follows by the well known identity\na a a+1 b 6+\nThis concludes the proof\nWith this result, we can easily prove Theorem4as follows:\nProof. First consider the ReLU case. Each neuron has one hyperplane associated with it, and so b Theorem[6[ the first hidden layer divides up the inputs space into r(k, m) regions, with r(k, m) O(km\nNow consider the second hidden layer. For every region in the first hidden layer, there is a different activation pattern in the first layer, and so (as described in the proof of Theorem 3) a different hyperplane arrangement of k hyperplanes in an m dimensional space, contributing at most r(k, m regions.\nIn particular, the total number of regions in input space as a result of the first and second hidden layers is r(k, m) * r(k, m) < O(k2m). Continuing in this way for each of the n hidden layers. gives the O(kmn) bound.\nA very similar method works for hard tanh, but here each neuron produces two hyperplanes, result ing in a bound of O((2k)mn)"}, {"section_index": "7", "section_name": "D.1 UPPER BOUND FOR DICHOTOMIES", "section_text": "The Vapnik-Chervonenkis (VC) dimension of a function class is the cardinality of the largest set of points that it can shatter. The VC dimension provides an upper (worst case) bound on the gener-. alization error for a function class (Vapnik and Vapnik]1998). Motivated by generalization error. VC dimension has been studied for neural networks (Sontag1998] Bartlett and Maass] 2003). In (Bartlett et al.]1998) an upper bound on the VC dimension v of a neural network with piecewise. polynomial activation function and binary output is derived. For hard-tanh units, this bound is.\nwhere Wis the total number of weights, n is the depth, and k is the width of the network. The VC dimension provides an upper bound on the number of achievable dichotomies |F| by way of the\nWe now induct on k + m to assert the claim. The base cases of r(1, 0) = r(0, 1) = 1 are trivial, and assuming the claim for < k + m - 1 as the induction hypothesis, we have.\nm m i=0 i= d-1 i=0 m-1 K k 0 i=0\nv =2|W|nlog(4e|W|nk) +2|W|n2log2+ 2n\nv =2|W|nlog(4e|W|nk) +2|W|n2log2+ 2n\nSauer-Shelah lemma (Sauer 1972)\nBy combining Equations 4|and 5 an upper bound on the number of dichotomies is found, with a growth rate which is exponential in a low order polynomial of the network size\nOur results further suggest the following conjectures\nConjecture 1. 
As network width k increases, the exploration of the space of dichotomies increas ingly resembles a simple random walk on a hypercube with dimension equal to the number of input S|\nThis conjecture is supported by Figure [10] which compares the number of unique dichotomies achieved by networks of various widths to the number of unique dichotomies achieved by a ran dom walk. This is further supported by an exponential decrease in autocorrelation length in function. space, derived in our prior work (Poole et al.2016).\nConjecture 2. The expressive power of a single weight W(d) at layer d in a random network F and for a set of random inputs S, is exponential in the remaining network depth dr = (n - d). Here expressive power is the number of dichotomies achievable by adjusting only that weight.\nThat is, the expressive power of weights in early layers in a deep hard-tanh network is exponentially greater than the expressive power of weights in later layers. This is supported by the invariance tc. layer number in the recurrence relations used in all proofs directly involving depth. It is also directly supported by simulation, as illustrated in Figure[5] and by experiments on MNIST and CIFAR10 as illustrated in Figures6\nWe implemented the random network architecture described in Section2.1 In separate experiments we then swept an input vector along a great circle trajectory (a rotation) for fixed weights, and swep weights along a great circle trajectory for a fixed set of inputs, as described in Section|2.4 In both cases, the trajectory was subdivided into 106 segments. We repeated this for a grid of network widths k, weight variances o?, and number of inputs s. Unless otherwise noted, , = 0 for all experiments. We repeated each experiment 10 times and averaged over the results. The simulation results are discussed and plotted throughout the text.\nThe networks trained on MNIST and CIFAR-1O were implemented using Keras and Tensorflow, anc trained for a fixed number of epochs with the ADAM optimizer.\nDichotomies vs. Transitions 105 k =2 k =8 104 k =32 k =128 10 k =512 All dichotomies 10 Random walk. 10 100 100 101 102 103 104 105 Transitions\nFigure 10: Here we plot the number of unique dichotomies that have been observed as a function of the number of transitions the network has undergone. Each datapoint corresponds to the number of transitions and dichotomies for a hard-tanh network of a different depth, with the weights in the first layer undergoing interpolation along a great circle trajectory W(0) (t). We compare these plots to a random walk simulation, where at each transition a single class label is flipped uniformly at random. Dichotomies are measured over a dataset consisting of s = 15 random samples, and all networks had weight variance o?, = 16. The blue dashed line indicates all 2s possible dichotomies.\ne|S] U F U\nTransitions Count during Training MNIST data Random data 20 16 14 15 12 cennr 10 Traannns 10 8 6 5 4 2 : 0 0 0 2 4 6 8 10 12 0 2 4 6 8 10 12 Layers\nFigure 11: An identical plot to Figure|8|but for transition count\nTrain Accuracy Against Epoch Test Accuracy Against Epoch 0.6 lay 2 lay 3 0.5 lay 4 lay 5 lay 6 lay 7 0.4 lay 8 Aeeunrey 0.3 0.2 0.1 0.0 0 100 200 300 400 5000 100 200 300 400 500 Epoch Number Epoch Number\nFigure 12:We repeat the experiment in Figure 6|for a convolutional network trained on CIFAR 10. The network has eight convolutional hidden layers, with three by three filters and 64 filters ir each layer, all with ReLU activations. 
The final layer is a fully connected softmax, and is trained ir addition to the single convolutional layer being trained. The results again support greater expressive power with remaining depth. Note the final three convolutional layers failed to effectively train, anc performed at chance level.\nWe also have preliminary experimental results on Convolutional Networks. To try and make the comparisons fair, we implemented a fully convolutional network (no fully connected layers except for the last layer).\nWe also include the plot showing the effect of training on number of transitions for interpolated MNIST and interpolated random points"}] |
BJK3Xasel | [{"section_index": "0", "section_name": "NONPARAMETRIC NEURAL NETWORKS", "section_text": "George Philipp. Jaime G. Carbonel\nCarnegie Mellon University. Pittsburgh. PA 15213. USA\ngeorge.philipp@email.de; jgc@cs.cmu.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Automatically choosing a neural network model for a given task without prior information is. challenging problem. Formally, let O be the space of all models considered. The goal of mode. selection is then, usually, to find the value of the hyperparameter 0 E O that minimizes a certaii criterion c(0), such as the validation error achieved by the model represented by 0 when traine to convergence. Because O is large, structured and heterogeneous, c is complex, and gradient of c are generally not available, the most popular methods for optimizing c perform zero-order. black-box optimization and do not use any information about c except its value for certain value of 0. These methods select one or more values of 0, compute c at those values and, based oj. the results, select new values of 0 until convergence is achieved or a time limit is reached. Th most popular such methods are grid search, random search (e.g.Bergstra & Bengio(2012)) an Bayesian optimization using Gaussian processes (e.g.Snoek et al.(2012)). Others utilize randon forests (Hutter et al.]2009), deep neural networks (Snoek et al.f2015) and recently Bayesian neura. networks (Springenberg et al.]2016) and reinforcement learning (Zoph & Le]2017).\nThese black-box methods have two drawbacks. (A) To obtain each value of c, they execute a full network training run. Each run can take days on many cores or multiple GPUs. (B) They do no1 exploit opportunities to improve the value of c further by altering 0 during each training run. In this paper, we present a framework we term nonparametric neural networks for selecting network size. We dynamically and automatically shrink and expand the network as needed to select a good network size during a single training run. Further, by altering network size during training, the network ultimately chosen can achieve a higher accuracy than networks of the same size that are trained from scratch and, in some cases, achieve a higher accuracy than is possible by black-box methods.\nThere has been a recent surge of interest in eliminating unnecessary units from neural networks either during training or after training is complete. This strategy is called pruning. Alvarez 8 Salzmann (2016) utilize an l2 penalty to eliminate units and Molchanov et al. (2017) compare a variety of strategies, whereas Figurnov et al.(2016) focuses on thinning convolutional layers i the spatial dimensions. While some of these methods even allow some previously pruned units t be added back in (e.g.Feng & Darrell(2015)), all of these strategies require a high-performing network model as a starting point from which to prune, something that is generally only available ir well-studied vision and NLP tasks. We do not require such a starting point in this paper.\nIn section|2] we introduce the nonparametric framework and state its theoretical soundness, which we prove in section7.1 In section[3] we develop the machinery for training nonparametric networks,"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Automatically determining the optimal size of a neural network for a given task without prior information currently requires an expensive global search and train ing many networks from scratch. 
In this paper, we address the problem of auto matically finding a good network size during a single training cycle. We intro duce nonparametric neural networks, a non-probabilistic framework for conduct- ng optimization over all possible network sizes and prove its soundness when network growth is limited via an lp penalty. We train networks under this frame- work by continuously adding new units while eliminating redundant units via an 2 penalty. We employ a novel optimization algorithm, which we term \"Adaptive Radial-Angular Gradient Descent'\"' or AdaRad. and obtain promising results.\nincluding a novel normalization layer in section [3.2] CapNorm, and a novel training algorithm ir section[3.3] AdaRad. We provide experimental evaluation and analysis in section4 further relevant literature in section5|and conclude in section|6\nFor the purpose of this section, we define a parametric neural network as a function f(x) = L.(L-1.(..02.(1.(xW1)W2)..)Wt) of a do-dimensional row vector x, where Wi E Rdi-1*di, 1 l < L are dense weight matrices of fixed dimension and o : R -> R, 1 l L are fixed non-linear transformations that are applied elementwise, as signified by the .Q operator. The number of layers L is also fixed. Further, the weight matrices are trained by solving the mini- mization problem minw=(w): |D] (x,y)ED e(f(W, x), y) + (W), where D is the dataset, e is an error function that consumes a vector of fixed size d1, and the label y, and is the regularizer.\nWe define a nonparametric neural network in the same way, except that the dimensionality of the weight matrices is undetermined. Hence, the optimization problem becomes.\n1 min min e(f(W,x),y)+2(W (x,y)ED\nNote that the dimensions do and d1, are fixed because the data and the error function e are fixed. The parameter value now takes the form of a pair (d, W)..\nThere is no guarantee that optimization problem |1|has a global minimum. We may be able to reduce the value of the objective further and further by using larger and larger networks. This would be problematic, because as networks become better and better with regards to the objective, they would become more and more undesirable in practice. It turns out that in an important case, this degeneration does not occur. Define the fan-in regularizer Nn. and the fan-out regularizer Nout as\nL di Nin(W,,p) X>>H|[Wi(1,j),Wi(2,j),,Wi(d-1,J)]l I l=1 j=1 L di-1 =||[Wi(i,1),Wi(i,2),,Wi(i,di)|lp Nout(W,\\,p) l=1 i=1\nIn plain language, we either penalize the incoming weights (fan-in) of each unit with a p-norm,. or the outgoing weights (fan-out) of each unit. We now state the core theorem that justifies our. formulation of nonparametric networks. The proof is found in the appendix in section|7.1\nTheorem 1. Nonparametric neural networks achieve a global training error minimum at some finit dimensionality when is a fan-in or a fan-out regularizer with X > 0 and 1 < p < oo\nTraining nonparametric networks is more difficult than training parametric networks, because the. space over which we optimize the parameter (d, W) is no longer a space of form Rd, but is an. infinite, discrete union of such spaces. However, we would still like to utilize local, gradient-based. search. We notice, like (Wei et al.]2016), that there are pairs of parameter values with different. dimensionality that are still in some sense \"close\" to one another. Specifically, we say that two. parameter values (d1, W1) and (d2, W2) are f-equivalent if Vx E Rdo, f(W1, x) = f(W2, x). where not necessarily d = d2. 
During iterative optimization, we can \"jump\"' between those two. parameter values while maintaining the output of f and thus preserving locality. We define a zero. unit as any unit for which either the fan-in or fan-out or both are the zero vector. Given any parameter. value, the most obvious way of generating another parameter value that is f-equivalent to it is to. add a zero unit to any hidden laver l where (0) = 0 holds. Further. if we have a parameter value\nThus, we will use the following strategy for training nonparametric networks. We use gradient-base. methods to adjust W while periodically adding and removing zero units. We use only nonlinearitie that satisfy o(0) = 0. It should be noted that while adding and removing zero units leaves the outpu of f invariant, it does change the value of the fan-in and fan-out regularizers and thus the value o the objective. While it is possible to design regularizers that do not penalize such zero units, this i. highly undesirable as it would stifle the regularizers ability to \"reign in' the growth of the networl. during training.\nWhen a new zero unit is added, we must choose its fan-in and fan-out. While one of the two weigh vectors must be zero, the other can have an arbitrary value. We make the simple choice of initializing the other weight vector randomly. Since we are going to use the fan-in regularizer, we will initialize the fan-out to zero and the fan-in randomly. This will give each new unit the chance to learn and become useful before the regularizer can shrink its fan-in to zero. If it does become zero nonetheless the unit is eliminated."}, {"section_index": "3", "section_name": "3.1 SELE-SIMILAR NONLINEARITIES", "section_text": "For layers 1 through L - 1, it is best to use nonlinearities that satisfy o(cs) = co(s) for all c E R>0 and s E R. We call such nonlinearities self-similar. ReLU (Dahl et al.]2013) is an example of this. Self-similarity also implies o(0) = 0\nRecall that the fan-in and fan-out regularizers shrink the values of weights during training. This in turn affects the scale of the values to which the nonlinearities are applied. (These values are called pre-activations.) The advantage of self-similar nonlinearities is that this change of scale does not affect the shape of the feature.\nIn contrast, the impact of a nonlinearity such as tanh on pre-activations varies greatly based on their scale. If the pre-activations have very large absolute values, tanh effectively has a binary output If they have very small absolute values, tanh mimics a linear function. In fact, all nonlinearities that are differentiable at O behave approximately like a linear function if the pre-activations have sufficiently small absolute values. This would render the unit ineffective. Since we expect some units to have small pre-activations due to shrinkage, this is undesirable..\nBy being invariant to the scale of pre-activations, self-similar nonlinearities further eliminate the need to tune how much regularization to assign to each layer. This is expressed in the following. proposition which is proved in section|7.2.\nProposition 1. If all nonlinearities in a nonparametric network model except possibly o1, are self similar, then the objective function 1 using a fan-in or fan-out regularizer with different regulariza tion parameters X1, .., L for each layer is equivalent to the same objective function using the single . A) t for each laver. up to rescaling of weights. 
regularization parameter \\\nTo be able to reduce the network size during training, we must produce zero units and, it turns out. the fan-in and fan-out regularizers naturally produce such units as they induce sparsity, i.e. they cause individual weights to become exactly zero. This is well studied under the umbrella of sparse regression (see e.g.Tibshirani(1996)). The cases p = 1 and p = 2 are especially attractive because it is computationally convenient to integrate them into a gradient-based optimization framework via a shrinkage / group shrinkage operator respectively (see e.g.Back & Teboulle(2006)). Further, p = 1 and p = 2 differ in their effect on the parameter value. p = 1 sets individual weights to zero and thus leads to sparse fan-ins and fan-outs and thus ultimately to sparse weight matrices. A unit can only become a zero unit if each weight in its fan-in or each weight in its fan-out has been set to zero individually. p = 2, on the other hand, sets entire fan-ins (for the fan-in regularizer) or fan-outs (for the fan-out regularizer) to zero at once. Once the resulting zero units are removed, we obtain dense weight matrices. (For a basic comparison of 1-norm and 2-norm regularizers, see Yuan & Lin (2006) and for a comparison in the context of neural networks, see[Collins & Kohli](2014).) While there is recent interest in learning very sparse weight matrices (e.g.Guo et al. (2016)), current hardware is geared towards dense weight matrices (Wen et al.]2016). Hence, for the remainder of this paper, we will focus on the case p = 2. Further, we will focus on the fan-in rather than the fan-out regularizer.\nRecently,Ioffe & Szegedy(2015) proposed a strategy called batch normalization that quickly be came the standard for keeping feed-forward networks well-conditioned during training. In our ex periments, nonparametric networks trained without batch normalization could not compete with parametric networks trained with it. Batch normalization cannot be applied directly to nonparamet- ric networks with a fan-in or fan-out regularizer, as it would allow us to shrink the absolute value of individual weights arbitrarily while compensating with the batch normalization layer, thus negating the regularizer. Hence, we make a small adjustment which results in a strategy we term cappea batch normalization or CapNorm. We subtract the mean of the pre-activations of each hidden unit but only scale their standard deviation if that standard deviation is greater than one. If it is less thar one, we do not scale it. Also, after the normalization, we do not add or multiply the result with a free parameter. Hence, CapNorm replaces each pre-activation z with ma- and o is the standard deviation of that unit's pre-activations across the current mini-batch.\nTable 1: Computational cost of efficient implementations of various algorithms, per mini-batch and weight. Operations that do not scale with the number of weights are not included. Operations associated with the computation of the gradient of the loss term (e.g. lines|7|and|8|in algorithm 1] as well as unit addition and removal (e.g. lines[18|to[24|in algorithm[1) are not included as they do not vary between algorithms.\nSGD, no l2 shrinkage SGD with l2 shrinkage AdaRad, no l2 shrinkage AdaRad with l shrinkage RMSprop, no l2 shrinkage RMSprop with l2 shrinkag\nparam., nonparan param., nonparar param., nonparar param., nonparar param. param.\nThe staple method for training neural networks is stochastic gradient descent. Further, there ar. 
several popular variants: momentum and Nesterov momentum (Sutskever et al.. 2013), AdaGra (Duchi et al.]2011) and AdaDelta (Zeiler)2012), RMSprop (Tieleman & Hinton 2012) and Adan (Kingma & Ba2015). All of these methods center around two key principles: (1) averaging th. gradient obtained over consecutive iterations to smooth out oscillations and (2) normalizing eac. component of the gradient so that each weight learns at roughly the same speed. Principle (2) turn. out to be especially important for nonparametric neural networks. When a new unit is added, i does not initially contribute to the quality of the output of the network and so does not receive muc gradient from the loss term. If the gradient is not normalized, that unit may take a very long time t learn anything useful. However, if we use a fan-in regularizer, we cannot normalize the component. of the gradient outright as in e.g. RMSprop, as we would also have to scale the amount of shrinkag. induced by the regularizer accordingly. This, in turn, would cause the fan-in of new units to becom. zero before they can learn anything useful..\nWe resolve this dilemma with a new training algorithm: Adaptive Radial-Angular Gradient Descen (AdaRad), shown in algorithm[1] Like in all the algorithms cited above, we begin each iteratior by computing the gradient G of the loss term over the current mini-batch (line [8). Then, for each 1 l L and 1 j di, we decompose the sub-vector [Gi(1, j), Gi(2,j), .., G(d-1,j)] into a component parallel to its corresponding fan-in [Wi(1, j), Wi(2, j), .., W(di-1, j)] and a component orthogonal to it (line 11). Out of the two, we normalize only the orthogonal component (line 14 while the parallel component is left unaltered. Finally, the normalized orthogonal component of each sub-vector is added to its corresponding fan-in in radial-angular coordinates instead of cartesiar coordinates (line|16). This ensures that it does not affect the length of the fan-in. Like the paralle component, we leave the induced shrinkage unaltered. Note that l2 shrinkage acts only to shorter the length of each fan-in, but does not alter its direction. Hence, AdaRad with an l2 regularizer applies a normalized shift to each fan-in that alters its direction but not its length (angular shift), as well as an un-normalized shift that includes shrinkage that alters the length of the fan-in but not its direction (radial shift, lines[15and17).\nAdaRad has two step sizes: One for the radial and one for the angular shift, dr and a respectively This is desirable as they both control the behavior of the training algorithm in different ways. The radial step size controls how long it takes for the fan-in of a unit to be shrunk to zero, i.e. the time a unit has to learn something useful. On the other hand, the angular step size controls the general speed of learning and is tuned to achieve the quickest possible descent along the error surface.\nLike RMSprop and unlike Adam, AdaRad does not make use of the principle of momentum. We have developed a variant called AdaRad-M that does. It is described in the appendix in section7.3\nUsing AdaRad over SGD incurs additional computational cost. However, that cost scales more. gracefully than the cost of, for example, RMSprop. AdaRad normalizes at the granularity of fan-ins. instead of the granularity of individual weights, so many of its operations scale only with the number of units and not with the number of weights in the network. 
In Table[1] we compare the costs of SGD,.\nCost per mini-batch and weight\nFigure 1: Architecture of the nonparametric networks used in the experiments. Activations flow. rightward, gradients flow leftward. In color, we show how each element corresponds to our defini-. tion of a neural network in section 2] CapNorm does not fully fit our definition of nonlinearity as it requires information from multiple datapoints to compute its value. Hence, theorem|1|and propo sition|1|do not technically apply. However, CapNorm is a benign operation that does not lead to. problems in practice."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "We evaluated our framework using the network architecture shown in Figure[1with ReLU nonlin earities and CapNorm, and using AdaRad as the training algorithm. We used two hidden layer. (L = 3) and started off with ten units in each hidden layer and each fan-in initialized randomly witl expected length 1. We add one new unit with random fan-in of expected length 1 and zero fan-ou to each layer every epoch. While this does not lead to fast convergence - we have to wait until tens or hundreds of units are added - we believe that growing nets from scratch is a good test case fo1 investigating the robustness of our framework. After the validation error stopped improving, we this allows each new unit ~ 50 epochs to train before being eliminated by shrinkage, assuming the length of the fan-in is not altered by the gradient of the loss term."}, {"section_index": "5", "section_name": "4.1 PERFORMANCE", "section_text": "In this section, we investigate our two core questions: (A) Do nonparametric networks converge t a good size? (B) Do nonparametric networks achieve higher accuracy than parametric networks?\nWe evaluated our framework using three standard benchmark datasets - the mnist dataset, the rect. angles images dataset and the convex dataset (Bergstra & Bengio2012). We started by training. nonparametric networks. Through preliminary experiments, we determined a good starting angula step size for all datasets. We chose to start with ag = 30 and repeatedly divided a by 3 when the. validation error stopped improving. By varying the random seed, we trained 10 nets each for several. values of the regularization parameter per dataset and then chose a typical representative from. among those 10 trained nets. Results are shown in black in figure2 Values of A are 3 * 10-3, 10-3 and 3 * 10-4 for MNIST, 3 * 10-5 and 10-6 for rectangles images and 10-5 and 10-8 for convex.\nLabe Cross -Entropy W OL Data Linear CapNorm ReLU Linear CapNorm Softmax L-1 repetitions\nAdaRad and RMSprop. Further, RMSprop has a larger memory footprint than AdaRad. Compared to SGD, it requires an additional cache of size equal to the number of weights, whereas AdaRad only requires 2 additional caches of size equal to the number of units\nWhen training parametric networks, we replaced CapNorm with batch normalization, either with or without trainable free mean and variance parameters. We trained the network using one of the following algorithms: SGD, momentum, Nesterov momentum, RMSprop or Adam. 
Further experimental details can be found in the appendix in section 7.4.

[Figure 2 panels: test classification error versus number of parameters (*10^4), one panel each for MNIST, rectangles images and convex.]

Figure 2: Test classification error of trained networks. Nonparametric networks are shown in black, parametric networks in red and blue. Error bars indicate the range over 10 random reruns of the same setting. For parametric networks, the square represents the median test error over those 10 runs. For nonparametric networks, the square represents the test error and size of a single representative run that was close to the median in both size and error. In brackets below or above each plotted point, we show the number of units in the two hidden layers.

Then, we trained parametric networks of the same size as the chosen representatives. The top performers after an exhaustive grid search are shown in red in figure 2. Finally, we conducted an exhaustive random search where we also varied the size of both hidden layers. The top performers are shown in blue in the same figure.

We obtain different results for the three datasets. For MNIST, nonparametric networks substantially outperform parametric networks of the same size. The best nonparametric network is close in performance to the best parametric network, while being substantially smaller (144 first-layer units versus 694). For rectangles images, nonparametric networks underperform parametric networks of the same size when λ is large and outperform them when λ is small. Here, the best nonparametric network has the globally best performance, as measured by the median test error over 10 random reruns, using substantially fewer parameters than the best parametric network.

While results for the first two datasets are very promising, nonparametric networks performed badly on the convex dataset. Parametric networks of the same size perform substantially better and also have a smaller range of performance across random reruns. Even if the model found by training nonparametric networks were re-trained as a parametric network, the apparent tendency of nonparametric networks to converge to relatively small sizes hurts us here, as we would still miss out on a significant amount of performance.

We also conducted experiments with AdaRad-M, but found that performance was very similar to that of AdaRad. Hence, we omit the results. Similarly, we found no significant difference in performance between parametric networks trained with RMSprop and those trained with Adam.

In this section, we analyze in detail a single training run of a nonparametric network. We chose MNIST as dataset, set λ = 3 * 10^-4 and lowered the angular step size to 10, as we did not use step size annealing. We trained for 1000 epochs while adding one unit to each hidden layer per epoch, then trained another 1000 epochs without adding new units. The final network had 193 units in the first hidden layer and 36 units in the second hidden layer. The results are shown in figure 3.

In part (A), we show the validation classification error. As a comparison, we trained two parametric networks with 193 and 36 hidden units for 1000 epochs, once using SGD and the same step size and
λ as the nonparametric network, and once using optimal settings (RMSprop, α = 300, λ = 0). It is not surprising that the parametric networks reach a good accuracy level faster, as the nonparametric network must wait for its units to be added. Also, the parametric network benefits from an increased step size, in this case α = 300. This was true throughout our experimental evaluation.

[Figure 3 panels: (A) validation classification error, (B) training cross-entropy error, (C) size of hidden layers, (D) life lengths of units in the 1st hidden layer, (E) lengths of fans in the 1st hidden layer, (F) lengths of fans in the 2nd hidden layer; all plotted against epoch.]

Figure 3: Detailed statistics of a nonparametric training run. See main text for details.

In (B), we show the training cross-entropy error for the same training runs. Interestingly, parametric networks reach an error very close to zero. In fact, the unregularized network reaches a value of ~10^-6 and the regularized network reaches a value of ~10^-4. Both made zero classification mistakes on the training set after training. In contrast, the nonparametric network did not have a near-zero training cross-entropy error. Towards the end of training, it still misclassified around 3 out of 50,000 training examples. However, this did not harm its performance on the validation set. In fact, the validation error of nonparametric networks tended to improve slowly for many epochs, whereas unregularized parametric networks (which were the best parametric networks when early stopping is used) tended to have a slightly increasing validation error in the long run.

In (C), we show the size of the two hidden layers during training. These curves are very typical of all training runs we examined. For the first ~50 epochs, no units are eliminated. This is because, with our choice of radial step size (α_r = 50λ), a newly added unit with fan-in of expected length 1 needs ~50 epochs to be eliminated, assuming no impact from the gradient of the loss term. If the layer requires a relatively large number of units, it will keep growing linearly for a while and then either plateau or shrink slightly. Once we no longer add units after 1000 epochs, both layers shrink linearly by 50 units over the following ~50 epochs, as the units that were added roughly between epochs 950 and 1000 are eliminated in succession. Overall, this process shows the value of controlling α_a and α_r independently, as we can manage the "overhead" of extraneous units present during training while still ensuring an ideal speed of learning.

In (D), we show the length of time individual units in the first hidden layer were present during training. On the x axis, we show the epoch during which a given unit was added. On the y axis, we show the number of epochs the unit was present. Green bars represent units that survived until the end, while black bars represent units that did not. As one might expect, units were more likely to survive the earlier they were added. Units that did not survive were eliminated in ~50 epochs.
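For concreteness, the following sketch (Python/NumPy, our own function and variable names) shows the l_p fan-in penalty that drives this elimination process, together with the corresponding pruning of dead units. Since the paper's admissible nonlinearities satisfy σ(0) = 0, a unit whose fan-in has been shrunk to zero always outputs zero, so removing it (together with its fan-out) leaves the network function unchanged.

import numpy as np

def fanin_penalty(W_list, lam, p=2):
    # Group-lasso-style fan-in regularizer: lam times the sum, over all
    # layers l and output units j, of the p-norm of column j of W_l.
    return lam * sum(np.linalg.norm(W, ord=p, axis=0).sum() for W in W_list)

def prune_dead_units(W_in, W_out, tol=0.0):
    # Remove hidden unit j when its fan-in (column j of W_in) has been
    # shrunk to zero; the matching fan-out (row j of W_out) goes with it.
    alive = np.linalg.norm(W_in, axis=0) > tol
    return W_in[:, alive], W_out[alive, :]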
The same graph for the second hidden layer is shown in figure 4.

In (E) and (F), we show the lengths of fan-ins (blue) and fan-outs (red) of units in the hidden layers. For each layer, we depict the following units in dark colors: three randomly chosen units that were initially present, as well as units that were added at epochs 0, 25, 50, 100, 200, 300, ..., 1000. In addition, in light colors, we show three units that were added late but not eliminated. We see a consistent pattern for individual units. First, their length decreases linearly, as the CapNorm layer filters the component of the gradient parallel to the fan-ins as long as the standard deviation of the pre-activations σ exceeds 1. During this period, the unit learns something useful and so the fan-out increases in length. When finally σ < 1, the parallel component of the gradient starts to slow down the decay and, if the unit has become useful enough, reverses it. If the decay is not reversed, the unit is eliminated. If it is reversed, both fan-in and fan-out will attain a length comparable to those of well-established units.

From a global perspective, we notice that fan-ins in the first layer have lengths much less than 1. This is because first-layer units encode primarily AND functions of highly correlated input features, meaning weights of small magnitude are sufficient to attain σ = 1. In contrast, lengths of fan-ins in the second layer are more chaotic. We found this is because σ = 1 is generally NOT attained in the second layer. In fact, the network compensated for lower activation values in the second layer by assigning fan-ins of stable lengths between 3.5 and 4.5 to the 10 output units. The network can assign these lengths dynamically without altering the output of the network because ReLU is self-similar, as described in section 3.1.

Table 2: Test classification error of various models trained on the poker dataset.

Algorithm                   λ       Starting net size   Final net size    Error
Logistic regression (ours)  -       -                   -                 49.9%
Naive Bayes (OpenML)        -       -                   -                 48.3%
Decision tree (OpenML)      -       -                   -                 26.8%
Nonparametric net           10^-3   10-10-10-10         23-24-15-4        0.62%
                            10^-5   10-10-10-10         94-135-105-35     0.022%
                            10^-6   10-10-10-10         210-251-224-104   0.001%
                            10^-7   10-10-10-10         299-258-259-129   0%
Parametric net              -       23-24-15-4          unchanged         0.20%
                            -       94-135-105-35       unchanged         0.003%
                            -       210-251-224-104     unchanged         0.003%
                            -       299-258-259-129     unchanged         0.002%

"}, {"section_index": "6", "section_name": "4.3 SCALABILITY", "section_text": "Finally, we wanted to verify whether nonparametric networks could be applied to a large dataset. We visited OpenML (http://www.openml.org/), a website containing many datasets as well as the performance of various machine learning models applied to those datasets. We applied nonparametric networks to the largest classification dataset on OpenML meeting our standards². This was the poker dataset (http://www.openml.org/d/354). It is a binary classification dataset with 1,025,010 datapoints and 14 features per datapoint. We had no prior information about this dataset. In general, we think that nonparametric networks are most useful in cases with no prior information and thus no possibility of choosing a good parametric model a priori.

²Our standards were: at least 10 published classification accuracy values; no published classification accuracy values exceeding 95%; no extreme label imbalance.

We made the following changes to the experimental setup for poker: (i) we used 4 hidden layers
instead of 2 (ii) we added a unit every tenth of an epoch instead of every epoch and (iii) we multiplied one order of magnitude larger than mnist, and we wanted to approximately preserve the rate of unit addition and elimination per mini-batch. Those changes were made a priori and were not based on examining their performance\nAfter some exploration, we set the starting angular step size for nonparametric networks to 10. We trained nonparametric networks for various values of , obtaining nets of different sizes. We then trained parametric networks of those same sizes with RMSprop, where the step size was chosen by validation, independently for each network size.\n2our standards were: at least 10 published classification accuracy values; no published classification accu racy values exceeding 95%; no extreme label imbalance\nThe results are shown in Table[2 Both parametric and nonparametric networks perform very well achieving less than 1% test error even for small networks. The nonparametric networks had a highet error for larger values of X and a slightly lower error for smaller values of X. In fact, the best. nonparametric network made no mistake on the test set of 100.o00 examples. For comparison, we. show that linear models perform roughly as well as random guessing on poker. Also, the best result. published on OpenML, achieved by a decision tree classifier, vastly underperforms our 4-hidden. layer networks.\nTo achieve convergence, networks required many more mini-batches on poker than they did on the smaller datasets used in section 4.1 However, since units were added to the nonparametric networks at roughly the same rate per mini-batch, the time it took those networks to converge to a stable network size (as in Figure[3C) was a much smaller fraction of the overall training time under poker compared to the smaller datasets. Thus, the downside of increased training time as shown in Figure|3A incurred when networks are built gradually was ameliorated.\nSeveral strategies have been introduced to address the drawbacks of black-box model selection Maclaurin et al.[(2015) indeed calculate the gradient of the validation error after training with re spect to certain hyperparameters, though their method only applies to specific networks trained with very specific algorithms.(Luketina et al.]2016) and (Larsen et al.] 1998) train certain hyperpa rameters jointly with the network using second order information. Such methods are limited tc continuous hyperparameters and are often applied specifically to regularization hyperparameters. Several papers try to speed up the global model search by estimating the validation error of trained networks without fully training them. Saxe et al.(2011) use the validation error with randomly ini- tialized convolutional layers as a proxy.Klein et al.(2017) predict the validation error after training based on the progress made during the first few epochs.\nSeveral papers have achieved increased performance by growing networks during training. Our main inspiration was Wei et al.(2016), who utilize a notion similar to our f-equivalence, though they enlarge their network in a somewhat ad-hoc way. The work of Chen et al.(2016) is similar, but focuses on convergence speed.Pandey & Dukkipati](2014) transform a trained small network into a larger network by multiplying weight matrices with large, random matrices..\nThe performance of a network of given size can be improved by injecting knowledge from othe. 
nets trained on the same task.Ba & Caruana[(2014) use the predictions of a large network on a dataset to train a smaller network on those predictions, achieving an accuracy comparable to the. large network.Hinton et al.(2015) compress the information stored in an ensemble of networks. into a single network. Simonyan & Zisserman(2015) train very deep convolutional networks by initializing some layers with the trained layers of shallower networks. Romero et al. (2015) train. deep, thin networks utilizing hints from wider, shallower networks..\nBayesian neural networks (e.g.McKay(1992),De Freitas( (2003)) use a probabilistic prior instead of a regularizer to control the complexity of the network. Gaussian processes can been used to mimick \"infinitely wide\"' neural networks (e.g.Williams(1997),Hazan & Jaakkola(2015)), thus eliminating the need to choose layer width and replacing it with the need to choose a kernel. Compared to these. and other Bayesian approaches, we work within the popular feed-forward function optimization paradigm, which has advantages in terms of computational and algorithmic complexity..\nAdding units to a network one at a time is an idea with a long history.Ash (1989) adds units to a. single hidden layer, whereas[Gallant(1986) builds up pyramid and tower structures and Fahlman & Lebiere (1990) effectively create a new layer for each new unit. While these papers provided inspi-. ration to us, the methods they present for determining when to add a new unit requires training the network to convergence first, which is impractical in modern settings. We circumvent this problem by adding units agnostically and providing a mechanism for removing unnecessary units..\nWe introduced nonparametric neural networks - a simple, general framework for automatically. adapting and choosing the size of a neural network during a single training run. We improved. the performance of the trained nets beyond what is achieved by regular parametric networks of the. same size and obtained results competitive with those of an exhaustive random search, for two of three datasets. While we believe there is room for performance improvement in several areas - e.g.. unit initialization, unit addition schedule, additional regularization and starting network size - we see this paper as validation of the basic concept. We also proved the theoretical soundness of the. framework.\nIn future work, we plan to extend our framework to include convolutional layers and to automatically choosing the depth of networks, as done by e.g. Wen et al.[(2016). Part of our motivation to develop. nonparametric networks was to control the layer size via a continuous parameter. We want to make use of this by tuning A during training, either by simple annealing or in a comprehensive framework such as the one introduced in Luketina et al.[(2016). We want to use nonparametric networks to learn more complicated network topologies for e.g. semi-supervised or multi-task learning. Finally. we plan to investigate the possibility of sampling units with different nonlinearities and training an ever-growing network for lifelong learning."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "ei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In NIPs, 2014\nJames Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. JMLR, 13 281-305, 2012\nTianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: accelerating learning via knowledge transfer. In ICLR, 2016.\nJuan F. De Freitas. 
Bayesian methods for neural networks. PhD thesis, Trinity College, University of Cambridge, 2003.

George E. Dahl, Tara N. Sainath, and Geoffrey E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In ICASSP, 2013.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121-2159, 2011.

Scott Fahlman and Christian Lebiere. The cascade-correlation learning architecture. In NIPS, 1990.

Jiashi Feng and Trevor Darrell. Learning the structure of deep convolutional networks. In ICCV, 2015.

Michael Figurnov, Aijan Ibraimova, Dmitry Vetrov, and Pushmeet Kohli. PerforatedCNNs: Acceleration through elimination of redundant convolutions. In NIPS, 2016.

Stephen Gallant. Three constructive algorithms for network learning. In Conference of the Cognitive Science Society, 1986.

Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. In NIPS, 2016.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter. Learning curve prediction with Bayesian neural networks. In ICLR, 2017.

Jan Larsen, Claus Svarer, Lars Nonboe Andersen, and Lars Kai Hansen. Adaptive regularization in neural network modeling. Neural Networks: Tricks of the Trade, 2nd Ed., 7700:111-130, 1998.

Jelena Luketina, Mathias Berglund, Klaus Greff, and Tapani Raiko. Scalable gradient-based tuning of continuous regularization hyperparameters. In ICML, 2016.

Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimization through reversible learning. In ICML, 2015.

David MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4:448-472, 1992.

Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for efficient inference. In ICLR, 2017.

Gaurav Pandey and Ambedkar Dukkipati. Learning by stretching deep networks. In ICML, 2014.

Andrew Saxe, Pang Wei Koh, Zhenghao Chen, Maneesh Bhand, Bipin Suresh, and Andrew Ng. On random weights and unsupervised feature learning. In ICML, 2011.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, 2012.

Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat, and Ryan P. Adams. Scalable Bayesian optimization using deep neural networks. In ICML, 2015.

Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter. Bayesian optimization with robust Bayesian neural networks. In NIPS, 2016.

Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.

Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1996.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5 - RMSprop. COURSERA: Neural Networks for Machine Learning,
2012\nTao Wei, Changhu Wang, Yong Rui, and Chang Wen Chen. Network morphism. In ICML, 2016\nAdriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. In ICLR, 2015.\nChristopher K. I. Williams. Computing with infinite networks. In NIPs, 1997.\nMing Yuan and Yin Lin. Model selection and estimation in regression with grouped variables Journal of the Royal Statistical Society, Series B, 68:49-67, 2006..\nMatthew D. Zeiler. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701 2012.\nBarret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In ICLR 2017.\nFirst. we restate the theorem formally\n1 E(d, W) = e(f(W,x),y)+Q(W,X,p) |D (x,y)ED\nMost commonly used nonlinearities are admissible under this theorem as long as o(0) = 0, i.e the sigmoid non-linearity is not admissible, but the tanh non-linearity is. Note that nonlinearities away from zero, are allowed to grow at an almost arbitrary pace. For example, polynomial or ever exponential nonlinearities are possible. Note that the first condition on nonlinearities is technically implied by the other two as long as o(0) = 0, though we will not prove this.\nThe conditions for the error function cover the two most popular choices: cross-entropy coupled with softmax (as in Figure[1) - and the square of the l, distance\nWe will prove this theorem through a sequence of lemmas. Throughout this process, all inputs tc the main theorem are considered fixed and fulfilling their respective conditions\nProof. Let d be fixed. Let B = E(d, O), where O is the value of W of dimensionality d where all individual weights are set to zero. Then let W b be the space of all W of dimensionality d which have at least one individual weight with absolute value greater than B. Clearly, E(d, W) > B for all W E W b. Since Rd\\Wg is compact and E is continuous, there exists a point Wmin that is a minimum of E inside Rd\\ W . Further, Rd\\ W g contains at least one point, namely O, for which\noL, do, dL E Z+ finite datasets D of points (x, y) with x E Rdo and y E Y for some set Y sets of nonlinearities { : R -> R, 1 l L} where each oi fulfils the following conditions: -There exists a function b1,i : R>o -> R>o such that for all S E R>o, -S s S, we have 0i(s) b1.i(S) *s. -It is left- and right-differentiable everywhere - There exists a function b2.1 : R>o -> R>o such that for all S E R>o, -S s S, we have (s)] b2,(S) and (s)] b2,(S), where the superscripts indicate directional derivatives. error functions e : (Rdr Y) -> R that fulfils the following conditions: -It is non-negative everywhere. -It is differentiable with respect to its first argument everywhere - There exists a function b3 : R>o > R>o such that for all S E R>o, v E Rdr and dU : X > 0 and 1 < p < . N E{Sin,Nout} have that 1 E(d, W e(f(W,x),y) +N(W,\\,p) (4) D (x,y) ED\nNow, some definitions:\nLemma 2. Under the conditions of theorem[1|and the additional condition that the i are differen tiable everywhere, if is the fan-in regularizer, then for all B, the set of values of d for which there exist proper B-local minima is bounded..\nLemma 3. Under the conditions of theorem1and the additional condition that the 1 are differ entiable everywhere, if is the fan-out regularizer, then for all B, the set of values of d for which there exist proper B-local minima is bounded..\nLemmas[2|and[3]are the core segments of the overall proof. 
Here we show that that very large nets have no \"good\"' local minima..\nProof of lemma2 Throughout this proof, we consider B fixed\nWe call a parameter value (d, W) a local minimum of E iff it is a local minimum in its second component, W. We call a local minimum of E B-locally minimal for some B e R iff the value of E at that minimum does not exceed B. We call the proper dimensionality of W the dimensionality obtained when eliminating from W all units which have a zero fan-in or a zero fan-out or both. We call a parameter value (d, W) proper if d is the proper dimensionality of W. We also call a local minimum with such a parameter value proper. Denote (d1,.., di) by d<l and (W1,.., Wi) by W<l. o D ={(x(0),y(0)),(x(1),y(1)),,(x(N),y(N))} We denote intermediate computations of the neural network f(W, x) as follows:\nxo := x Zl:=xl-1Wj 1<l<L X1:=1.(zl) 1<lL f(W,x) = xL\nde(f(W,x),y) gl := 0<l<L dxl de(f(W,x),y) hi : 1<l<L dzl de(f(W,x),y) Gi := 1<l<L dWi\nVector and matrix indeces are written in brackets. For example, the j'th component of z n is denoted by z(m) (j). We denote by square brackets a vector and by its subscript the index the vector is over, e.g Vili is a vector over index i.\nFirst, we notice that it is sufficient to prove the bounds exist for a specific datapoint. The uniform. bound across all datapoints is then simply the maximum of the individual bounds. Denote by (x, y). an arbitrary fixed datapoint throughout the proof of the above claims. Also, notice that the claims are trivially true if there are no proper B-local minima. Hence, throughout the proof of the claims,. we assume there exists at least one such minimum.\nWe will prove the claims jointly by induction. The order of the induction follows the order of. computation of the neural network. Our starting case will be xo, followed by Z1, f1 and x1 etc\nThe starting case is obvious as xo = x is fixed and does not depend on the parameter (d, W). Hence we can choose Bx,o =|x||1\nNow assume we have B [x1-1i < Bx.1-1. Then\nsup Z11 (d, W)proper B-locally minimal sup ||x1-1Wi||1 = (d, W)proper B-locally minimal < sup ||x1-1Wi||1 (d,W),I|xi-1||1<Bx,-1,XjF1|[Wi(i,j)]i||p<B sup( sup sup ||x1-1Wi||1)) l|(uTWi|1)) < sup( sup sup di sup( sup sup |uT[Wi(i,j)]i\ndi sup( |uT[Wi(i,j)]i[)) sup sup\nsup( sup sup sup di sup( < sup sup d1 cBr,1 < sup( sup dsi`c0,jL1c<R j=1 B Bx,l-1 < sup d<l B Bx,l-1\nsup( sup sup sup\nA line-by-line explanation of the above is as follows:\nsup l|zi|1 (12) (d, W)proper B-locally minimal. sup |x1-1Wi||1 (13) (d, W)proper B-locally minimal. l|x1-1Wi||1 (14) < sup (d,W),l|xi-1|1<Bx,l-1,AjF1||[Wi(i,j)]i||p<B sup( sup sup l|x(-1W(||1)) (15) d<1`Wz,XjL1|[Wi(i,j)]|lp<B`W<|x1-1|1<Bx,l-1 |uTWi||1)) < sup( sup sup (16) ( 2.1 |uT[Wi(i,j)]i|)) (17) sup( sup sup di sup( sup |uT[Wi(i,j)];D0I8) sup sup d1 sup( sup [uT[Wi(i,j)]i|)0l9) sup sup di |uTv())) sup( sup sup sup (20) di < sup( sup (21) sup di < sup( sup (22) C;Bx. dgi`cj0,jF1cj<R j=1 BBx,l-1 < sup (23) d<l BBx,l-1 (24) above is as follows.\na |uT[Wi(i,j)]i|)0l8 sup( sup sup sup d<i cj0,jL\nB Bx,l-1 And therefore, we may choose B, as required\nNow consider the other inductive steps. Assuming we have a valid Bz.t, we have at all proper B. local minima o(zi(j))] b2,i(Bz,t) because zi(j)] zii Bz,1 and hence we can choose Bdo,l = b2,(Bz,l) as required. Finally, at all proper B-local minima, xi|[1 = oi.(zi)l[i = choose Bx.1 = b1.(Bz.1)Bz.1. This completes the proof of claims 1(a)-(c)..\nAgain. we can restrict our attention to a single datapoint and again. 
we will prove these claims by induction, but going backwards along the flow of the gradient. The starting case is gL. At all proper B-local minima, E B, so specifically e(x, y) B and therefore we have ||gL|| = dx L required.\n14 Relaxing the conditions on (d, W) by replacing proper B-local minimality by two condi- tions that proper B-local minimality implies. The first condition is the induction hypothe- sis. The second condition follows because E B and so specifically n(W) B and so specifically Nin(Wi) < B 15 Breaking up the supremum into three stages. We drop components of d and W that are immaterial to the value of the objective of the supremum. 16] We further relax the innermost sup by no longer requiring that xt-1 be the intermediate output of some neural network but an arbitrary vector of fixed size and limited length W<1 then becomes immaterial. bv its definition\nAt any proper parameter value, all fan-ins are non-zero. Therefore is differentiable with re spect to W, and therefore E is differentiable with respect to W. Hence, at any proper B\n1 de N d[Wi(i,j)]i n de dxt(j) N dxi(j) d[Wi(i,j)]i n dxi(j) dzi(j) N dzi(j) d[Wi(i,j)]i n gt(j)0(zi(j))xi-1||_p N n `gt(j)j(zi(j))xi-1||1 N n d1-1 1 |gi(j)0{(zi(j))x-1(i)| N i=1 d1-1 1 |gt(j)|o{(zi(j))|x1-1(i) < N i=1 n 1 gi(j)||(zi(j))|||x-1||1 N n 1 |gi(j)|Bdo,lBx,l-1 N n\nNote that this claim is tantamount to proving the hypothesis of the Lemma\ndi B2 N di |9(n) n (i) n=1 i=1 N di d1+1 W+1(i,j)h{+)(j)| 1 C n=1i=1 j=1 N d di+1 LLL |Wi+1(i,j)h{+1(j)] n=1 i=1 j=1\n(25) 1 de (26) N d[Wi(i,j)]i n 1 de dx1(j) (27) N dxt(j) d[Wi(i,j)]i n 1 dxi(j) dzi(j) (28) N dzi(j) d[Wi(i,j)]i n 1 gl(j)0{(zi(J))xi-1||_p (29) N n 1 gi(j)0{(zi(j))x-1|1 < (30) N n d1-1 1 |gi(j)0{(z(j))x1-1(i)| (31) N i=1 n di-1 1 |g(j)|o{(z(j))|xi-1(i)| < (32) N i=1 n 1 Z ) |gi(j)||}(zi(j))x1-1|1 (33) n 1 < |gi(j)|Bdo,lBx,l-1 (34) N n N\nClaim 4: There exist constants Di, 1 l < L such that at all proper B-local minima, for all 1 < l < L, we have d < D\nWe will prove this by induction going backwards. Since dt is fixed, we can simply pick Dt = dL Now assume we have a valid Di+1. Then at all proper B-local minima we have\nTherefore, by the box principle, there exists an n' and j' such that d=1 |Wi+1(i, j')hi?(')| di B? . So further, we have d1+1 N:\nd1+1NV di |Wi+1(i,j')h} < i=1 di |h?(j)||Wi+1(i,j')| i=1 Bn,l+1||[Wi+1(i,j)]i||1 p-1 < Bnl+1d, P l|[Wi+1(i,j')]ill] -1 < Bh,l+1d P B\nBBn,+1Nd+1)P and so we can choose Di - BBn,+1NDi+1)P, which com- And therefore, dj <. B2 B2 pletes the proof..\nFor brevity, we will only give a summary of the proof of lemma[3] as it is very similar to the proof of lemma2\nSketch of proof of lemma[3 As in the previous proof, we consider B fixed.\nThe arguments mirror those of Claim 3 and 4 from the previous proof, but with the role of activatior and gradient reversed. Also, here, Claim 4 is proved by induction along the order of feed-forwarc execution.\nAs in the previous proof, we proceed by induction along the order of feed-forward execution of. the neural network. However, we use the arguments we used for Claims 2(a)-(b) in the previous proof. This is because the fan-out regularizer \"appears\" like a fan-in regularizer when the direction. of signal flow is reversed.\nAs in the previous proof, we proceed by induction along the flow of the gradient. However, we use the arguments we used for Claims 1(a)-(c) in the previous proof..\nThis is the stronger version of the previous lemmas where we only use directional differentiability of. 
the oj instead of actual differentiability. The proof is a rather tedious extension of the previous two proofs and not very instructive, which is why we broke out the differential case as its own lemmas.. Following this lemma, we immediately prove the main theorem..\nFirst, we define a signature S with dimensionality d as a binary sequence of vectors. Then, for all d. x' E Rdo, W' of dimensionality d and S of dimensionality d, we define a linearized neural network that is linearized at point x' with weights W' and signature S f[S,w',x'] as follows: First, obtain x] and z for 1 l L by evaluating f(W', x') as usual. Then, for each oi used in f, define a vector of functions o| and with dS dS St(j) = 1. Finally, we obtain f[S,w',x'l from f by, for each 1 l L, replacing o that is applied elementwise with o\nIn plain language, we linearize a neural network at a point by evaluating it at that point and replac. ing each nonlinearity by a straight line as indicated by the value and directional derivative of that nonlinearity wherever it is evaluated, where the direction of the derivative used is governed by S\nSimilarly, define a partialy linearized neural network f[St,w',x'] in the same fashion, except only layers l and above are linearized. Finally, define a partially linearized neural network f[St,,w',x'] in the same fashion, except only layers l and above as well as unit i in layer l - 1 are linearized.\nVsWu-1e(f(W,x),y) VsW1-1e(OL.(OL-1.(xL-2WL-1)WL),y) de V sW1 .(0L-1.(xL-2WL-1)W)) dx L d L de (j)VsWt-1(OL.(OL-1.(xL-2WL-1)WL))(j) dx L j= de dx L de (j*(zL(j))VsW1-1OL-1.(xL-2WL-1)WL)(j dx L dL-1 de (j)0(zL(j)) Wt(i,j)VsWL-1(0L-1.(xL-2WL-1 dx L j=] i=1 de .*0*.(zL))W'VsW1-1(OL-1.(xL-2WL-1))7 dx L 20\nVsWu-1e(f(W,x),y) VsW1-e(OL.(OL-1.(xL-2WL-1)WL),y) de VsW1-1(0L(0L-1.(xL-2WL-1)W1)) dx L dL de j)VsWL-1(0L.(OL-1.(xL-2WL-1)WL))(j) dx L de (j)V8zz=Vswr-1[0L-1(xL-2Wr-1)Wz](OL(zL))(j) dx L de dx L (j)*(zL(j))VsW1-1(0L-1.(xL-2WL-1)WL)(j) d L-1 de (j)o*(zL(j)) )`Wz(i,j)VsW1-1(0L-1.(xL-2WL-1))(i) dx L j=1 i=1 de *0*.(zL))W'V8WL-1(OL-1.(xL-2WL-1))7 dx L\ne is composed of functions that are differentiable or directionally differentiable with respect to W so e itself is directionally differentiable with respect to W. Specifically, let us analyze the directional. derivative of e with respect to some perturbation of WL-1.\naL de *0*(zL))W(j)*-1(zL-1(j))VsW-1(xL-2W-D(j dx L j=1 dL- 1 d L - 2 de .*0*.(zL))W)(j)*-1(zL-1(j)) >~xL-2(i)VsWt-iWL-1(i,3 dx L j=1 i=1 d L- d L-2 de *0t.(zL))W)(j)*-1(zL-1(j) >`xL-2(i)8WL-1(i,j) dX L j=1 i=1 de *0*.(zL))W).*0*-1(zL-1))xL-2).SWL-1(i,j) dx L\nHere, a * superscript is a \"wildcard\" that can stand for a left or a right derivative. When combined with the .O elementwise operation, it can mean a different derivative (left or right) for each element\nWe use the chain rule for directional derivatives (lines47]50 and[54), the linearity of the directional derivative (lines 52Jand 55), the fact that the directional derivative of a differentiable function is the dot product of its gradient with the perturbation (line|48), and the fact that the directional derivative of a left- and right-differentiable scalar function is either the product of its left derivative with the perturbation or the product of its right derivative with the perturbation (lines|51|and|54).\nWe notice that the final expression in line|57|is the same expression we would obtain if the i were. differentiable, except with a * instead of a' superscript. Now the linearized neural networks come. into play. 
We can choose a signature S that matches the wildcards in the above directional derivative\nf[S,w,x] and f are identical at x and the backward evaluation picks out the correct left and right. derivatives. In fact, it is sufficient to choose a partially linearized network with signature S>L-1 to achieve the above identity\nSo far, we have investigated the directional derivative with respect to SW1-1. However, the same arguments hold for all 1 < l < L. We can expand Vsw,e(f(W,x),y) in the same way, except we repeat the transformation from line|48|to line|53|L - l times. Hence, we have what we will call claim 0.\nClaims 2(a)-(b) are changed as follows:\nThe proof is as before, where derivatives of the o1 are again replaced by a left or right derivative as indicated by the signature\nClaim 3 and its proof change somewhat\nNow, we refer back to the proof of lemma2] Claims 1(a)-(c) hold as before, except in Claim 1(c) we replace|o'(z(r)(j))] Bdo,1 with |o+(z(m)(j))! Bdo,1 and|o~(z(n)(j)| Bdo,1. Further, (16) note that claims 1(a)-(c) also hold for neural networks that are linearized at the respective B-local. minimum and datapoint at which they are evaluated..\nAt all proper B-local minima, we have for all l, i and j\n0 VsWi(i,j)E e(f(W,x(n)),y(n))+S(W)) n=1 N 1 VsWi(i,)e(f(W, )+ VsWi(i,j)2(Wi) N n=1 1Z N W.x d2(Wi) 8W(i dWi(i,j) dWi(i,j) n=1\nA A ,n,W,x` dWi(i,j) n=1 N 1 (n)[S}),n,W, 91 1i_ N n=1 N 1 (n)[S}j,n W. |g1 (i)l N n=1 N 1 (n)[Sij,? 1l_ p N n=1 N 1 Bdo,ul[l9i (n)[S>l+1,W (i)|]i||_p N n=1S>l+1 N 1 Bdo,ul[ I9i (n)[S>l+1,W,a < Dlxj b)(i)|]i|1 N n=1S>l+1 di-1 N 1 Bdo, |9i (n)[S>l+1,W,x(n] (i) N i=1 n=1S>l+1 N 1 o,1Bx,l-1|9] (n)[S>l+1,W,x : Bd N n=1S>l+1\nHere, sw,(i,j) stands for the directional derivative with respect to a change in the scalar value Wi(i,j). Because this is a special case of a directional derivative with respect to sW,, we can use Claim 0 to obtain line 62[ Note that to use Claim 0, we have to choose a different partial signature for each value of i, j, and n, which we indicate by superscript. Specifically, we now choose sWi(i,j) = -1 and corresponding signatures.Then, for all\n) dWi dS(Wi) dS(Wi) Since > 0, we have dWi(i,j) dWi(i,j) def dS(Wi) So in particular for all l and j we have 1 dWi(i,j) dWi(i,j) i,j,n,W,x(r Vl S2W1 _- = X. So further we have: dWi(i,j) dW(i,j (63) N n,W 1 (64) N dWi(i,j) N 1 (j))x{1(i)]i||,z (65) N N 1 j))|x{1(i)|]i||_e1 < (66) qj N N 1 (n)[sij,n .W.x (i)|]i|_p (67) < 91 N N 1 Bdo,i|[ (n)[S>l+1,W,x(n)] (j)|x{)(i)|i||,e1 < 91 (68) N n=1 S>l+1 N 1 Bdo,1l|[ (n)[S>l+1,W,x < Dlxj )(i)|li (69) 91 N n=1 Sl+1 di-1 N 1 Bdo, (n)[S>l+1,W,x(n) j)||x{b (i) (70) \\91 N i=1 n=1Sl+1 N 1 LL (n)[S>l+1,W,x Bx,l-1 (71) gj N n=1 S>l+1\nProof of theorem[1 Clearly, E is bounded below by zero. Therefore, it has a greatest lower bound which we call B. Denote (t, t, .., t) by dt. If d is assumed to be fixed at dt, E has a global minimum by lemma|1] Let Wt denote one such global minimum. Let Et denote the value of E at (dt, Wt).\nNow let d' and d\" be two arbitrary values of d with d, > d' for all O < l L. (Denote such a. relation by d' > d\".) Then, any value that E can attain with d = d\" it can attain with d = d because we can change any d\"'-dimensional value of W into a d'-dimensional value by adding. d' d' units with zero fan-in and fan-out to each layer without changing E. In particular, this. implies that (E)t is a decreasing sequence because dt+1 dt. Since it is also bounded below by B, it converges. Call its limit C.\nAssume C > B. 
Then there exists some (d', w') with E(d', w') < C. However, any value that E can attain with d = d' it can attain with d = d, where t' = maxy d', because dt, d'. Therefore. C > E(d', W) > E, > C. Contradiction. Therefore, C = B..\nNow assume that for some t, W? has a unit that has zero fan-in but not zero fan-out, or vice versa. Then by setting the non-zero fan to zero, the output of f is unchanged for all x E Rdo and the value of is reduced. Therefore, we reduce E, which contradicts the fact that (dt, W) is a global minimum of E when d is fixed to dt. Therefore, all units in W; that have zero fan-in also have zero fan-out, and vice versa.\nLet dproper be the proper dimensionality of Wt and Wproper be the result of removing all units with. zero fan-in or fan-out from Wt. Indeed, as we have shown, all units removed had both zero fan-in and fan-out. Assume (dproper, Wproper) is not a local minimum of E. Then there exists a W' of. we call w'. Since E is invariant under the addition and removal of units with both zero fan-in and zero fan-out, we have both E(dproper, w') = E(dt, W\") and E(dproper, wproper) = E(dt, Wt). Therefore, we have E(d, W\") < E(dt, Wt), which contradicts that Wt is a global minimum of proper Et-local minimum of E and therefore a proper Eo-local minimum of E..\nBut (E)t converges to B from above. Therefore ET = B, therefore E(dT, WT) = B and so E attains its greatest lower bound which means it attains a global minimum, as required.\nFrom lemma 4] we know that the set of proper Eo-local minima is bounded. Hence, the set {dproper, t 0} is bounded, i.e there exists some dmax with dmax dproper for all t. Hence, if. we denote max dmax by T, we have dr dproper for all t and therefore ET E(dproper, wproper) But E(dproper, wproper) = E(dt, W) = Et, and therefore Et ET for all t.\nProposition1 If all nonlinearities in a nonparametric network model except possibly 1 are self similar, then the objective function 1 using a fan-in or fan-out regularizer with different regulariza tion parameters X1, .., AL for each layer is equivalent to the same objective function using the single regularization parameter X = (II=1 At) for each layer, up to rescaling of weights.\nProof. Choose arbitrary Dosutive I. and let )i. We have\nf(W,x) OL.(0L-1.(..02.(01.(xW1)W2)..)WL) L OL.(0L-1...02.(01.(] xWi)W2)..)WL l=1 O L: xW1)W2)..)WL l=2 Wi)W2)..)WL L.0L-1...02.\nThe line-by-line explanation is as follows\n73 Insert the definition of f.. 74 Insert a multiplicative factor of value 1. 75 Utilize the self-similarity of 01.. 76 Repeat the previous step L - 2 times. . Utilize linearity..\nFurther. assuming we use a fan-in regularizer, we have.\nThe argument is equivalent for the fan-out regularizer\nWe find that the value of the objective is preserved when we replace all regularization parameters with the same value X = (II=1 t) and rescale Wi by . This completes the proof.\nL di x|[Wi(i,j)]i||p l=1 j=1 L di a1 l|[Wi(i,j)]i||p 1 l=1 j=1 L di Wi(i,j)]i||p l=1 j=1\nl[Wi+1(i,j)]|l]\nAlgorithm 2: AdaRad-M with l2 fan-in regularizer and the unit addition / removal scheme used ir this paper in its most instructive (bot not fastest) order of computation\nAdaRad-M is shown in algorithm[2] The main difference in comparison to AdaRad (see algorithn 1) is that, for each fan-in, we maintain an exponential running average of the orthogonal componen i(i, j)]; (line|16) which we use to compute the angular shift (line|17). Hence, AdaRad-M, like. 
Adam but unlike RMSprop and AdaRad, makes use of the principle of momentum.

One issue of note is that the running average of the orthogonal component is not itself orthogonal to the current value of the fan-in. Hence, if some multiple of it were added to the fan-in in radial-angular coordinates, it would change the length of the fan-in. This is undesirable, as explained in section 3.3. Therefore, we take steps to ensure that [φ_l(i,j)]_i is kept orthogonal to [W_l(i,j)]_i. First, whenever we rotate [W_l(i,j)]_i (line 19), we rotate [φ_l(i,j)]_i in the same manner (line 20). Second, whenever a unit in layer l, and hence rows of W_{l+1} and φ_{l+1}, are deleted, we explicitly re-orthogonalize them (line 33), i.e. we project out the component of each running average that is parallel to its fan-in.

Figure 4: Length of time individual units in the second hidden layer were present during training. The x axis depicts the epoch at which a given unit was added.

In table 3, we show all hyperparameter values and related choices that were universal across all training runs and, unless specified otherwise, datasets.

1. We conducted a grid search over λ ∈ {10^-2, 3 * 10^-3, 10^-3, 3 * 10^-4, 10^-4, 3 * 10^-5, 10^-5, 3 * 10^-6, 10^-6, 3 * 10^-7, 10^-7, 3 * 10^-8, 10^-8} and α_a ∈ {1, 3, 10, 30, 100, 300, 1,000, 3,000, 10,000, 30,000, 100,000} for nonparametric (NP) networks using AdaRad and a single random seed, for each of the MNIST, rectangles-images and convex datasets. By examining validation classification error (VCE) and other metrics (but not test error), we chose the single value α_a = 30 for all NP experiments from now on. Further, we chose a few interesting values of λ for each dataset. From now on, all experiments were conducted independently for each dataset.

2. We trained 10 NP networks for each chosen value of λ, with 10 different random seeds. Out of the 10 nets produced, we manually chose a single net as a typical representative by approximating the median of both network size, measured in number of weight parameters, and the test classification error (TCE) across the 10 runs. This representative, as well as the range of sizes and TCEs, are shown in black in figure 2.

3. For each chosen representative, we conducted a grid search for parametric (P) networks by fixing the size of the net to the size of the representative. The grid was over α ∈ {1, 3, 10, 30, 100, 300, 1,000, 3,000, 10,000, 30,000, 100,000}, over training algorithm (one of SGD, momentum, Nesterov momentum, RMSprop, Adam), and over whether batch normalization layers had free trainable mean and variance parameters. We introduced the last choice to more closely mimic CapNorm, which does not include free parameters. We set λ = 0, as l2 regularization is not compatible with regular (uncapped) batch normalization. In preliminary experiments, networks trained with l2 regularization and no batch normalization were not competitive. We used the same random seed as in step 1.

Table 3: Hyperparameters and related choices.

For NP networks, we trained until the VCE had not improved for 100 epochs. Then, we rewound the last 100 epochs and kept training without adding units. After no units had been eliminated and the VCE had not improved for 100 epochs, we set λ to zero, rewound the last 100 epochs and kept training. After the VCE had not improved for 100 epochs, we rewound again and divided the angular step size by 3. After the VCE had not improved for 5 epochs, we rewound and divided the angular step size by 3 again.
We kept doing this until the angular step size was too small to change the VCE\nFor P networks, we trained until the VCE had not improved for 100 epochs, then rewound and divided the step size by 3. We kept training until the VCE had not improved for 5 epochs, then. rewound again and divided the step size by 3. We kept doing this until the step size was too small to. change the VCE.\nHyperaparameter Value network architecture see figure[1 number of hidden layers (not poker). 2 number of hidden layers (poker). 4 Qr: radial step size for AdaRad (not poker). 50X Qr: radial step size for AdaRad (poker). 5A v: unit addition rate for AdaRad. 1 Vfreq: unit addition frequency for AdaRad (not poker). once per epoch. Vfreq: unit addition frequency for AdaRad (poker). ten times per epoch. Barith: arithmetic mixing rate for AdaRad, momentum, 0.1 Nesterov momentum and Adam quad: quadratic mixing rate for AdaRad, RMSprop and. 0.005 Adam e: numerical stabilizer for AdaRad, RMSprop and. 10-8 Adam number of starting units for NP networks. 10 per hidden layer. W: initial weights (P and NP). Wi(i,j) ~ N(0, fan-in [Wi(i, j)] for a newly added unit j Wi(i,j) ~ N(0, batch size 1000 batch sampling every epoch, batches are sam- pled without replacement type of validation (not poker) one random train-valid split for. each random seed type of validation (poker). one single random train-valid-. test split for all training runs. train-valid split (MNIST) 50.000 - 10.000 train-valid split (rectangles images) 10.000 - 2.000 train-valid split (convex) 7.000 - 1.000 train-valid-test split (poker) 800.000 - 125.010 - 100.000\n4. We chose the 10 best performing settings from the grid search by VCE and produced 10. reruns for each setting using the same 10 random seeds as in step[2 Then we chose the best. setting out of the 1O by median VCE. We depict the median as well as the range of TCE for. that best setting in red in figure[2] Note that the setting that had the lowest median TCE in all cases also had the lowest median VCE.. 5. We conducted a random search for P networks with 500 random settings. We chose a. uniformly from the interva1 [1, 100.000] in log scale. Training algorithm and type of batch. normalization were chosen uniformly at random from the same sets as in step 3. The size of each hidden layer was chosen uniformly at random between the size of the corresponding. layer in the largest NP representative, and 5 times that size. We used the same random seed. as in step1 6. We chose the 10 best settings by VCE and reran them 10 times, using the same 10 random. seeds as in step|2] By considering network size and median VCE, we chose 2 or 3 settings. to display in blue in figure [2] including the setting with the lowest median VCE. In each. case, the setting with the lowest median VCE also had the lowest median TCE.\nFor NP networks, we trained until the VCE had not improved for 1O epochs. Then, we rewound. the last 1O epochs and kept training without adding units. After no units had been eliminated and the VCE had not improved for 10 epochs, we set to zero, rewound the last 10 epochs and kept. training. After the VCE had not improved for 10 epochs, we rewound again and divided the angular. step size by 3. After the VCE had not improved for O.5 epochs, we rewound and divided the angular. step size by 3 again. We kept doing this until the angular step size was too small to change the VCE.\n1. 
We conducted a grid search over X E {10-3,3 * 10-4,10-4,3 * 10-5,10-5,3 * 10-6,10-6,3 * 10-7,10-7} and Qs E {1,10,100,1.000,10.000} for NP networks us- ing AdaRad, a single random seed and the poker data set. By examining VCE and other metrics (but not test error), we chose the single value ao = 10. For this value, we chose. several values of X. The size and TCE of the nets trained using those values of are shown in table2 2. For each trained NP network shown in table 2 we trained P networks of. the same size using RMSprop and each of the following step sizes:. C {1, 3, 10, 30, 100, 300, 1.000, 3.000, 10.000}. For each network size, the TCE of the net-. work with the lowest VCE is shown in table|2 For all network sizes, the network with the. lowest TCE also had the lowest VCE..\nFor P networks, we trained until the VCE had not improved for 10 epochs, then rewound and divided the step size by 3. We kept training until the VCE had not improved for O.5 epochs, then rewound again and divided the step size by 3. We kept doing this until the step size was too small to change the VCE."}] |
S1AG8zYeg | [{"section_index": "0", "section_name": "SENTENCE ORDERING USING RECURRENT NEURAL NETWORKS", "section_text": "Lajanugen Logeswaran, Honglak Lee & Dragomir Radey\nModeling the structure of coherent texts is a task of great importance in NLP. The task of organizing a given set of sentences into a coherent order has been. commonly used to build and evaluate models that understand such structure. In this work we propose an end-to-end neural approach based on the recently proposed. set to sequence mapping framework to address the sentence ordering problem. Our model achieves state-of-the-art performance in the order discrimination task. on two datasets widely used in the literature. We also consider a new interesting. task of ordering abstracts from conference papers and research proposals and. demonstrate strong performance against recent methods. Visualizing the sentence. representations learned by the model shows that the model has captured high. level logical structure in these paragraphs. The model also learns rich semantic. sentence representations by learning to order texts, performing comparably to. recent unsupervised representation learning methods in the sentence similarity and. paraphrase detection tasks"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Modeling the structure of coherent texts is one of the central problems in NLP. A well written piece of text has a particular high level logical and topical structure to it. The actual word and sentence choices as well as their transitions come together to convey the purpose of the text. Our overarching goal is to build models that can learn such structure by learning to arrange a given set of sentences to make coherent text.\nThe sentence ordering task finds several applications. Multi-document Summarization (MDS) and retrieval based question answering involve extracting information from multiple source documents and organizing the content into a coherent summary. Since the relative ordering about sentences that come from different sources can be unclear, being able to automatically evaluate a particular order and/or finding the optimal order is essential. Barzilay and Elhadad (2002) discuss the importance of an explicit ordering component in MDS systems. Their experiments show that finding an acceptable ordering can enhance user comprehension.\nModels that learn to order text fragments can also be used as models of coherence. Automated essay. scoring (Miltsakaki and Kukich2004 [Burstein et al.[[2010) is an application that can benefit from such a coherence model. Coherence is one of the key elements on which student essays are evaluated in standardized writing tests such as GRE (ETS). Apart from its importance and applications, our motivation to address this problem also stems from its stimulating nature. It can be considered as a jigsaw puzzle of sorts in the language domain..\nOur approach to the problem of modeling coherence is driven by recent successes in 1) capturing. semantics using distributed representations and 2) using RNNs for sequence modeling tasks.\nSuccess in unsupervised approaches for learning embeddings for textual entities from large text. corpora altered the way NLP problems are studied today. These embeddings have been shown to capture syntactic and semantic information as well as higher level analogical structure. These. 
methods have been adopted to learn vector representations of sentences, paragraphs and entire."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Recurrent Neural Networks (RNNs) have become the de facto approach to sequence learning and mapping problems in recent times. The Sequence to sequence mapping framework (Sutskever et al. 2014), as well as several of its variants have fuelled RNN based approaches to a wide variety of problems including language modeling, language generation, machine translation, question answering and many others.\nVinyals et al.(2015a) recently showed that the order in which tokens of the input sequence are fed tc. seq2seq models has a significant impact on the performance of the model. In particular, for problem. such as sorting which involve a source set (as opposed to a sequence), the optimal order to feec. the tokens is not clear. They introduce an attention mechanism over the input tokens which allow. the model to learn a soft input order. This is called the read, process and write (or set to sequence. framework. The read block maps the input tokens to a fixed length vector representation. The. process block is an RNN encoder which, at each time step, attends to the input token embeddings anc. computes an attention readout, appending it to the current hidden state. The write block is an RNN. which produces the target sequence conditioned on the representation produced by the process block.\nIn this work we propose an RNN based approach to the sentence ordering problem which exploits the. set to sequence framework. A word level RNN encoder produces sentence embeddings. A sentence level set encoder RNN iteratively attends to these embeddings (process block above) and constructs a. representation of the context. Initialized with this representation, a sentence level pointer network. RNN points to the next sentence candidates.\nThe most widely studied task relevant to sentence ordering and coherence modeling in the literature is the order discrimination task. Given a document and a permuted version of it, the task involves. identifying the more coherent ordering of the two. Our proposed model achieves state of the art performance on two benchmark datasets for this task, outperforming several classical approaches and more recent data-driven approaches.\nAddressing the more challenging task of ordering a given collection of sentences, we consider the novel and interesting task of ordering sentences from abstracts of conference papers and research grants. Our model strongly outperforms previous work on this task. We visualize the learned sentence representations and show that our model captures high level discourse structure. We provide visualizations that aid understanding what information in the sentences the model uses to identify the next sentence. We also study the quality of the sentence representations learned by the model by training the model on a large text corpus and show that these embeddings are comparable to recent unsupervised methods in capturing semantics.\nIn summary our key contributions are as follows\nCoherence modeling and sentence ordering The coherence modeling and sentence ordering tasks. have been approached by closely related techniques. Most approaches propose a measure of coherence. and formulate the ordering problem as finding an order with maximal coherence. 
Recurring themes from prior work include linguistic features, centering theory, local and global coherence.

Local coherence has been modeled by considering properties of a local window of sentences, such as sentence similarity and sentence transition structure. Foltz et al. (1998) represent words using vectors of co-occurrence counts and sentences as a mean of these word vectors. Sentence similarity is defined as the cosine distance between sentence vectors, and text coherence is modeled as a normalized sum of similarity scores of adjacent sentences. Lapata (2003) represents sentences by vectors of linguistic features and learns the transition probabilities from one set of features to another in adjacent sentences. A popular model of coherence is the Entity-Grid model (Barzilay and Lapata, 2008), which captures local coherence by modeling patterns of entity distributions in the discourse. Sentences are represented by the syntactic roles of entities appearing in the document, and entity transition frequencies in successive sentences are treated as features that are used to train a ranking SVM. These two approaches find motivation from ideas in centering theory (Grosz et al., 1995), which states that nouns and entities in coherent discourses exhibit certain patterns.

Global models of coherence typically use an HMM to model document structure. The content model proposed by Barzilay and Lee (2004) represents topics in a particular domain as states in an HMM. State transitions capture possible presentation orderings within the domain. Words of a sentence are modeled using a topic-specific language model. The content model has inspired several subsequent works that combine the strengths of local and global models. Elsner et al. (2007) combine the entity model and the content model using a non-parametric HMM. Soricut and Marcu (2006) use several models as feature functions and define a log-linear model to assign probability to a given text. Louis and Nenkova (2012) attempt to capture the intentional structure in documents using syntax as a proxy for the communicative goal of a sentence. Syntax features such as parse-tree production rules and constituency tags at a particular tree depth were used.

Unlike previous approaches, we do not employ any handcrafted features and adopt an embedding-based approach. Local coherence is taken into account by having a next-sentence prediction component in the model, and global dependencies are naturally captured by an RNN. We demonstrate that our model is able to capture both logical and topical structure by evaluating its performance on different types of data.

Data-driven approaches Neural approaches have gained attention more recently. Li and Hovy (2014) model sentences as embeddings derived from recurrent/recursive neural nets and train a feed-forward neural network that takes an input window of sentence embeddings and outputs a probability which represents the coherence of the sentence window.
Coherence evaluation is performed by sliding the window over the text and aggregating the score. Li and Jurafsky (2016) study the same model in a larger scale task and also consider a sequence to sequence approach where the model is trained to generate the next sentence given the current sentence and vice versa. Chen et al. (2016) also propose a sentence embedding based approach where they model the probability that one sentence should come before another and define coherence based on the likelihood of the relative order of every pair of sentences. We believe these models are limited by the fact that they are local in nature, and our experiments show that exploiting larger contexts can be very beneficial.

Hierarchical RNNs for document modeling. Word level and sentence level RNNs have been used in a hierarchical fashion for modeling documents in prior work. Li et al. (2015b) proposed a hierarchical document autoencoder which has potential to be used in generation and summarization applications. More relevant to our work is a similar model (but without an encoder) considered by Lin et al. (2015). A sentence level RNN predicts the bag of words in the next sentence given the previous sentences, and a word level RNN predicts the word sequence conditioned on the sentence level RNN hidden state. The model has a structure similar to the content model of Barzilay and Lee (2004), with RNNs playing the roles of the HMM and the bigram language model. Our model has a hierarchical nature in that a sentence level RNN operates over words of a sentence and a document level RNN operates over sentence embeddings.

Combinatorial optimization with RNNs. Vinyals et al. (2015a) equip sequence to sequence models with the capability to handle input and output sets, and discuss experiments on sorting, language modeling and parsing. Their goal is to show that input and output orderings can matter in these tasks, which is demonstrated using several small scale experiments. Our work exploits this framework to address the challenging problem of modeling logical and hierarchical structure in text. Vinyals et al. (2015b) proposed pointer-networks, aimed at combinatorial optimization problems where the output dictionary size depends on the number of input elements. We use a pointer-network that points to each of the next sentence candidates as the decoder.

[Figure 1 (schematic): (a) Sentence Encoder, (b) Encoder, (c) Decoder]"}, {"section_index": "3", "section_name": "3 APPROACH", "section_text": "Our proposed model is inspired by the way a human would solve this task. First, the model attempts to read the sentences to capture the semantics of the sentences as well as the general context of the paragraph. Given this knowledge, the model attempts to pick the sentences one by one sequentially till exhaustion.

The model is comprised of a sentence encoder RNN, an encoder RNN and a decoder RNN (Figure 1). An RNN sentence encoder takes as input the words of a sentence s sequentially and computes an embedding representation of the sentence (Figure 1a). Henceforth, we shall use s to refer to a sentence or its embedding interchangeably. The embeddings {s_1, s_2, ..., s_n} of a given set of n sentences constitute the sentence memory, available to be accessed by subsequent components.
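For concreteness, the following is a minimal NumPy sketch of the sentence encoder just described: a word-level LSTM whose final hidden state is taken as the sentence embedding s. The gate equations follow Appendix D; the packed gate-weight layout, the function names, and the choice of the final hidden state as the embedding are our own illustrative assumptions rather than details fixed by the paper.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(h, c, x, W, b):
    # One LSTM step (cf. Appendix D). Gate pre-activations are computed
    # jointly from [h; x] with a packed weight matrix W of shape (4d, d + e),
    # then split into input, forget, output and candidate parts.
    d = h.shape[0]
    z = W @ np.concatenate([h, x]) + b
    i, f, o = sigmoid(z[:d]), sigmoid(z[d:2 * d]), sigmoid(z[2 * d:3 * d])
    c_new = f * c + i * np.tanh(z[3 * d:])
    return o * np.tanh(c_new), c_new

def encode_sentence(word_vectors, W, b, d):
    # Read the words of one sentence left to right; the final hidden state
    # serves as the sentence embedding s placed in the sentence memory.
    h, c = np.zeros(d), np.zeros(d)
    for x in word_vectors:
        h, c = lstm_step(h, c, x, W, b)
    return h
```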
The encoder is identical to the originally proposed process block and is defined by Equations (1)-(5) (see Figure 1b). Following the regular LSTM hidden state (h_t^enc, c_t^enc) update (Equation 1), the hidden state is concatenated with an attention readout vector s_t^att, and this concatenation is used as the hidden state for the next time step (Equation 5). Attention probabilities are computed by composing the hidden state with embeddings of the candidate sentences through a scoring function f and taking the softmax (Equations 2, 3). This process is iterated for a number of times, called the number of read cycles. As described in Vinyals et al. (2015a), the encoder has the desirable property of being invariant to the order in which the sentence embeddings reside in the memory. The LSTM used here does not take any inputs (the input is clamped to zero):

h_t^enc, c_t^enc = LSTM(h_{t-1}^enc, c_{t-1}^enc)    (1)
e_{t,i}^enc = f(s_i, h_t^enc);  i ∈ {1, ..., n}    (2)
a_t^enc = Softmax(e_t^enc)    (3)
s_t^att = Σ_{i=1}^{n} a_{t,i}^enc s_i    (4)
h̃_t^enc = [h_t^enc ; s_t^att]    (5)

Figure 1: Model Overview: Illustration of the sentence encoder and single time-step computations in encoder and decoder. The s_i's represent sentence embeddings derived from the sentence encoder. Attention weights are computed for the sentences based on their embeddings and the current hidden state. In the encoder an attention readout is concatenated with the LSTM output to form the next hidden state. The decoder uses the attention weights for prediction.
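One read cycle of the process block translates directly into code. The sketch below mirrors Equations (2)-(5); score_fn stands for the scoring function f defined in Equations (9)-(10) below, and the max-shifted softmax is a standard numerical-stability detail of our own, not something the paper specifies.

```python
import numpy as np

def process_step(h_enc, memory, score_fn):
    # Eq. (2): score every sentence embedding in the memory against the state.
    e = np.array([score_fn(s, h_enc) for s in memory])
    # Eq. (3): softmax over the scores (shifted by the max for stability).
    a = np.exp(e - e.max())
    a /= a.sum()
    # Eq. (4): attention readout, a convex combination of the memory.
    s_att = np.sum(a[:, None] * np.stack(memory), axis=0)
    # Eq. (5): concatenate the LSTM hidden state with the readout.
    return np.concatenate([h_enc, s_att])
```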
The decoder is a pointer network that takes a similar form with a few differences (Equations 6-8, Figure 1c). The LSTM takes the embedding of the previous sentence as an additional input: at training time the correct order of sentences (s_{o_1}, s_{o_2}, ..., s_{o_n}) = (x_1, x_2, ..., x_n) is known (o represents the correct order) and x_{t-1} is used as the input. At test time the predicted assignment x̂_{t-1} is used instead. This makes concatenating the attention readout to the hidden state somewhat redundant (verified empirically), and hence it is omitted¹. The attention computation is identical to that of the encoder. The initial state of the decoder LSTM is initialized with the final hidden state of the encoder, as in sequence to sequence models; x_0 is a vector of zeros. Figure 1 illustrates the single time-step computation in the encoder and decoder:

h_t^dec, c_t^dec = LSTM(h_{t-1}^dec, c_{t-1}^dec, x_{t-1})    (6)
e_{t,i}^dec = f(s_i, h_t^dec);  i ∈ {1, ..., n}    (7)
a_t^dec = Softmax(e_t^dec)    (8)

The attention probability a_{t,i}^dec is interpreted as the probability that s_i is the correct sentence at position t, conditioned on the previous sentence assignments, p(S_t = s_i | S_1, ..., S_{t-1}).

¹A subtle difference is that the final hidden state of the encoder h̃^enc has more dimensions than h^dec and only the first part of the vector is copied (the attention readout is ignored for this time step).

We consider two choices for the scoring function f in our experiments. The first one is a single hidden layer feed-forward net that takes s, h as inputs and outputs a score

f(s, h) = W′ tanh(W[s; h] + b) + b′    (9)

where W, b, W′, b′ are learnable parameters. This scoring function takes a discriminative approach to classifying the next sentence. Note that the structure of this scoring function is similar to the window network in Li and Hovy (2014). While they used a local window of sentences to capture context, this scoring function exploits the RNN hidden state to score sentence candidates.

We also consider a bilinear scoring function

f(s, h) = s^T (W h + b)    (10)

Compared to the previous scoring function, this takes a generative approach of trying to regress the next sentence given the current hidden state (W h + b) and enforcing that it be most similar to the correct next sentence. We observed that this scoring function led to learning better sentence representations (Section 4.4).

The model is trained with the maximum likelihood objective

max Σ_{x ∈ D} Σ_{t=1}^{|x|} log p(x_t | x_1, ..., x_{t-1})    (11)

where D denotes the training set and each training instance is given by an ordered document of sentences x = (x_1, ..., x_{|x|}). We also considered an alternative structured margin loss which imposes less penalty for assigning high scores to sentence candidates that are close to the correct sentence in the source document, instead of uniformly penalizing all incorrect sentence candidates. However, the softmax output with cross entropy loss consistently performed better.

We define the coherence score of an arbitrary partial/complete assignment (s_{p_1}, ..., s_{p_k}) to the first k sentence positions as

Score(s_{p_1}, ..., s_{p_k}) = Σ_{i=1}^{k} log p(S_i = s_{p_i} | S_1 = s_{p_1}, ..., S_{i-1} = s_{p_{i-1}})    (12)

where S_1, ..., S_k are random variables representing the sentence assignments to positions 1 through k. The conditional probabilities are derived from the network. This is our measure for comparing the coherence of different renderings of a document. It is also used as a heuristic during decoding.
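The two scoring functions and the coherence score reduce to a few lines each. This is a schematic rendering of Equations (9), (10) and (12); the parameter shapes and the next_log_probs callback (one decoder softmax evaluation) are our own assumed interface, not anything specified by the paper.

```python
import numpy as np

def score_mlp(s, h, W, b, W2, b2):
    # Eq. (9): a single-hidden-layer feed-forward scorer over [s; h].
    return float(W2 @ np.tanh(W @ np.concatenate([s, h]) + b) + b2)

def score_bilinear(s, h, W, b):
    # Eq. (10): regress the next sentence from h and compare it with s.
    return float(s @ (W @ h + b))

def coherence_score(order, next_log_probs):
    # Eq. (12): sum of conditional log probabilities of each assignment.
    # next_log_probs(prefix) is assumed to return log p(S_i = s | prefix)
    # for every candidate sentence s, as computed by the decoder softmax.
    return sum(next_log_probs(order[:i])[s] for i, s in enumerate(order))
```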
Table 1: Statistics of data used in our experiments. For the first two datasets, the test set size* is the number of permutation pairs used for order discrimination experiments.

Dataset           Min   Mode   Mean   Max    Train    Val      Test     Types
Accidents         6     11     11.6   19     100      -        1986*    5,140
Earthquakes       3     7      10.4   31     100      -        1956*    3,775
NIPS abstracts    2     7      6      15     2448     409      402      18,696
AAN abstracts     1     4      5      20     8569     962      2626     40,288
NSF abstracts     2     7      8.9    40     96070    10185    21580    373,909

"}, {"section_index": "4", "section_name": "4.1 MODEL TRAINING", "section_text": "For all tasks discussed in this section we train the model with the same objective (Equation 11) on the training data relevant to the task. We used the single hidden layer MLP scoring function for the order discrimination and sentence ordering tasks. Models are trained end-to-end.

Learning. We used a batch size of 10 and the Adam optimizer (Kingma and Ba, 2014) with a base learning rate of 5e-4 for all experiments. Early stopping is used for regularization.

Model parameters. We use pre-trained 300 dimensional GloVe word embeddings (Pennington et al., 2014). All LSTMs use a hidden layer size of 1000 and the MLP in Equation (9) has a hidden layer size of 500. The number of read cycles in the encoder is set to 10. The same model architecture is used across all experiments.

Preprocessing. The nltk sentence tokenizer was used for word tokenization. The GloVe vocabulary was used as the reference vocabulary. Any word not in the vocabulary is checked for a case insensitive match. If a token is hyphenated, we check if the constituent words are in the vocabulary. In the AAN abstracts data (Section 4.3), some words tend to have a hyphen in the middle because of word hyphenation across lines in the original document; hence we also check if stripping hyphens produces a vocabulary word. If all checks fail, and a token appears in the training set above a certain frequency, it is added to the vocabulary.

"}, {"section_index": "5", "section_name": "4.2 ORDER DISCRIMINATION", "section_text": "Finding the optimal ordering is a difficult problem when a large number of sentences are required to be rearranged or when there is inherent ambiguity in the ordering of the sentences. For this reason, the ordering problem is commonly formulated as the following binary classification task. Given a reference paragraph and a permuted version of it, the more coherently organized one needs to be identified (Barzilay and Lapata, 2008).

We consider data from two different domains that have been widely used for this task in previous work since Barzilay and Lee (2004) and Barzilay and Lapata (2008). The ACCIDENTS data (aka AIRPLANE data) is a set of aviation accident reports from the National Transportation Safety Board's database. The EARTHQUAKES data comprises newspaper articles from the North American News Text Corpus. In each of the above datasets the training and test sets include 100 articles as well as approximately 20 permutations of each article. Further statistics about the data are shown in Table 1.

"}, {"section_index": "6", "section_name": "4.2.2 RESULTS", "section_text": "Table 2 compares the performance of our model against prior approaches. We compare results against traditional approaches in the literature as well as some recent data-driven approaches (see Section 2 for more details). The entity grid model provides a strong baseline on the ACCIDENTS dataset, only outperformed by our model and Li and Jurafsky (2016). On the EARTHQUAKES data the window approach of Li and Hovy (2014) and Li and Jurafsky (2016) perform strongly. Our approach outperforms prior models on both datasets, achieving near perfect performance on the EARTHQUAKES dataset.
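The evaluation protocols used here and in the next section are simple to state in code. Below is a minimal sketch of the pairwise order-discrimination accuracy just described, together with the two ordering metrics introduced in Section 4.3.2; the coherence argument is assumed to be the Eq. (12) scorer with the trained model held fixed, and the permutation scheme is our own simplification of the released evaluation data.

```python
import random

def order_discrimination_accuracy(documents, coherence):
    # Sec. 4.2 protocol: the model wins if the gold order scores higher
    # than a random permutation of the same document.
    wins = 0
    for doc in documents:                 # doc: list of sentences in gold order
        shuffled = doc[:]
        while shuffled == doc:
            random.shuffle(shuffled)
        wins += coherence(doc) > coherence(shuffled)
    return wins / len(documents)

def position_accuracy(pred, gold):
    # Sec. 4.3.2: fraction of sentences whose absolute position is correct.
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def kendall_tau(pred, gold):
    # Sec. 4.3.2: tau = 1 - 2 * (number of inversions) / C(n, 2).
    n = len(gold)
    rank = {s: i for i, s in enumerate(gold)}
    r = [rank[s] for s in pred]
    inversions = sum(r[i] > r[j] for i in range(n) for j in range(i + 1, n))
    return 1 - 2 * inversions / (n * (n - 1) / 2)
```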
Table 2: Mean accuracy comparison on the ACCIDENTS and EARTHQUAKES data for the order discrimination task. Reference results obtained from the respective publications.

Methods                            ACCIDENTS   EARTHQUAKES
Barzilay and Lapata (2008)         0.904       0.872
Louis and Nenkova (2012)           0.842       0.957
Guinaudeau and Strube (2013)       0.846       0.635
Li and Hovy (2014) - Recurrent     0.840       0.951
Li and Hovy (2014) - Recursive     0.864       0.976
Li and Jurafsky (2016)             0.930       0.992
Ours                               0.944       0.997

While these datasets have been widely used in the literature, they are quite formulaic in nature and are no longer challenging. We hence turn to the more challenging task of ordering a given collection of sentences to make a coherent document.

"}, {"section_index": "7", "section_name": "4.3 SENTENCE ORDERING", "section_text": "In this task we directly address the ordering problem. We do not assume the availability of a set of candidate orderings to choose from and instead attempt to find a good ordering from all possible permutations of the sentences.

The difficulty of the ordering problem depends on the nature of the text as well as the length of paragraphs considered. Evaluation on text from arbitrary sources makes it difficult to interpret the results, since it may not be clear whether to attribute the observed performance to a deficient model or to ambiguity in next sentence choices due to many plausible orderings.

Text summaries are a suitable source of data for this task. They often exhibit a clear flow of ideas and have minimal redundancy. We specifically look at abstracts of conference papers and NSF research proposals. This data has several favorable properties. Abstracts usually have a particular high level format: they start out with a brief introduction, a description of the problem addressed and the proposed approach, and conclude with performance remarks. This would allow us to identify if the model is capable of capturing high level logical structure. Second, abstracts have an average length of about 10 sentences, making the ordering task more accessible. Furthermore, this also gives us a significant amount of data to train and test our models.

NIPS Abstracts. We consider abstracts from NIPS papers in the past 10 years. We parsed 3280 abstracts from paper pdfs and obtained 3259 abstracts after omitting erroneous extracts. The dataset was split into years 2005-2013 for training and years 2014, 2015 respectively for validation and testing.²

ACL Abstracts. A second source of abstracts we consider are papers from the ACL Anthology Network (AAN) corpus (Radev et al., 2009) of ACL papers. At the time of retrieval, the corpus had publications up to year 2013. We extracted abstracts from the text parses using simple keyword matching for the strings 'Abstract' and 'Introduction'. Our extraction is successful for 12,157 articles. Most of the failures occur for older papers due to improper formatting and OCR issues. We use all extracts of papers published up to year 2010 for training, year 2011 for validation and years 2012-2013 for testing. We additionally merge words hyphenated at the edges of paragraph boundaries.

NSF Abstracts. We also evaluate our model on the NSF Research Award Abstracts dataset (Lichman, 2013). This dataset comprises abstracts from a diverse set of scientific areas, in contrast to the previous two sources of data, and the abstracts are also lengthier, making this dataset more challenging. Years 1990-1999 were used for training, 2000 for validation and 2001-2003 for testing. We capped the parses of the abstracts to a maximum length of 40 sentences. Unsuccessful parses and parses of excessive length were discarded. Further details about the datasets are provided in Table 1.

²Experimentation with a random split yielded similar performance. We adopt this split so that future work can easily perform comparisons with our results.

Table 3: Comparison against prior methods on the abstracts data.

                                           NIPS Abstracts    AAN Abstracts     NSF Abstracts
                                           Accuracy   τ      Accuracy   τ      Accuracy   τ
Random                                     15.59      0      19.36      0      9.46       0
Entity Grid (Barzilay and Lapata, 2008)    20.10      0.09   21.82      0.10   -          -
Seq2seq (Uni) (Li and Jurafsky, 2016)      27.18      0.27   36.62      0.40   13.68      0.10
Window network (Li and Hovy, 2014)         41.76      0.59   50.87      0.65   18.67      0.28
RNN Decoder                                48.22      0.67   52.06      0.66   25.79      0.48
Proposed model                             51.55      0.72   58.06      0.73   28.33      0.51

"}, {"section_index": "8", "section_name": "4.3.2 METRICS", "section_text": "We use the following metrics to evaluate performance on this task. Accuracy measures how often the absolute position of a sentence was correctly predicted. Being a too stringent measure, it penalizes correctly predicted subsequences that are shifted. Another metric widely used in the literature is Kendall's tau (τ), computed as 1 - 2·(number of inversions)/(n choose 2), where the number of inversions is the number of pairs in the predicted sequence with incorrect relative order and n is the length of the sequence. Lapata (2006) discusses that this metric reliably correlates with human judgements.

Entity Grid. Our first baseline is the Entity Grid model of Barzilay and Lapata (2008). We use the Stanford parser (Klein and Manning, 2003) to get constituency trees for all sentences in our datasets. We derive entity grid representations for the parsed sentences using the Brown Coherence Toolkit³. A ranking SVM is trained to score correct orderings higher than incorrect orderings as in the original work. We used 20 permutations per document as training data. Since the entity grid representation only provides a means of feature extraction, we evaluate the model in the ordering setting as follows. We choose 1000 random permutations for each document, one of them being the correct order, and pick the order with maximum coherence.
We experimented with transitions of length at most 3 in the entity-grid.\nSequence to sequence. The second baseline we consider is a sequence to sequence model whicl is trained to predict the next sentence given the current sentence.Li and Jurafsky(2016) considei similar methods and our model is same as the uni-directional model in their work. These method. were shown to yield sentence embeddings that have competitive performance in several semantic tasks in Kiros et al.(2015).\nWindow Network. We consider the window approach of[Li and Hovy (2014) and Li and Jurafsky (2016) which demonstrated strong performance in the order discrimination task as our third baseline We adopt the same coherence score interpretation considered by the authors in the above work. In both the above models we consider a special embedding vector which is padded at the beginning of a paragraph and learned during training. This vector allows us to identify the initial few sentences during greedy decoding.\nRNN Decoder. Another baseline we consider is our proposed model without the encoder. The. decoder hidden state is initialized with zeros. We observed that using a special start symbol as for the other baselines helped obtain better performance with this model. However, a start symbol did noi help when the model is equipped with an encoder as the hidden state initialization alone was gooc enough.\nWe do not place emphasis on the particular search algorithm in this work and thus use beam search using the coherence score heuristic for all models. A beam size of 100 was used. During decoding sentence candidates that have been already chosen are pruned from the beam. All RNNs use a hidden layer size of 100o. For the window network we used a window size of 3 and a hidden layer size of 2000. We initialize all models with pre-trained GloVe word embeddings..\nhttps://bitbucket.org/melsner/browncoherence/overview\nFigure 2: t-SNE embeddings of representations learned by the model for sentences from the test set. The embeddings are color coded by the position of the sentence in the document it appears."}, {"section_index": "9", "section_name": "4.3.4 RESULTS", "section_text": "We assess the performance of our model against baseline methods in table[3] The window network performs strongly compared to the other baselines. Our model does better by a significant margin by exploiting global context, demonstrating that global context is important to be successful in this task\nWhile the Entity-Grid model has been fairly successful for the order discrimination task in the past. we observe that it fails to discriminate between a large number of candidates. One reason could be that the feature representation is fairly less sensitive to local changes in sentence order (such as. swapping adjacent sentences). We did not use coreference resolution for computing the entity-grids. due to the computational overhead. This could potentially improve results by a few percentage points. The computational expense of obtaining parse trees and constructing grids on a large amount of data. prohibited us from experimenting with this model on the NSF abstracts data..\nThe sequence to sequence model falls short of the window network in performance. Interestingly,Li and Jurafsky (2016) observe that the seq2seq model outperforms the window network in an order discrimination task on wikipedia data. 
However, the wikipedia data considered in their work has an order of magnitude more data that the datasets considered here, and that could have potentially helped the generative model. These models are also expensive during inference since they involve computing and sampling from word distributions.\nIn Figure3|we attempt to visualize the sentence representations learned by the sentence encoder in our model. The figure shows 2-dimensional t-SNE embeddings of test set sentences from each of the datasets color coded by their positions in the source abstract. This shows that the model learns high-level structure in the documents, generalizing well to unseen documents. The structure is less apparent in the NSF data which we presume is because of the data diversity and longer documents. While approaches based on the content model ofBarzilay and Lee[(2004) attempt to explicitly capture topics by discovering clusters in sentences, we observe that the neural approach implicitly discovers such structure."}, {"section_index": "10", "section_name": "4.4 LEARNED SENTENCE REPRESENTATIONS", "section_text": "One of the original motivations for this work is the question whether we can learn high qualit sentence representations by learning to model text coherence. To address this question we trained ou. model on a large dataset of paragraphs. We chose the BookCorpus dataset (Kiros et al.|2015) fo. this purpose. We trained the model with two key changes from the models trained on the abstracts data - 1) In addition to the sentences in the paragraph being considered, we added more contrastive sentences from other paragraphs as well. 2) We use the bilinear scoring function. These techniques helped obtain better representations when training on large amounts of data.\nTo evaluate the quality of the sentence embeddings derived from the model, we use the evaluation pipeline of Kiros et al.(2015) for tasks that involve understanding sentence semantics. These evaluations are performed by training a classifier on top of the embeddings derived from the model so that the performance is indicative of the quality of sentence representations. We consider the semantic\nFirst Sentence Last Sentence (a) NIPS Abstracts (b) AAN Abstracts (c) NSF Abstracts\nAANI A1\n(a) Sentence similarity Method r p MSE Purely supervised methods DT-RNN Tai et al.2015] 0.792 0.732 0.382 LSTM Ta1 et al.,]2015 0.853 0.791 0.283 DT-LSTM(Ta1 et al. 2015) 0.868 0.808 0.253 Classifier trained on sentence embeddings skip-bow[Kiros et al.2 2015 0.782 0.724 0.398 uni-skip (Kiros et al., 2015 0.848 0.778 0.287 Ordering model 0.807 0.742 0.356 + BoW 0.842 0.775 0.299 + uni-skip 0.860 0.795 0.270\nrelatedness and paraphrase detection tasks. Our results are presented in tables4a] 4b] Results for only uni-directional versions of different models are discussed here for a reasonable comparison\nSkip-thought vectors are learned by predicting both the previous and next sentences given the curren. sentence. Following suit, we train two models - one predicting the correct order in the forwarc direction and another in the backward direction. Note that the sentence level RNN is still uni. directional in both cases. 
The numbers shown for the ordering model were obtained by concatenating the representations obtained from the two models.\nConcatenating the above representation with the bag of words representation (using the fine-tuned word embeddings) of the sentence further improves performance4 We believe the reason to be that the ordering model can choose to pay less attention to specific lexical information and instead focus on the high level document structure. Hence the two representations can be seen as capturing complementary semantics. Adding the skip-thought embedding features as well improves performance further.\nOur model has several key advantages over the skip-thought model. The skip-thought model has a word-level reconstruction objective and requires training with large softmax output layers. This limits the size of the vocabulary and makes training very time consuming (they use a vocabulary size of 20k. and report 2 weeks of training). Our model achieves comparable performance and does not have such a word reconstruction component. We are able to train with a large vocabulary of 400k words and the above results were obtained with a training time of 2 days..\nA conceptual issue surrounding word-level reconstruction is that it forces the model to predic1 both the meaning and syntax of the target sentence. This makes learning difficult since there are numerous ways of expressing the same idea in syntax. In our model we instead let the model discover features from a sentence which are both predictive (of the next sentence) and predictable (from the previous sentences) and interpret these set of features as a meaning representation. We believe this is an important distinction and hope to study these models further in the context of learning syntax independent semantic representations of sentences."}, {"section_index": "11", "section_name": "5 CONCLUSION", "section_text": "In this work we considered the challenging problem of coherently organizing a given set of sentences. Our RNN based model performs strongly compared to baseline methods as well as prior work on sentence ordering and order discrimination tasks. We further demonstrated that the model captures high level document structure and learns useful sentence representations when trained on large amounts of data. Our approach to the ordering problem deviates from most prior work that use handcrafted features. However, exploiting linguistic features for next sentence classification can\n4We used the same hyperparameters that were used for the abstracts data to train our model. The skip-bow and uni-skip embeddings have dimensionality 640, 2400 respectively. Representations from the ordering model have dimensionality 2000, and adding BoW features gives 2600 dimensional embeddings.\nTable 4: Performance comparison for the semantic similarity (SICK dataset) and paraphrase detection (MSR paraphrase corpus) tasks. In each table the first section shows some best performing supervised methods in the literature. The second section shows models relevant to the skip-thought model. The third section shows our models.\n(b) Paraphrase detection Method Acc F1 Purely supervised methods. Socher et al.2011 76.8 83.6 Madnan1 et al.(2012) 77.4 84.1 J1 and Eisenstein (2013 80.4 86.0 Classifier trained on sentence embeddings skip-bow (Kiros et al.)2015 67.8 80.3 uni-skip Kiros et al... 2015 73.0 81.9 Ordering model. 72.3 81.0 + BoW 74.0 81.9 + uni-skip 74.9 82.5\npotentially further improve performance on the task. 
Entity distribution patterns can provide usefu features about named entities that are treated as out of vocabulary words. The ordering problem car be further studied at higher level discourse units such as paragraphs, sections and chapters"}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "X. Chen. X. Oiu. and X. Huang. Neural sentence ordering. arXiy pr eprint arXiv:1607.06952, 2016\nM. Lapata. Automatic evaluation of information ordering: Kendall's tau. Computational Linguistics 32(4):471-484, 2006.\nR. Barzilay and L. Lee. Catching the drift: Probabilistic content models, with applications to. generation and summarization. arXiv preprint cs/0405039, 2004. J. Burstein, J. Tetreault, and S. Andreyev. Using entity-based features to model coherence in. student essays. In Human language technologies: The 2010 annual conference of the North American chapter of the Association for Computational Linguistics, pages 681-684. Association. for Computational Linguistics, 2010.\nM. Elsner, J. L. Austerweil, and E. Charniak. A unified local and global model for discourse. coherence. In HLT-NAACL, pages 436-443, 2007. P. W. Foltz, W. Kintsch, and T. K. Landauer. The measurement of textual coherence with latent semantic analysis. Discourse processes, 25(2-3):285-307, 1998. B. J. Grosz, S. Weinstein, and A. K. Joshi. Centering: A framework for modeling the local coherence. of discourse. Computational linguistics, 21(2):203-225, 1995. C. Guinaudeau and M. Strube. Graph-based local coherence modeling. In ACL (1), pages 93-103,. 2013. Y. Ji and J. Eisenstein. Discriminative improvements to distributional sentence similarity. In EMNLP.. pages 891-896, 2013. Y. Ji, T. Cohn, L. Kong, C. Dyer, and J. Eisenstein. Document context language models. In. International Conference on Learning Representations, Poster Paper, volume abs/1511.03962, 2015. D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980,. 2014. R. Kiros. Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler. Skip-thought. vectors. In Advances in Neural Information Processing Systems, pages 3276-3284, 2015. D. Klein and C. D. Manning. Accurate unlexicalized parsing. In Proceedings of the 41st Annual. Meeting on Association for Computational Linguistics- Volume 1, pages 423-430. Association for. Computational Linguistics, 2003. M. Lapata. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of. the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 545-552.\nidentification. In Proceedings of the 2012 Conference of the North American Chapter of the. Association for Computational Linguistics: Human Language Technologies, pages 182-190. Association for Computational Linguistics, 2012. E. Miltsakaki and K. Kukich. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(01):25-55, 2004. J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In. EMNLP, volume 14, pages 1532-43, 2014. D. R. Radev, M. T. Joseph, B. Gibson, and P. Muthukrishnan. A Bibliometric and Network Analysis of the field of Computational Linguistics. Journal of the American Society for Information Science and Technology, 2009. R. Socher, E. H. Huang, J. Pennin, C. D. Manning, and A. Y. Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing. Systems, pages 801-809, 2011. R. Soricut and D. 
Marcu. Discourse generation using utility-trained coherence models. In Proceed-. ings of the COLING/ACL on Main conference poster sessions, pages 803-810. Association for. Computational Linguistics, 2006 I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In. Advances in neural information processing systems, pages 3104-3112, 2014. K. S. Tai, R. Socher, and C. D. Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015. O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint. arXiv:1511.06391, 2015a.\nO. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015a O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. In Advances in Neural Information PrOC sinoSvstei 0674_2682.2015h\nTable 5: Visualizing salient words"}, {"section_index": "13", "section_name": "A WORD INFLUENCE", "section_text": "We attempt to understand what text level clues the model captures to perform the ordering task. Some techniques for visualizing neural network models in the context of text applications are discussed in Li et al.(2015a). Drawing inspiration from this work, we use gradients of prediction decisions with respect to the words of the correct sentence as a proxy for the salience of each word.\nFor each time step during decoding we do the following. Assume the sentence assignments for all previous time steps have been correct. let h be the current hidden state in this setting and s = (w1, ..., wn) be the correct next sentence candidate, the w; being its words. The score for this sentence is defined as e = f(s, h) (See equation|7). The importance of word w; in predicting s as gradients through the sentence encoder.\nTable[5|shows visualizations of a few selected abstracts. Words expressed in darker shades correspond to higher gradient norms. In the first example the model seems to be using the word clues 'first' 'second' and 'third'. A similar observation was made byChen et al.(2016) in their experiments. In the second example we observe that the model has paid attention to phrases such as 'We present', 'We argue' which are typical of abstract texts. The model has also focused on the word 'representation\n1.0 0.9 Ours 0.8 Window Network Seq2seq 0.7 0.5 0.6 ney 0.0 0.4 0.3 -0.5 0.2 Ours Window Network. 0.1 Seq2seq -1.0 0.0 3 4 5 6 7 8 9 10 11 12 13 14 1 2 3 4 5 6 7 8 9 10 11 12 13 14 Paragraph length. Sentence position\n(a) t scores of order predictions on paragraphs of a (b) Accuracy of predicting the correct sentence at a given length. given position.\nFigure 3: Performance with respect to paragraph length and sentence position - NIPS abstracts test data\nappearing in the first two sentences. Similarly in the third example, the words 'salient' and 'dates have been attended to. In the last example, the words 'token', 'tokens', 'tokenization' have received attention. We believe that these observations link to ideas from centering theory which state thai entity distributions in coherent discourses adhere to certain patterns. The model has implicitly learned learned these patterns with no syntax annotations or handcrafted features."}, {"section_index": "14", "section_name": "8 PERFORMANCE ANALYSIS", "section_text": "Figure 3b|compares the average prediction accuracy for a given sentence position in the test set. 
It is interesting to observe that all models fair well in predicting the first sentence. The greedy decoding procedure also contributes to the decline in performance as we move right. Our model remains more robust compared to the other two methods.\nAnother trend to be observed is that as the context size increases (2 for next sentence generation, 3 fo window network, complete sentential history for our model) the performance decline is more gradua\nFigure|3a|shows the average t for the models on the NIPS abstracts test set for a given paragraph. length. The performance of local approaches dies down fairly quickly as we can expect and face difficulties handling lengthy paragraphs. Our model attempts to maintain consistent performance. with increasing paragraph size with a more gradual decline in performance..\nNIPS AAN NSF Entity Grid. 0.712 0.660 Seq2seq (Uni) 0.890 0.888 Window network 0.944 0.942 0.936 RNN Decoder 0.964 0.952 0.972 Proposed model 0.970 0.955 0.982\nTable 7: Accuracy (of predicting sentence position) and - metrics for the 1 of 100 classification task\nNIPS Abstracts AAN Abstracts NSF Abstracts Accuracy T Accuracy T Accuracy T Entity Grid 21.30 0.12 23.22 0.11 Seq2seq (Uni) 38.36 0.40 48.47 0.50 Window network 59.20 0.67 64.02 0.71 44.65 0.46 RNN Decoder 67.05 0.73 68.54 0.74 79.01 0.76 Proposed model 72.25 0.79 72.25 0.77 83.85 0.81\nClassify 1 of 2 (Pairwise ranking). We create a random permutation for each document (differen from the correct order) and compute the pairwise classification accuracy. The mean result over 100 such experiments is reported in table 6a\nClassify 1 of N. In this experiment we consider a pool of N = 100 permutations where one of them is the correct order and the rest are random permutations. We compute coherence scores of all orderings in the pool and pick the best one. Table|6b|reports how accurately each model identified the correct order. Table|7|computes the same metrics in table|3 for the orders that were chosen by the model.\nThe performance gaps are not significant for the first experiment. However, they are more pronounced. in the second experiment. The reason is because in the binary classification setting, a majority of. the permutations will be very different from the correct order, and the models can discriminate well for these cases. However, when choosing from a large set of permutations the models need to be. sensitive to permutations which deviate from the correct ordering by a small amount (Eg: only two. sentences out of place from correct order). Models incapable of discriminating at this finer level. perform tend to perform poorly..\nNote that the second experiment is closer to the practical task of finding an appropriate presentation ordering for a given collection of sentences. while taking the decoding algorithm out of the picture\nWe perform order discrimination experiments on the Penn Treebank dataset. The experimenta. protocol is identical to that of Ji et al. (2015). The standard train, dev, test split was used. The. vocabulary comprises the most 10,o00 frequent words and an additional special symbol representing low frequency words. A bootstrapping procedure is used for evaluation - 1o00 test sets are generated. by randomly sampling with replacement 155 documents from the test set and constructing a random permutation different from the correct order for each sampled document. We used the same mode. 
hyperparameters described in Section 4.1. We did not use pre-trained word embeddings, for fair comparison with the reference method. Performance statistics are computed over the 1000 bootstrapped test sets. Table 8 shows a performance comparison.

A key difference of this dataset compared to the other datasets is that the documents are more open domain and they are significantly longer. Our model performs strongly despite these differences.

Table 8: Order discrimination on PTB dataset. Reference results re-printed from Ji et al. (2015).

                                         Accuracy Mean (%)   Standard deviation (%)
Hierarchical RNNLM (Lin et al., 2015)    75.32               4.42
DCLM (Ji et al., 2015)                   83.26               3.77
Proposed model                           90.25               2.38

We briefly discussed two techniques in Section 4.4 that helped obtain better sentence representations: 1) adding contrastive sentence candidates for the decoder, and 2) the bilinear scoring function.

Table 9 shows results of ablative experiments which demonstrate the effect of these ideas. Each experimental setting indicates whether or not contrastive sentences were used and the type of scoring function employed. Note that these models were trained to predict the correct order of sentences only in the forward direction (as opposed to the results reported in Table 4, where two models were trained to predict the order in the forward and backward directions and the representations were concatenated). Models were chosen using the validation set of the relevant task.

Table 9: Impact of using contrastive sentences and the bilinear scoring function for learning sentence representations.

Contrastive   Scoring     Semantic Relatedness       Paraphrase detection
Sentences     Function    r       ρ       MSE        Acc     F1
No            MLP         0.631   0.568   0.613      0.687   0.791
No            Bilinear    0.650   0.609   0.588      0.681   0.785
Yes           MLP         0.718   0.648   0.494      0.689   0.785
Yes           Bilinear    0.787   0.727   0.387      0.712   0.804

We observe the trend that adding contrastive sentences or switching from the MLP to the bilinear scoring function produces a sharp improvement in the result for the semantic relatedness task. Although the performance differences are less significant in the paraphrase detection task when the individual factors are changed, we observe an overall performance gain by using contrastive sentences and the bilinear scoring function.

This confirms our intuition that adding contrastive sentences makes the task more challenging and leads to better representations being learned. Similarly, the bilinear scoring function takes a generative approach of trying to regress the next sentence, as opposed to the MLP scoring function which treats the prediction task as purely discriminative, and hence encourages the learning of better representations.

"}, {"section_index": "15", "section_name": "D MODEL DETAILS", "section_text": "The LSTM update in Equation (1) of the paper,

h_t, c_t = LSTM(h_{t-1}, c_{t-1})

is as follows:

i_t = σ(W_{hi} h_{t-1} + b_i)
f_t = σ(W_{hf} h_{t-1} + b_f)
o_t = σ(W_{ho} h_{t-1} + b_o)
c̃_t = tanh(W_{hc} h_{t-1} + b_c)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
h_t = o_t ⊙ tanh(c_t)

where the W_{h·} matrices and b_{·} vectors are learnable parameters. The LSTM update with an input, used in the decoder (Equation 6),

h_t, c_t = LSTM(h_{t-1}, c_{t-1}, x_{t-1})

is given by the following:

i_t = σ(W_{hi} h_{t-1} + W_{xi} x_{t-1} + b_i)
f_t = σ(W_{hf} h_{t-1} + W_{xf} x_{t-1} + b_f)
o_t = σ(W_{ho} h_{t-1} + W_{xo} x_{t-1} + b_o)
c̃_t = tanh(W_{hc} h_{t-1} + W_{xc} x_{t-1} + b_c)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
h_t = o_t ⊙ tanh(c_t)

where the W_{h·}, W_{x·} matrices and b_{·} vectors are learnable parameters."}] |
HJV1zP5xg | [{"section_index": "0", "section_name": "DIVERSE BEAM SEARCH: DECODING DIVERSE SOLUTIONS FROM NEURAL SEOUENCE MODELS", "section_text": "Ashwin K Vijayakumar1, Michael Cogswelll, Ramprasaath R. Selvaraju', Qing Sur Stefan Lee1, David Crandall2 & Dhruv Batra'\n{ashwinkv, cogswell, ram2l, sunqing, steflee}@vt.edu djcran@indiana.edu, dbatra@vt.edu\n1 Department of Electrical and Computer Engineering Virginia Tech, Blacksburg, VA, USA\n2 School of Informatics and Computing. Indiana University, Bloomington, IN, USA\nNeural sequence models are widely used to model time-series data. Equally ubiq. uitous is the usage of beam search (BS) as an approximate inference algorithm to decode output sequences from these models. BS explores the search space in a greedy left-right fashion retaining only the top B candidates. This tends to result. in sequences that differ only slightly from each other. Producing lists of nearly identical sequences is not only computationally wasteful but also typically fails to capture the inherent ambiguity of complex AI tasks. To overcome this prob- lem, we propose Diverse Beam Search (DBS), an alternative to BS that decodes a. list of diverse outputs by optimizing a diversity-augmented objective. We observe that our method not only improved diversity but also finds better top 1 solutions. by controlling for the exploration and exploitation of the search space. Moreover. these gains are achieved with minimal computational or memory overhead com- pared to beam search. To demonstrate the broad applicability of our method, we present results on image captioning, machine translation, conversation and visual. question generation using both standard quantitative metrics and qualitative hu-. man studies. We find that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In the last few years, Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs) or more generally, neural sequence models have become the standard choice for modeling. time-series data for a wide range of applications including speech recognition (Graves et al., 2013),. machine translation (Bahdanau et al., 2014), conversation modeling (Vinyals & Le, 2015), image and video captioning (Vinyals et al., 2015; Venugopalan et al., 2015), and visual question answering. (Antol et al., 2015). RNN based sequence generation architectures model the conditional probability,. Pr(y|x) of an output sequence y = (y1, ..., yr) given an input x (possibly also a sequence); where. the output tokens yt are from a finite vocabulary, V..\nInference in RNNs. Maximum a Posteriori (MAP) inference for RNNs is the task of finding the. most likely output sequence given the input. Since the number of possible sequences grows as. V|T', exact inference is NP-hard - so, approximate inference algorithms like beam search (BS) are. commonly employed. BS is a heuristic graph-search algorithm that maintains the B top-scoring. partial sequences expanded in a greedy left-to-right fashion. Fig. 1 shows a sample BS search tree.\nLack of Diversity in BS. Despite the widespread usage of BS, it has long been understood that solutions decoded by BS are generic and lacking in diversity (Finkel et al., 2006; Gimpel et al."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Single engine train rolling down the tracks. 
A locomotive drives along the tracks amongst trees and bushes. An engine is coming down the train track. A steam locomotive is blowing steam. An old fashion train with steam coming out of its pipe. A black and red train moving down a train track\ni) The production of near-identical beams make BS a computationally wasteful algorithm, witl essentially the same computation being repeated for no significant gain in performance. ii) Due to loss-evaluation mismatch (i.e. improvements in posterior-probabilities not necessarily corresponding to improvements in task-specific metrics), it is common practice to deliberately throttle BS to become a poorer optimization algorithm by using reduced beam widths (Vinyals et al., 2015; Karpathy & Fei-Fei, 2015; Ferraro et al., 2016). This treatment of an optimizatior algorithm as a hyperparameter is not only intellectually dissatisfying but also has a significant practical side-effect - it leads to the decoding of largely bland, generic, and \"safe\"' outputs, e.g always saying \"I don't know' in conversation models (Kannan et al., 2016). iii) Most importantly, lack of diversity in the decoded solutions is fundamentally crippling in AJ problems with significant ambiguity - e.g. there are multiple ways of describing an image oi responding in a conversation that are \"correct\"' and it is important to capture this ambiguity by fnding several diverse plausible hvnothese\nOverview and Contributions. To address these shortcomings, we propose Diverse Beam Searc.. (DBs) - a general framework to decode a set of diverse sequences that can be used as an alternativ to BS. At a high level, DBS decodes diverse lists by dividing the given beam budget into groups an. enforcing diversity between groups of beams. Drawing from recent work in the probabilistic graph. ical models literature on Diverse M-Best (DivMBest) MAP inference (Batra et al., 2012; Prasa et al., 2014; Kirillov et al., 2015), we optimize an objective that consists of two terms - the sequenc likelihood under the model and a dissimilarity term that encourages beams across groups to differ. This diversity-augmented model score is optimized in a doubly greedy manner - greedily optimizing. along both time (like BS) and groups (like DivMBest)..\nOur primary technical contribution is Diverse Beam Search, a doubly greedy approximate infer ence algorithm to decode diverse sequences from neural sequence models. We report results on image captioning, machine translation, conversations and visual question generation to demonstrate the broad applicability of DBS. Results show that DBS produces consistent improvements on both task-specific oracle and other diversity-related metrics while maintaining run-time and memory re quirements similar to BS. We also evaluate human preferences between image captions generated by BS or DBS. Further experiments show that DBS is robust over a wide range of its parameter values and is capable of encoding various notions of diversity through different forms of the diversty term\nOverall, our algorithm is simple to implement and consistently outperforms BS in a wide range. of domains without sacrificing efficiency. Our implementation is publicly available at https : //github. com/ ashwinkalyan/dbs. Additionally, we provide an interactive demonstration\nBeam Search coming down the tracks A steam engine train travelling down train tracks. aveling_dow rack A steam engine train travelling down tracks. train track with A steam engine train travelling through a forest. 
[Figure 1 (image): a search tree over caption prefixes. Beam Search produces: "A steam engine train travelling down train tracks." / "A steam engine train travelling down tracks." / "A steam engine train travelling through a forest." / "A steam engine train travelling through a lush green forest." / "A steam engine train travelling through a lush green countryside." / "A train on a train track with a sky background." Diverse Beam Search produces: "A steam engine travelling down train tracks." / "A steam engine train travelling through a forest." / "An old steam engine train travelling down train tracks." / "An old steam engine train travelling through a forest." / "A black train is on the tracks in a wooded area." / "A black train is on the tracks in a rural area." Ground truth captions: "Single engine train rolling down the tracks." / "A locomotive drives along the tracks amongst trees and bushes." / "An engine is coming down the train track."]

Figure 1: Comparing image captioning outputs decoded by BS (top) and our method, Diverse Beam Search (middle) - we notice that BS captions are near-duplicates with similar shared paths in the search tree and minor variations in the end. In contrast, DBS captions are significantly diverse and similar to the variability in human-generated ground truth captions (bottom).

"}, {"section_index": "2", "section_name": "PRELIMINARIES: DECODING RNNS WITH BEAM SEARCH", "section_text": "The Decoding Problem. RNNs are trained to estimate the likelihood of sequences of tokens from a finite dictionary V given an input x. The RNN updates its internal state and estimates the conditional probability distribution over the next output given the input and all previous output tokens. We denote the logarithm of this conditional probability distribution over all tokens at time t as θ(y_t) = log Pr(y_t | y_{t-1}, ..., y_1, x). To avoid notational clutter, we index θ(·) with a single variable y_t, but it should be clear that it depends on all previous outputs, y_{[t-1]}. We write the log probability of a partial solution (i.e. the sum of log probabilities of all tokens decoded so far) as Θ(y_{[t]}) = Σ_{τ ∈ [t]} θ(y_τ). The decoding problem is then the task of finding a sequence y that maximizes Θ(y).

As each output is conditioned on all the previous outputs, decoding the optimal length-T sequence in this setting can be viewed as MAP inference on a T-order Markov chain with nodes corresponding to output tokens at each time step. Not only does the size of the largest factor in such a graph grow as |V|^T, but computing these factors also requires repetitively evaluating the sequence model. Thus, approximate algorithms are employed and the most prevalent method is beam search (BS).

Beam search is a heuristic search algorithm which stores the top B highest scoring partial candidates at each time step, where B is known as the beam width. Let us denote the set of B solutions held by BS at the start of time t as Y_{[t-1]} = {y_{1,[t-1]}, ..., y_{B,[t-1]}}. At each time step, BS considers all possible single token extensions of these beams given by the set Y_t = Y_{[t-1]} × V and retains the B highest scoring extensions. More formally, at each step the beams are updated as

Y_{[t]} = argmax_{y_{1,[t]}, ..., y_{B,[t]} ∈ Y_t}  Σ_{b ∈ [B]} Θ(y_{b,[t]})   s.t.  y_{i,[t]} ≠ y_{j,[t]}  ∀ i ≠ j    (1)

The above objective can be trivially maximized by sorting all B·|V| members of Y_t by their log probabilities and selecting the top B. This process is repeated until time T, and the most likely sequence is selected by ranking the B complete beams according to their log probabilities.
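In code, one BS update of Eq. (1) is a single sort over all candidate extensions. The sketch below represents beams as token tuples; log_prob_fn stands for one evaluation of the neural sequence model and is an assumed interface of ours, not an API from any particular library.

```python
from heapq import nlargest

def beam_search_step(beams, log_prob_fn, B):
    # beams: {prefix tuple: Theta(prefix)}; log_prob_fn(prefix) returns
    # {token: theta(token)} = log Pr(token | prefix, x) over the dictionary V.
    candidates = {}
    for prefix, score in beams.items():
        for token, lp in log_prob_fn(prefix).items():
            candidates[prefix + (token,)] = score + lp
    # Keep the B highest-scoring of the B * |V| extensions.
    return dict(nlargest(B, candidates.items(), key=lambda kv: kv[1]))
```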
While this method allows for multiple sequences to be explored in parallel, most completions tend to stem from a single highly valued beam - resulting in outputs that are often only minor perturbations of a single sequence (and typically only towards the end of the sequences).

To overcome this, we augment the objective in Eq. 1 with a dissimilarity term Δ(Y_{[t]}) that measures the diversity between candidate sequences, assigning a penalty Δ(Y_{[t]})[c] to each possible sequence completion c ∈ V. Jointly optimizing this augmented objective for all B candidates at each time step is intractable as the number of possible solutions grows with |V|^B (easily 10^60 for typical language modeling settings). To avoid this, we opt for a greedy procedure that divides the beam budget B into G groups and promotes diversity between these groups. The approximation is doubly greedy - across both time and groups - so that Δ(Y_{[t]}) is constant with respect to the other groups and we can sequentially optimize each group using regular BS. We now explain the specifics of our approach.

Diverse Beam Search. As joint optimization is intractable, we form G smaller groups of beams and optimize them sequentially. Consider a partition of the set of beams Y_{[t]} into G smaller sets Y^g_{[t]}. In the example of Fig. 2, B = 6 beams are divided into G = 3 differently colored groups containing B′ = 2 beams each.

Considering diversity only between groups reduces the search space at each time step; however, inference remains intractable. To enforce diversity efficiently, we consider a greedy strategy that steps each group forward in time sequentially while considering the others fixed. Each group can then evaluate the diversity term with respect to the fixed extensions of previous groups, returning the search space to B′·|V|. In the snapshot shown in Fig. 2, the third group is being stepped forward at time step t = 4 and the previous groups have already been stepped forward. With this staggered beam front, the diversity term of the third group can be computed using these completions. Here we use hamming diversity, which adds a diversity penalty of -1 for each appearance of a possible extension word at the same time step in a previous group - 'birds', 'the', and 'an' in the example - and 0 to all other possible completions. We discuss other forms for the diversity function in Section 5.1.

[Figure 2 (image): three beam groups for a bird image at time t. Group 1: "a flock of birds flying over the ocean" / "a flock of birds flying over a beach". Group 2: "birds flying over the water in the sun" / "birds flying the water near a mountain". Group 3 (being extended): "several birds are ..." / "several birds fly ...", with scores modified to include diversity, e.g. θ('the') + λΔ('birds', 'the', 'an')['the'] and θ('over') + λΔ('birds', 'the', 'an')['over'].]

Figure 2: Diverse beam search operates left-to-right through time and top to bottom through groups. Diversity between groups is combined with joint log probabilities, allowing continuations to be found efficiently. The resulting outputs are more diverse than for standard approaches.

As we optimize each group with the previous groups fixed, extending group g at time t amounts to a standard BS step using dissimilarity-augmented log probabilities and can be written as:

Y^g_{[t]} = argmax_{y^g_{1,[t]}, ..., y^g_{B′,[t]}}  Σ_{b ∈ [B′]}  Θ(y^g_{b,[t]}) + λ Δ(⋃_{h=1}^{g-1} Y^h_{[t]})[y^g_{b,t}]
            s.t.  λ > 0,  y^g_{i,[t]} ≠ y^g_{j,[t]}  ∀ i ≠ j    (2)

where λ is a scalar controlling the strength of the diversity term. The full procedure to obtain diverse sequences using our method, Diverse Beam Search (DBS), is presented in Algorithm 1. It consists of two main steps for each group at each time step:

1) augmenting the log probabilities of each possible extension with the diversity term computed from previously advanced groups (Algorithm 1, Line 5), and
2) running one step of a smaller BS with B′ beams using the augmented log probabilities to extend the current group (Algorithm 1, Line 6).

Algorithm 1: Diverse Beam Search
1: // perform a diverse beam search with G groups using a beam width of B
2: for t = 1, ..., T do
3:    // first group: one step of beam search without diversity
      Y^1_{[t]} ← argmax Σ_{b ∈ [B′]} Θ(y^1_{b,[t]})
4:    for g = 2, ..., G do
5:       // augment log probabilities with diversity penalty
         Θ(y^g_{b,[t]}) ← Θ(y^g_{b,[t]}) + λ Δ(⋃_{h=1}^{g-1} Y^h_{[t]})[y^g_{b,t}],  b ∈ [B′], y^g_{b,[t]} ∈ Y^g_t, λ > 0
6:       // one step of beam search for group g with augmented scores
         Y^g_{[t]} ← argmax Σ_{b ∈ [B′]} Θ(y^g_{b,[t]})  s.t. y^g_{i,[t]} ≠ y^g_{j,[t]} ∀ i ≠ j
7: Return set of B solutions, Y_{[T]} = ⋃_{g=1}^{G} Y^g_{[T]}

Note that the first group (g = 1) is not 'conditioned' on other groups during optimization, so our method is guaranteed to perform at least as well as a beam search of size B′.
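A compact sketch of Algorithm 1 with the hamming diversity described above. The data layout and the decision to fold the penalty into the running score follow the algorithm as written (Line 5 overwrites Θ); as in the previous sketch, log_prob_fn is an assumed model interface, not the authors' released implementation.

```python
from heapq import nlargest

def diverse_beam_search(log_prob_fn, B, G, T, lam):
    group_width = B // G                    # B' beams per group
    groups = [{(): 0.0} for _ in range(G)]
    for t in range(T):
        used = {}                           # token -> count in earlier groups at step t
        for g in range(G):
            candidates = {}
            for prefix, score in groups[g].items():
                for token, lp in log_prob_fn(prefix).items():
                    # Line 5: hamming diversity penalty from groups 1..g-1
                    penalty = lam * used.get(token, 0)
                    candidates[prefix + (token,)] = score + lp - penalty
            # Line 6: one step of width-B' beam search on augmented scores
            groups[g] = dict(nlargest(group_width, candidates.items(),
                                      key=lambda kv: kv[1]))
            for prefix in groups[g]:        # record choices for later groups
                used[prefix[-1]] = used.get(prefix[-1], 0) + 1
    return [prefix for group in groups for prefix in group]
```

Setting G = 1 makes the penalty vanish and recovers the plain beam search step sketched earlier, which matches the guarantee noted above for the first group.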
"}, {"section_index": "4", "section_name": "4 RELATED WORK", "section_text": "Diverse M-Best Lists. The task of generating diverse structured outputs from probabilistic models has been studied extensively (Park & Ramanan, 2011; Batra et al., 2012; Kirillov et al., 2015; Prasad et al., 2014). Batra et al. (2012) formalized this task for Markov Random Fields as the DivMBest problem and presented a greedy approach which solves for outputs iteratively, conditioning on previous solutions to induce diversity. Kirillov et al. (2015) show how these solutions can be found jointly (non-greedily) for certain kinds of energy functions. The techniques developed by Kirillov are not directly applicable to decoding from RNNs, which do not satisfy the assumptions made.

Most related to our proposed approach is the work of Gimpel et al. (2013), who applied DivMBest to machine translation using beam search as a black-box inference algorithm. Specifically, in this
approach, DivMBest knows nothing about the inner-workings of BS and simply makes B sequential. calls to BS to generate B diverse solutions. This approach is extremely wasteful because BS is. called B times, run from scratch every time, and even though each call to BS produces B solutions,. only one solution is kept by DivMBest. In contrast, DBS avoids these shortcomings by integrating diversity within BS such that no beams are discarded. By running multiple beam searches in parallel. and at staggered time offsets, we obtain large time savings making our method comparable to a single run of classical BS. One potential disadvantage of our method w.r.t. Gimpel et al. (2013) is that sentence-level diversity metrics cannot be incorporated in DBS since no group is complete when. diversity is encouraged. However, as observed empirically by us and Li et al. (2015), initial words tend to disproportionally impact the diversity of the resultant sequences - suggesting that later words. may not be important for diverse inference..\nDiverse Decoding for RNNs. Efforts have been made by Li et al. (2015) and Li & Jurafsky (2016) to produce diverse decodings from recurrent models for conversation modeling and machine trans. lation. Both of these works propose new heuristics for creating diverse M-Best lists and employ. mutual information to re-rank lists of sequences. The latter achieves a goal separate from ours,. which is simply to re-rank diverse lists..\nLi & Jurafsky (2016) proposes a BS diversification heuristic that discourages beams from sharing common roots, implicitly resulting in diverse lists. Introducing diversity through a modified objec tive (as in DBS) rather than via a procedural heuristic provides easier generalization to incorporate different notions of diversity and control the exploration-exploitation trade-off as detailed in Section 5.1. Furthermore, we find that DBS outperforms the method of Li & Jurafsky (2016).\nLi et al. (2015) introduced a novel decoding objective that maximizes mutual information between inputs and predicted outputs to penalize generic sequences. This operates on a principle orthogo nal and complementary to DBS and Li & Jurafsky (2016). It works by penalizing utterances that are generally more frequent (diversity independent of input) rather than penalizing utterances that are similar to other utterances produced for the same input (diversity conditioned on input). Fur thermore, the input-independent approach requires training a new language model for the target language while DBS just requires a diversity function . Combination of these complementary techniques is left as interesting future work.\nIn other recent work, Wu et al. (2016) modify the beam search objective by introducing length normalization to favor longer sequences and a coverage penalty that favors sequences that account for the complete input sequence. While the coverage term does not generalize to all neural sequence models, the length-normalization term can be implemented by modifying the joint-log-probability of each sequence. Although the goal of this method is not to produce diverse lists and hence not directly comparable, it is a complementary technique that can be used in conjunction with our diverse decoding method."}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "In this section, we evaluate our approach on image captioning, machine translation, conversation and. 
visual question generation tasks to demonstrate both its effectiveness against baselines and its gen. eral applicability to any inference currently supported by beam search. We also analyze the effects. of DBS parameters, explore human preferences for diversity, and discuss diversity's importance in explaining complex images. We first explain the baselines and evaluations used in this paper..\nBaselines & Metrics. Apart from classical beam search, we compare DBS with the diverse decoding method proposed in Li & Jurafsky (2016). We also compare against two other complementary decoding techniques proposed in Li et al. (2015) and Wu et al. (2016). Note that these two techniques are not directly comparable with DBS since the goal is not to produce diverse lists. We now provide a brief description of the comparisons mentioned:\n- Li & Jurafsky (2016): modify BS by introducing an intra-sibling rank. For each partial solution the set of V] beam extensions are sorted and assigned intra-sibling ranks k E [V| in order\nWe compare to our own implementations of these methods as none are publicly available. Both Li & Jurafsky (2016) and Li et al. (2015) develop and use re-rankers to pick a single solution from the generated lists. Since we are interested in evaluating the quality of the generated lists and in isolating the gains due to diverse decoding, we do not implement any re-rankers, simply sorting by log-probability.\nWe evaluate the performance of the generated lists using the following two metrics\nSimultaneous improvements in both metrics indicate that output sequences have increased diversity without sacrificing fluency and correctness with respect to target tasks.\nNumber of Groups (G). Setting G=B allows for the maximum exploration of the search space while setting G=1 reduces DBS to BS, resulting in increased exploitation of the search-space around the 1-best decoding. Empirically, we find that maximum exploration correlates with improved oracle accuracy and hence use G=B to report results unless mentioned otherwise. See the supplement for a comparison and more details.\nDiversity Strength (). The diversity strength X specifies the trade-off between the model score and. diversity terms. As expected, we find that a higher value of produces a more diverse list; however. very large values of can overpower model score and result in grammatically incorrect outputs. We set via grid search over a range of values to maximize oracle accuracies achieved on the validation. set. We find a wide range of values (0.2 to 0.8) work well for most tasks and datasets..\nChoice of Diversity Function (). In Section 3, we defined () as a function over a set of partial. solutions that outputs a vector of dissimilarity scores for all possible beam completions. Assuming. that each of the previous groups influences the completion of the current group independently, we. 1 Y +1 3, we illustrated a simple hamming diversity of this form that penalizes selection of tokens propor-. tionally to the number of time it was used in previous groups. However, this factorized diversity. term can take various forms in our model - with hamming diversity being the simplest. For lan-. guage models, we study the effect of using cumulative (i.e. considering all past time steps), n-gram. and neural embedding based diversity functions. Each of these forms encode differing notions of diversity and result in DBS outperforming BS. We find simple hamming distance to be effective and. 
report results based on this diversity measure unless otherwise specified. More details about these. forms of the diversity term are provided in the supplementary..\nof decreasing log probabilities, Ot(yt). The log probability of an extension is then reduced in. proportion to its rank, and continuations are re-sorted under these modified log probabilities to. select the top B 'diverse' beam extensions.. Li et al. (2015): train an additional unconditioned target sequence model U(y) and perform BS. decoding on an augmented objective P(y[x) - XU(y), penalizing input-independent decodings. Wu et al. (2016) modify the beam-search objective by introducing length-normalization that fa. vors longer sequences. The joint log-probability of completed sequences is divided by a factor (5 + |y)/(5 + 1)a, where E [0, 1]\nOracle Accuracy: Oracle or top k accuracy w.r.t. some task-specific metric, such as BLEU (Pap. ineni et al., 2002) or SPICE (Anderson et al., 2016), is the maximum value of the metric achieved. over a list of k potential solutions. Oracle accuracy is an upper bound on the performance of any. re-ranking strategy and thus measures the maximum potential of a set of outputs.. Diversity Statistics: We count the number of distinct n-grams present in the list of generated. outputs. Similar to Li et al. (2015), we divide these counts by the total number of words generatec to bias against long sentences.\nHere we discuss the impact of the number of groups, strength of diversity , and various forms of. diversity for language models. Note that the parameters of DBS (and other baselines) were tuned on a held-out validation set for each experiment. The supplement provides further discussion and experimental details."}, {"section_index": "6", "section_name": "5.2 IMAGE CAPTIONING", "section_text": "Dataset and Models. We evaluate on two datasets - COCO (Lin et al., 2014) and PASCAL-50S (Vedantam et al., 2015). We use the public splits as in Karpathy & Fei-Fei (2015) for COCO. PASCAL-50S is used only for testing (with 200 held out images used to tune hyperparameters). We. train a captioning model (Vinyals et al., 2015) using the neuraltalk2' code repository..\nResults. Table 1 shows Oracle (top k) SPICE for different values of k. DBS consistently outper forms BS and Li & Jurafsky (2016) on both datasets. We observe that gains on PASCAL-50S are more pronounced (7.14% and 4.65% SPICE@20 improvements over BS and Li & Jurafsky (2016)) than COCO. This suggests diverse predictions are especially advantageous when there is a mismatch between training and testing sets, implying DBS may be better suited for real-world applications.\nTable 1 also shows the number of distinct n-grams produced by different techniques. Our method. produces significantly more distinct n-grams (almost 300% increase in the number of 4-grams pro- duced) as compared to BS. We also note that our method tends to produce slightly longer captions compared on average. Moreover, on the PASCAL-50S test split we observe that DBS finds more. likely top-1 solutions on average - DBS obtains an average maximum log probability of -6.53 op. posed to -6.91 found by BS of the same beam width. This empirical evidence suggests that using. DBS as a replacement to BS may lead to lower inference approximation error..\nTable 1: Oracle accuracy and distinct n-grams on COCO and PASCAL-50S datasets for image captioning a B = 20. While we report SPICE, we observe similar trends in other metrics (reported in supplement)..\nHuman Studies. 
To evaluate human preference between captions generated by DBS and BS, we. perform a human study via Amazon Mechanical Turk using all 1000 images of PASCAL-50S. For. each image, both DBS and standard BS captions are shown to 5 different users. They are then asked \"Which of the two robots understands the image better?\" In this forced-choice test, DBS captions. were preferred over BS 60% of the time by human annotators..\nIs diversity always needed? While these results show that diverse outputs are important for systems that interact with users, is diversity always beneficial? While images with many objects (e.g., a park or a living room) can be described in multiple ways, the same is not true when there are few objects (e.g., a close up of a cat or a selfie). This notion is studied by Ionescu et al. (2016), which defines a \"difficulty score\"': the human response time for solving a visual search task. On the PASCAL- 50S dataset, we observe a positive correlation (p = 0.73) between difficulty scores and humans preferring DBS to BS. Moreover, while DBS is generally preferred by humans for 'difficult' images both are about equally preferred on 'easier' images. Details are provided in the supplement."}, {"section_index": "7", "section_name": "5.3 MACHINE TRANSLATION", "section_text": "'https://github.com/karpathy/neuraltalk2 2https://github.com/harvardnlp/seq2seq-attr\nDataset Method Oracle Accuracy (SPICE) Diversity Statistics @1 @5 @10 @20 distinct-1 distinct-2 distinct-3 distinct-4 Beam Search 4.933 7.046 7.949 8.747 0.12 0.57 1.35 2.50 Li & Jurafsky (2016) 5.083 7.248 8.096 8.917 0.15 0.97 2.43 5.31 PASCAL-50S DBS 5.357 7.357 8.269 9.293 0.18 1.26 3.67 7.33 Wu et al. (2016) 5.301 7.322 8.236 8.832 0.16 1.10 3.16 6.45 Li et al. (2015) 5.129 7.175 8.168 8.560 0.13 1.15 3.58 8.42 Beam Search 16.278 22.962 25.145 27.343 0.40 1.51 3.25 5.67 Li & Jurafsky (2016) 16.351 22.715 25.234 27.591 0.54 2.40 5.69 8.94 COCO DBS 16.783 23.081 26.088 28.096 0.56 2.96 7.38 13.44 Wu et al. (2016) 16.642 22.643 25.437 27.783 0.54 2.42 6.01 7.08 Li et al. (2015) 16.749 23.271 26.104 27.946 0.42 1.37 3.46 6.10\nWe use the WMT'14 dataset containing 4.5M sentences to train our machine translation models.. We train stacking LSTM models as detailed in Luong et al. (2015), consisting of 4 layers and 1024-. dimensional hidden states. While decoding sentences, we employ the same strategy to replace UNK. tokens. We train our models using the publicly available seq2 seq-at tn? code repository. We re-. port results on news-test-2013 and news-test-2014 and use the news-test-2012 to tune the parameters of DBS. We use sentence level BLEU scores to compute oracle metrics and report distinct n-grams.\nsimilar to image captioning. Results are shown in Table 2 and we again find that DBS consistently outperforms all baselines.\nTable 2: Quantitative results on English-German translation on the newstest-2013 and newstest-2014 datase combined (at B = 20).\nMethod Oracle Accuracy (BLEU-4) Diversity Statistics @1 @5 @10 @20 distinct-1 distinct-2 distinct-3 distinct-4 Beam Search 20.5 22.4 23.8 24.2 0.04 0.75 2.10 3.23 Li & Jurafsky (2016) 20.7 22.6 24.0 24.3 0.04 0.81 2.92 4.61 DBS 20.8 22.9 24.4 24.8 0.06 0.95 3.67 5.54 Wu et al. (2016) 20.6 22.6 24.3 24.6 0.05 0.88 2.83 4.50 Li et al. (2015) 20.7 23.1 24.4 24.6 0.04 0.86 2.76 4.31"}, {"section_index": "8", "section_name": "5.4 DIALOG GENERATION", "section_text": "Dialog generation is a task that is inherently diverse as there are multiple valid ways to respond to a statement. 
We train a seq2seq model consisting of LSTMs as in Vinyals & Le (2015) on the Cornell Movie Dialogs Corpus (Danescu-Niculescu-Mizil & Lee, 2011) using the neura1convo' reposi- tory. The training dataset consists of 222,182 conversational exchanges between movie characters Since automatic evaluation of dialog generation responses is an open research problem with exist- ing metrics being poorly correlated with human judgement (Liu et al., 2016), we show qualitative results to demonstrate the effectiveness of DBS. Table 3 compares BS and DBS at B=3.\nTable 3: Qualitative examples of dialog generation comparing best-first search, BS and DBS"}, {"section_index": "9", "section_name": "5.5 VISUAL OUESTION GENERATION", "section_text": "We also report results on Visual Question Generation (VQG) (Mostafazadeh et al., 2016), where. a model is trained to produce questions about an image. Generating visually focused questions is interesting because it requires reasoning about multiple problems that are central to vision - e.g.,. object attributes, relationships between objects, and natural language. Furthermore, many questions. could make sense for one image, so it is important that lists of generated questions be diverse..\nhttps://github.com/macournoyer/neuralconvo\nPrompt Beam Search Diverse Beam Search I don't know. I don't know. Why do you want to be the president? I don't know... Because you know! I don't know, but I am the best Because, I have another life I know! I know. It's such a beautiful day! It's true. Well... Yeah. That's not true, honey. Jacob. Jacob. What's your name? Berger, my man. Berger, darling Berger, Thomas. My mother used to hum that to me.\nWe use the VQA dataset (Antol et al., 2015) to train a model similar to image captioning architec tures. Instead of captions, the training set now consists of 3 questions per image. Similar to previous results, using beam search to sample outputs results in similarly worded questions (see Fig. 3) and DBS brings out new details captured by the model. Counting the number of types of questions gen- erated (as defined by Antol et al. (2015)) allows us to measure this diversity. We observe that the number of question types generated per image increases from 2.3 for BS to 3.7 for DBS (at B = 6).\nBeam search is widely a used approximate inference algorithm for decoding sequences from neural sequence models; however, it suffers from a lack of diversity. Producing multiple highly similar and generic outputs is not only wasteful in terms of computation but also detrimental for tasks with\nFigure 3: Qualitative results on Visual Question Generation. DBS generates questions that are non-generic anc belong to different question types.\ninherent ambiguity like many involving language. In this work, we modify Beam Search with a diversity-augmented sequence decoding objective to produce Diverse Beam Search. We develop a 'doubly greedy' approximate algorithm to minimize this objective and produce diverse sequence decodings. Our method consistently outperforms beam search and other baselines across all our experiments without extra computation or task-specific overhead. DBS is task-agnostic and can be applied to any case where BS is used, which we demonstrate in multiple domains. Our implementa- tion available at https://github.com/ashwinkalyan/dbs."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. 
Proceedings of the International Conference on Learning Repre sentations (ICLR), 2014. 1\nJenny Rose Finkel, Christopher D Manning, and Andrew Y Ng. Solving the problem of cascading errors: Approximate bayesian inference for linguistic annotation pipelines. In Proceedings of. the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 618-626 2006. 1 K. Gimpel, D. Batra, C. Dyer, and G. Shakhnarovich. A systematic exploration of diversity in ma. chine translation. In Proceedings of the Conference on Empirical Methods in Natural Language. Processing (EMNLP), 2013. 1, 5, 12\nAlex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deep recurrent neural networks. abs/1303.5778. 2013. 1\nRadu Tudor Ionescu, Bogdan Alexe, Marius Leordeanu, Marius Popescu, Dim Papadopoulos, and Vittorio Ferrari. How hard can it be? Estimating the difficulty of visual search in an image. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016. 7\nJiwei Li and Dan Jurafsky. Mutual information and diverse decoding improve neural machine trans lation. arXiv preprint arXiv:1601.00372, 2016. 2, 5, 6, 7, 8, 13, 14\nTsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr. Dollar, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context, 2014. 7\nNasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Van. derwende. Generating natural questions about an image. Proceedings of the Annual Meeting on Association for Computational Linguistics (ACL), 2016. 8\nAdarsh Prasad, Stefanie Jegelka, and Dhruv Batra. Submodular meets structured: Finding diverse subsets in exponentially-large structured item sets. In Advances in Neural Information Processing. Systems (NIPS), 2014. 2, 4\nSubhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell and Kate Saenko. Sequence to sequence-video to text. In Proceedings of IEEE Conference or Computer Vision and Pattern Recognition (CVPR), pp. 4534-4542, 2015. 1\nMinh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention based neural machine translation. arXiv preprint arXiv:1508.04025, 2015. 7\nKishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the Annual Meeting on Association for Computational Linguistics (ACL), 2002. 6\n1, 8 Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neura image caption generator. In Proceedings of IEEE Conference on Computer Vision and Patter Recognition (CVPR), 2015. 1, 2, 7\nYonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey. Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine trans- lation system: Bridging the gap between human and machine translation. arXiv preprint. arXiv:1609.08144, 2016. 5, 6, 7, 8, 13, 14"}, {"section_index": "11", "section_name": "SENSIVITY STUDIES", "section_text": "Number of Groups. Fig. 4 presents snapshots of the transition from BS to DBS at B = 6 and G = {1, 3, 6}. 
As beam width moves from 1 to G, the exploration of the method increases resulting in more diverse lists.\nB = 1 B = 3 B= 6 A small bird is standing on a rock A small bird is standing on a rock A small bird sitting on a rock A small bird sitting on a rock A small bird sitting on a rock A bird is standing on a rock in the sand A small bird sitting on top of a rock A small bird is standing on a rock in the field. A small bird is standing on a rock. A small bird standing on a rock A small bird is standing on a rock in the sand. A small bird sitting on a rock in a field A small bird is standing on the ground A white and black bird standing on a rock. A white and black bird standing on a rock A small bird sitting on top of a tree branch A white and black bird is standing on a rock. A yellow and black bird sitting on a rock.\nFigure 4: Effect of increasing the number of groups G. The beams that belong to the same group are colore similarly. Recall that diversity is only enforced across groups such that G = 1 corresponds to classical BS\nDiversity Strength. As noted in Section 5.1, our method is robust to a wide range of values of the diversity strength (). Fig. 5a shows a grid search of X for image-captioning on the PASCAL-50S. dataset.\nChoice of Diversity Function. The diversity function can take various forms ranging from sim ple hamming diversity to neural embedding based diversity. We discuss some forms for language. modelling below:\nHamming Diversity. This form penalizes the selection of tokens used in previous groups. proportional to the number of times it was selected before.. Cumulative Diversity. Once two sequences have diverged sufficiently, it seems unnecessary anc perhaps harmful to restrict that they cannot use the same words at the same time. To encode. this 'backing-off' of the diversity penalty we introduce cumulative diversity which keeps a. count of identical words used at every time step, indicative of overall dissimilarity. Specifically. (Ytj)[yjtj] = exp{(ret bea' I[yb,ry, l)/r} where T is a temperature parameter control- ling the strength of the cumulative diversity term and I [] is the indicator function.. n-gram Diversity. The current group is penalized for producing the same n-grams as previous. groups, regardless of alignment in time - similar to Gimpel et al. (2013). This is proportional tc. the number of times each n-gram in a candidate occurred in previous groups. Unlike hamming. diversity, n-grams capture higher order structures in the sequences.. Neural-embedding Diversity. While all the previous diversity functions discussed above perform. exact matches, neural embeddings such as word2vec (Mikolov et al., 2013) can penalize semanti. cally similar words like synonyms. This is incorporated in each of the previous diversity functions by replacing the hamming similarity with a soft version obtained by computing the cosine simi. larity between word2vec representations. When using with n-gram diversity, the representation of. the n-gram is obtained by summing the vectors of the constituent words..\nEach of these various forms encode different notions of diversity. Hamming diversity ensures dif- ferent words are used at different times, but can be circumvented by small changes in sequence alignment. While n-gram diversity captures higher order statistics, it ignores sentence alignment. 
Neural-embedding based encodings can be seen as a semantic blurring of either the hamming or n-gram metrics, with word2vec representation similarity propagating diversity penalties not only to exact matches but also to close synonyms. Fig. 5b shows the oracle performace of various forms of the diversity function described in Section 5.1. We find that using any of the above functions help outperform BS in the tasks we examine; hamming diversity achieves the best oracle performance despite its simplicity."}, {"section_index": "12", "section_name": "IMAGE CAPTIONING EVALUATION", "section_text": "While we report oracle SPICE values in the paper, our method consistently outperforms base-. lines and classica1 BS on other standard metrics such as CIDEr (Table 4), METEOR (Table 5) and ROUGE (Table 6). We provide these additional results in this section..\nlabmda grid search Oracle CiDEr vs. Number of solutions 1.00 1.2 0.95 1.1 0.90 1.0 0.85 0.80 0.9 oeeeee 0.75 0.8 0.70 DBS Cumulative Diversity 0.7 DBS N-gram Diversity (N = 2) 0.65 B = 1 B = 4 DBS Word2VecDiversity B = 2 B = 6 0.6 DBS 0.60 BS 0.55 0.5 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 0 5 10 15 20 lambda k (a) Grid search of diversity strength parameter (b) Effect of multiple : for the diversity function\nTable 4: CIDEr Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B = 20\nDataset Method Oracle Accuracy (CIDEr) @1 @5 @10 @20 Beam Search 53.79 83.94 96.70 107.63 Li & Jurafsky (2016) 54.61 85.21 99.80 110.64 PASCAL-50S DBS 57.82 89.38 103.75 113.43 Wu et al. (2016) 47.77 72.12 84.64 105.66 Li et al. (2015) 49.80 81.35 96.87 107.37 Beam Search 87.27 121.74 133.46 140.98 Li & Jurafsky (2016) 91.42 111.33 116.94 119.14 COCO DBS 86.88 123.38 135.68 142.88 Wu et al. (2016) 87.54 122.06 133.21 139.43 Li et al. (2015) 88.18 124.20 138.65 150.06\nTable 5: METEOR Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B = 20\nDataset Method Oracle Accuracy (METEOR) @1 @5 @10 @20 Beam Search 12.24 16.74 19.14 21.22 Li & Jurafsky (2016) 13.52 17.65 19.91 21.76 PASCAL-50S DBS 13.71 18.45 20.67 22.83 Wu et al. (2016) 13.34 17.20 18.98 21.13 Li et al. (2015) 13.04 17.92 19.73 22.32 Beam Search 24.81 28.56 30.59 31.87 Li & Jurafsky (2016) 24.88 29.10 31.44 33.56 COCO DBS 25.04 29.67 33.25 35.42 Wu et al. (2016) 24.82 28.92 31.53 34.14 Li et al. (2015) 24.93 30.11 32.34 34.88\nModified SPiCE evaluation. To measure both the quality and the diversity of the generated cap tions, we compute SPICE-score by comparing the graph union of all the generated hypotheses with the ground truth scene graph. This measure rewards all the relevant relations decoded as against ora- cle accuracy that compares to relevant relations present only in the top-scoring caption. We observe that DBS outperforms both baselines under this measure with a score of 18.345 as against a score of 16.988 (beam search) and 17.452 (Li & Jurafsky, 2016).\nFigure 5: Fig. 5a shows the results of a grid search of the diversity strength () parameter of DBS on the validation split of PAsCAL 50S dataset. We observe that it is robust for a wide range of values. Fig. 5b compares the performance of multiple forms for the diversity function (). 
While naive diversity performs the best, other forms are comparable while being better than BS\nTable 6: ROUGE Oracle accuracy on COCO and PASCAL-50S datasets for image captioning at B = 20"}, {"section_index": "13", "section_name": "HUMAN STUDIES", "section_text": "For image-captioning, we conduct a human preference study between BS and DBS captions a explained in Section 5. A screen shot of the interface used to collect human preferences for caption generated using DBS and BS is presented in Fig. 6. The lists were shuffled to guard the task fron being gamed by a turker.\nTable 7: Frequency table for image difficulty and human preference for DBS captions on PASCAL50S datase\ndifficulty score # images % images DBS bin range was preffered - 481 50.51% [-0,+o] 409 69.92% +o 110 83.63%\nAs mentioned in Section 5, we observe that difficulty score of an image and human preference fo DBS captions are positively correlated. The dataset contains more images that are less difficult and so, we analyze the correlation by dividing the data into three bins. For each bin, we report th % of images for which DBS captions were preferred after a majority vote (i.e. at least 3/5 turker voted in favor of DBS) in Table 7. At low difficulty scores consisting mostly of iconic images - on might expect that BS would be preferred more often than chance. However, mismatch between th statistics of the training and testing data results in a better performance of DBS. Some examples fo this case are provided in Fig. 7. More general qualitative examples are provided in Fig. 8."}, {"section_index": "14", "section_name": "DISCUSSION", "section_text": "Are longer sentences better? Many recent works propose a scoring or a ranking objective that depends on the sequence length. These favor longer sequences, reasoning that they tend to have more details and resulting in improved accuracies. We measure the correlation between length of a sequence and its accuracy (here, SPICE) and observe insignificant correlation between SPICE and sequence length. On the PASCAL-50S dataset, we find that BS and DBS have are negatively correlated (p = 0.003 and p = 0.015 respectively), while (Li & Jurafsky, 2016) is correlated positively (p = 0.002). Length is not correlated with performance in this case.\nEfficient utilization of beam budget. In this experiment, we emperically show that DBS makes. efficient use of the beam budget in exploring the search space for better solutions. Fig. 9 shows the. variation of oracle SPICE (@B) with the beam size. At really high beam widths. all decoding tech niques achieve similar oracle accuracies. However, diverse decoding techniques like DBS achieve. the same oracle at much lower beam widths. Hence, DBS not only produces sequence lists that are. significantly different but also efficiently utilizes the beam budget to decode better solutions.\nDataset Method Oracle Accuracy (ROUGE-L) @1 @5 @10 @20 Beam Search 45.23 56.12 59.61 62.04 Li & Jurafsky (2016) 46.21 56.17 60.15 62.95 PASCAL-50S DBS 46.24 56.90 60.35 63.02 Wu et al. (2016) 43.73 52.29 56.49 61.65 Li et al. (2015) 44.12 54.67 57.34 60.11 Beam Search 52.46 58.43 62.56 65.14 Li & Jurafsky (2016) 52.87 59.89 63.45 65.42 COCO DBS 53.04 60.89 64.24 67.72 Wu et al. (2016) 52.13 58.26 62.89 65.77 Li et al. (2015) 53.10 59.32 63.04 66.19\nWhich of the two robots understands the image better?\nTwo robots are shown an image. They both make 5 guesses each for describing the image with single sentence.\nsingle sentence. 
Which robot do you think is more intelligent or human-like displaying a better understanding of the image? Note: Select the radio button above the set of captions that you pick. O a chair and a chair in a room a chair and a chair in a room a couch and chair in a room with a window a living room with a couch and a table a room with a bed and a chair and a table a living room with a couch and a chair a chair sitting in a room with a chair and a chair a living room with a chair and a chair an empty chair with a red chair in the comer a chair and a chair in a room with a window a bed with a red chair and a table with a laptop on it a chair and a chair in a room with a table\nFigure 6: Screen-shot of the interface used to perform human studies\nO O a chair and a chair in a room a chair and a chair in a room a couch and chair in a room with a window a living room with a couch and a table a room with a bed and a chair and a table a living room with a couch and a chair a chair sitting in a room with a chair and a chair a living room with a chair and a chair an empty chair with a red chair in the comer a chair and a chair in a room with a window a bed with a red chair and a table with a laptop on it a chair and a chair in a room with a table\nBeamSearch A man riding a motorcycle on a dirt road A man riding a motorcycle on a beach A man riding a motorcycle on the side of a road. A man riding a bike on a dirt road A man riding a motorcycle on the side of the road A man riding a motorcycle on the side of a beach Diverse Beam Search A man riding a motorcycle on a beach. A man riding a bike on a dirt road A man riding a bike on a dirt road A man on a motorcycle is flying a kite A person on a skateboard riding on the side of a road A person on a bicycle with a helmet on on the ground Difficulty Score : 2.8308 Beam Search A black bear standing in a grassy field A black bear standing in a field of grass A black bear is standing in the grass A black bear is standing in a field A black bear standing in the grass next to a tree A black bear standing in the grass near a fence Diverse Beam Search A black dog is standing in the grass A black dog is standing in the grass A black bear walking through a grassy field A black bear walking in a field of grass. A black and white dog is standing in the grass A black bear standing in the grass near a fence Difficulty Score : 2.9287 Beam Search A close up of a bowl of broccoli A close up of a plate of broccoli A close up of a broccoli plant on a table A close up of a bowl of broccoli on a table A close up of a broccoli plant in a garden A close up of a plate of broccoli and cauliflower Diverse Beam Search A close up of a bowl of broccoli A close up of a plate of broccoli and broccoli A green plant with a green plant in it A green plant with a bunch of green leaves A white plate topped with broccoli and a plant A small green plant with a green plant in it Difficulty Score: 2.8999\nFigure 7: For images with low difficulty score, BS captions are preferred to DBS as show in the first figure. However, we observe that DBS captions perform better when there is a mismatch between the statistics of the testing and training sets. Interesting captions are colored in blue for readability..\nFigure 8: For images with a high difficulty score, captions produced by DBS are preferred to BS. Interesting captions are colored in blue for readability..\nBeam Search A group of people sitting at a table with laptops A group of people sitting at a table. 
A couple of people that are sitting at a table A group of people sitting around a table with laptops A group of people sitting at a table in front of laptops A group of people sitting at a table with a laptop Diverse Beam Search A group of people sitting at a table with laptops A group of people sitting at a table with laptops A group of people sitting around a table with laptops A group of people are sitting at a table Two people sitting at a table with laptops Three people are sitting at a table with laptops. Difficulty Score : 5.4382 Beam Search A woman sitting in front of a laptop computer. A woman sitting at a table with a laptop. A woman sitting at a table with a laptop computer. A woman is working on a laptop computer A woman sitting at a desk with a laptop computer A woman is sitting at a table with a laptop Diverse Beam Search A woman sitting at a table with a laptop computer A woman is working on a laptop computer A woman is sitting at a table with a laptop A man sitting at a desk with a laptop computer A woman in a kitchen with a laptop computer A man is sitting at a table with a laptop and a computer Difficulty Score : 4.1815 Beam Search A wooden table topped with plates of food. A table with plates of food on it A wooden table topped with plates and bowls of food A table that has a bunch of plates on it A wooden table topped with plates of food and glasses A wooden table topped with plates of food and cups Diverse Beam Search A table with a plate of food and a glass of wine. A table with a plate of food and a glass. A table with plates of food and a glass of wine A dining table with a plate of food and a glass of wine A table with a bowl of food and a bowl of soup on it A dining room table with a plate of food and a glass of DifficultyScore:3.8146 wine on it\n0.110 0.26 0.25 0.105 0.24 0.100 0.23 0.095 0.22 90 0.090 0.21 0.20 0.085 DBS, lambda=0.5 DBS, lambda=0.5 DBS, lambda=0.9 0.19 DBS, lambda=0.9 BS BS 0.080 L&J16, lambda=0.6 0.18 L&J16, lambda=0.6 L&J16, lambda=1.2 L&J16, lambda=1.2 0.075. 0.17 10 20 30 40 50 60 70 80 90 100 10 20 30 40 50 60 70 80 90 100 number of beams (B) number of beams (B) (a) Oracle SPICE (@B) vs B (b) Oracle METEOR (@B) vs B\nFigure 9: As the number of beams increases, all decoding methods tend to achieve about the same oracle accuracy. However, diverse decoding techniques like DBS utilize the beam budget efficiently achieving highei oracle accuracies at much lower beam budgets."}] |
HyM25Mqel | [{"section_index": "0", "section_name": "SAMPLE EFFICIENT ACTOR-CRITIC WITH EXPERIENCE REPLAY", "section_text": "Ziyu Wang\nVictor Bapst\nziyu@google.com\nvbapst@google.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Realistic simulated environments, where agents can be trained to learn a large repertoire of cognitive skills, are at the core of recent breakthroughs in AI (Bellemare et al.]2013) Mnih et al.]2015 Schulman et al.]2015a] Narasimhan et al.]2015] Mnih et al.]2016] Brockman et al.]2016] Oh et al.2016). With richer realistic environments, the capabilities of our agents have increased and improved. Unfortunately, these advances have been accompanied by a substantial increase in the cost of simulation. In particular, every time an agent acts upon the environment, an expensive simulation step is conducted. Thus to reduce the cost of simulation, we need to reduce the number of simulation steps (i.e. samples of the environment). This need for sample efficiency is even more compelling when agents are deployed in the real world.\nExperience replay (Lin] 1992) has gained popularity in deep Q-learning (Mnih et al.]2015] Schaul et al.2016] [Wang et al.[2016f [Narasimhan et al.[2015], where it is often motivated as a technique for reducing sample correlation. Replay is actually a valuable tool for improving sample efficiency and, as we will see in our experiments, state-of-the-art deep Q-learning methods (Schaul et al.]2016. Wang et al.|2016) have been up to this point the most sample efficient techniques on Atari by a. significant margin. However, we need to do better than deep Q-learning, because it has two important limitations. First, the deterministic nature of the optimal policy limits its use in adversarial domains. Second, finding the greedy action with respect to the Q function is costly for large action spaces..\nPolicy gradient methods have been at the heart of significant advances in AI and robotics (Silver et al. 2014 Lillicrap et al. 2015 Silver et al.|2 2016 Lev1ne et al. 2015 Mnih et al.|2016 ISchulman et al. 2015af Heess et al.2015). Many of these methods are restricted to continuous domains or to very specific tasks such as playing Go. The existing variants applicable to both continuous and discrete domains, such as the on-policy asynchronous advantage actor critic (A3C) of|Mnih et al.(2016), are sample inefficient.\nThe design of stable, sample efficient actor critic methods that apply to both continuous and discrete action spaces has been a long-standing hurdle of reinforcement learning (RL). We believe this paper\nNicolas Heess\nkorayk@google.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "This paper presents an actor-critic deep reinforcement learning agent with ex. perience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several inno vations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.\nis the first to address this challenge successfully at scale. More specifically, we introduce an actor. critic with experience replay (ACER) that nearly matches the state-of-the-art performance of deep. 
Q-networks with prioritized replay on Atari, and substantially outperforms A3C in terms of sample efficiency on both Atari and continuous control domains..\nACER capitalizes on recent advances in deep neural networks, variance reduction techniques, the off-policy Retrace algorithm (Munos et al.]2016) and parallel training of RL agents (Mnih et al. 2016). Yet, crucially, its success hinges on innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling network architectures, and efficient trust region policy optimization.\nOn the theoretical front, the paper proves that the Retrace operator can be rewritten from our proposed truncated importance sampling with bias correction technique.\nHere, the expectations are with respect to the observed environment states xt and the actions generated by the policy r, where xt+1:oo denotes a state trajectory starting at time t + 1..\nWe also need to define the advantage function A (xt, at) = Q\" (xt, at) - V\" (xt), which provides a relative measure of value of each action since Ea.[A (xt, at)] = 0..\nA\"(xt,at)Vg l0g ne(at|xt) t>0\nk-1 a3c (xt Ve log ne(at|xt) y- tk t>0 i=0\nA3C combines both k-step returns and function approximation to trade-off variance and bias. We may think of V (xt) as a policy gradient baseline used to reduce variance\nIn the following section, we will introduce the discrete-action version of ACER. ACER may be understood as the off-policy counterpart of the A3C method of|Mnih et al.[(2016). As such, ACER builds on all the engineering innovations of A3C, including efficient parallel CPU computation\nConsider an agent interacting with its environment over discrete time steps. At time step t, the agent. observes the nx-dimensional state vector xt E C Rnx, chooses an action at according to a policy. (a[xt) and observes a reward signal rt E R produced by the environment. We will consider discrete actions at E {1, 2, ..., Na} in Sections|3and4] and continuous actions at E A _ Rna in Section|5\nThe goal of the agent is to maximize the discounted return R, =. i>o Y'rt+i in expectation. The discount factor y E [0, 1) trades-off the importance of immediate and future rewards. For an agent. following policy , we use the standard definitions of the state-action and state only value functions.\n(xt, at [Rt|xt,At] and V\"(xt) = Eat [Q(xt,at)|xt] t+1:00,At+1:x\nThe parameters 0 of the differentiable policy e(at[xt) can be updated using the discounted approxi mation to the policy gradient (Sutton et al.2 2000], which borrowing notation from Schulman et al. 2015b), is defined as:\nFollowing Proposition 1 ofSchulman et al.(2015b), we can replace A (xt, at) in the above expression with the state-action value Q* (xt, at), the discounted return Rt, or the temporal difference residual. rt + yV(xt+1) - V(xt), without introducing bias. These choices will however have different variance. Moreover, in practice we will approximate these quantities with neural networks thus introducing additional approximation errors and biases. Typically, the policy gradient estimator using Rt will have higher variance and lower bias whereas the estimators using function approximation will have higher bias and lower variance. Combining Rt with the current value function approximation to minimize bias while maintaining bounded variance is one of the central design principles behind. 
ACER.\nTo trade-off bias and variance, the asynchronous advantage actor critic (A3C) of|Mnih et al.|(2016) uses a single trajectory sample to obtain the following gradient approximation:.\nACER uses a single deep neural network to estimate the policy e(at|xt) and the value function. V (xt). (For clarity and generality, we are using two different symbols to denote the parameters of. the policy and value function, 0 and O,, but most of these parameters are shared in the single neura. network.) Our neural networks, though building on the networks used in A3C, will introduce severa. modifications and new modules."}, {"section_index": "3", "section_name": "DISCRETE ACTOR CRITIC WITH EXPERIENCE REPLAY", "section_text": "(at|xt) where Pt denotes the importance weight. This estimator is unbiased, but it suffers from. (atxt) very high variance as it involves a product of many potentially unbounded importance weights. To. prevent the product of importance weights from exploding, Wawrzynski|(2009) truncates this product Truncated importance sampling over entire trajectories, although bounded in variance, could suffer. from significant bias.\nTwo important facts about equation (4) must be highlighted. First, note that it depends on Q\" and not on Q, consequently we must be able to estimate Q. Second, we no longer have a product of importance weights, but instead only need to estimate the marginal importance weight pt. Importance sampling in this lower dimensional space (over marginals as opposed to trajectories) is expected to exhibit lower variance.\nIn the following subsection, we adopt the Retrace algorithm ofMunos et al.(2016) to estimate Q\" Subsequently, we propose an importance weight truncation technique to improve the stability of the off-policy actor critic ofDegris et al.(2012), and introduce a computationally efficient trust region scheme for policy optimization. The formulation of ACER for continuous action spaces will require further innovations that are advanced in Section|5\nIn this paper, we estimate Q* (xt, at) using Retrace (Munos et al.2016). (We also experimented with the related tree backup method of|Precup et al.[(200o) but found Retrace to perform better in practice.) Given a trajectory generated under the behavior policy , the Retrace estimator can be expressed recursively as follows\nt(xt,at) = rt + YPt+1[Qret(xt+1,At+1) - Q(xt+1,At+1)]+ yV(xt+1)\n'For ease of presentation, we consider only X = 1 for Retrace\nOff-policy learning with experience replay may appear to be an obvious strategy for improving. the sample efficiency of actor-critics. However, controlling the variance and stability of off-policy estimators is notoriously hard. Importance sampling is one of the most popular approaches for off-. policy learning (Meuleau et al.[2000) Jie & Abbeel| 2010, Levine & Koltun]2013). In our context, it proceeds as follows. Suppose we retrieve a trajectory {xo, ao, ro, (|xo), ... , xk, ak, rk, (|xk)} where the actions have been sampled according to the behavior policy , from our memory of. experiences. Then, the importance weighted policy gradient is given by:.\nk k k imp Pt Ve log ne(at|xt) t=0 t=0 i=0\nRecently, Degris et al.(2012) attacked this problem by using marginal value functions over the limiting distribution of the process to yield the following approximation of the gradient:.\nDegris et al.(2012) estimate Q\" in equation (4) using lambda returns: R = rt + (1 - X)yV(xt+1) + Aypt+1Rt+1. 
This estimator requires that we know how to choose X ahead of time to trade off bias and variance. Moreover, when using small values of to reduce variance, occasional large importance weights can still cause instability.\nTo approximate the policy gradient gmarg, ACER uses Qret to estimate Q. As Retrace uses multi-. step returns, it can significantly reduce bias in the estimation of the policy gradient2"}, {"section_index": "4", "section_name": "3.2 IMPORTANCE WEIGHT TRUNCATION WITH BIAS CORRECTION", "section_text": "The clipping of the importance weight in the first term of equation (7) ensures that the variance of. the gradient estimate is bounded. The correction term (second term in equation (7)) ensures that our. estimate is unbiased. Note that the correction term is only active for actions such that pt(a) > c. In particular, if we choose a large value for c, the correction term only comes into effect when the variance of the original off-policy estimator of equation (4) is very high. When this happens, our. decomposition has the nice property that the truncated weight in the first term is at most c while the.\nWe model Q (xt, a) in the correction term with our neural network approximation Qe. (xt, at). This modification results in what we call the truncation with bias correction trick, in this case applied to the function Ve log e(atxt)Q*(xt, at):\nqmarg =ExEat ptVglog re(at|xt)Qret(xt,at)]+E 9lOg nTe 1~ 0tQ\nEquation (8) involves an expectation over the stationary distribution of the Markov process. We can however approximate it by sampling trajectories { xo, ao. Xk,ak,rk, u(:xk)}\nQ is the current p(at|xt) ' value estimate of Q\", and V(x) = Ea~Q(x, a). Retrace is an off-policy, return-based algorithm which has low variance and is proven to converge (in the tabular case) to the value function of the target policy for any behavior policy, seeMunos et al.(2016).\nThe recursive Retrace equation depends on the estimate Q. To compute it, in discrete action spaces we adopt a convolutional neural network with \"two heads\"' that outputs the estimate Qe., (xt, at), as well as the policy e(at[xt). This neural representation is the same as in (Mnih et al.]2016), with the exception that we output the vector Qe.. (xt, at) instead of the scalar Ve..(xt). The estimate Ve.. (xt can be easily derived by taking the expectation of Qe, under ne.\nTo learn the critic Qe., (xt, at), we again use Qret(xt, at) as a target in a mean squared error loss and update its parameters 0, with the following standard gradient:.\nQret(xt,at) - Qo,(xt,at))Vo,Qor(xt,at))\nBecause Retrace is return-based, it also enables faster learning of the critic. Thus the purpose of the multi-step estimator Qret in our setting is twofold: to reduce bias in the policy gradient, and to enable faster learning of the critic, hence further reducing bias.\nThe marginal importance weights in Equation (4) can become large, thus causing instability. To safe-guard against high variance, we propose to truncate the importance weights and introduce a correction term via the following decomposition of gmarg:.\narg =E PtVgl0g ne(at|xt)Q\"(xt,at =ExtEat[PtVglog ne(at|xt)Q\"(xt,at)]+E Velog ne(a|xt (X+ Pt(a)\ngenerated from the behavior policy . Here the terms (.[xt) are the policy vectors. 
Given thest trajectories, we can compute the off-policy ACER gradient:.\nacer 9t pt Ve log e(at|xt)|Q* (xt,at)- Ve,(xt) + E Ve logne(a|xt)[Qe, (xt, a) - Ve, (xt) a~T\nIn the above expression, we have subtracted the classical baseline Ve, (xt) to reduce variance\nIt is interesting to note that, when c = oo, (9) recovers (off-policy) policy gradient up to the use of Retrace. When c = 0, (9) recovers an actor critic update that depends entirely on Q estimates In the continuous control domain, (9) also generalizes Stochastic Value Gradients if c = 0 and the reparametrization trick is used to estimate its second term (Heess et al.2015)\nThe policy updates of actor-critic methods do often exhibit high variance. Hence, to ensure stability we must limit the per-step changes to the policy. Simply using smaller learning rates is insufficient. as they cannot guard against the occasional large updates while maintaining a desired learning. speed. Trust Region Policy Optimization (TRPO) (Schulman et al.]2015a) provides a more adequate solution.\nIn this section we introduce a new trust region policy optimization method that scales well to large. problems. Instead of constraining the updated policy to be close to the current policy (as in TRPO) we propose to maintain an average policy network that represents a running average of past policies and forces the updated policy to not deviate far from this average..\nWe decompose our policy network in two parts: a distribution f, and a deep neural network that gen erates the statistics $e(x) of this distribution. That is, given f, the policy is completely characterized by the network e: (.[x) = f(.[e(x)). For example, in the discrete domain, we choose f to be the categorical distribution with a probability vector e(x) as its statistics. The probability vector is of course parameterised by 0.\nWe denote the average policy network as $e. and update its parameters 0a \"softly\"' after each update to the policy parameter 0: 0a a0a + (1 a)0.\nConsider, for example, the ACER policy gradient as defined in Equation (9), but with respect to\n1 minimize acer qt Z subject to )DkLlf($e.(xt)llf($e(xt)] Iz<\nThis transformation of the gradient has a very natural form. If the constraint is satisfied, there is no. change to the gradient with respect to e(xt). Otherwise, the update is scaled down in the direction\nSchulman et al.(2015a) approximately limit the difference between the updated policy and the current policy to ensure safety. Despite the effectiveness of their TRPO method, it requires repeated computation of Fisher-vector products for each update. This can prove to be prohibitively expensive in large domains.\nacer PtV$e(xt) log f(at|$e(x))[Qret(xt,at) - Vo,(xt)] 9t + E 7e(xt) log f(at|$o(x))[Qe,(xt,a) - V6 a~TT Ot0\nGiven the averaged policy network, our proposed trust region update involves two stages. In the first stage, we solve the following optimization problem with a linearized KL divergence constraint:.\n.acer minimize Z subject to pe(xt)DKL[f(|$ea(xt))|lf(|$e(xt))] z< 8\nSince the constraint is linear, the overall optimization problem reduces to a simple quadratic program ming problem, the solution of which can be easily derived in closed form using the KKT conditions.. Letting k = V$e(xt)DKL[f(|ea(xt)||f(|$e(xt)], the solution is:\nkT sacer 9t cacer 9t max k <\n1.8 1.6 1.5 1.4 (uewnn u!) ue!pnn (uewnh 12 1.0 u!) 
Medaan 1 on-policy + 0 replay (A3C) 1 on-policy + 1 replay (ACER) 0.6 1 on-policy + 4 replay (ACER) 0.4 1 on-policy + 8 replay (ACER) DQN 0.2 Prioritized Replay 0.0 0.0 100 200 300 400 500 600 700 800 900 20 40 60 80 100 120 Million Steps Hours\nFigure 1: ACER improvements in sample (LEFT) and computation (R1GHT) complexity on Atari On each plot, the median of the human-normalized score across all 57 Atari games is presented for ratios of replay with O replay corresponding to on-policy A3C. The colored solid and dashed line. represent ACER with and without trust region updating respectively. The environment steps are. counted over all threads. The gray curve is the original DQN agent (Mnih et al.2015) and the black curve is one of the Prioritized Double DQN agents fromSchaul et al.(2016).\nof k, thus effectively lowering rate of change between the activations of the current policy and th average policy network\nIn the second stage, we take advantage of back-propagation. Specifically, the updated gradient with respect to $e, that is z*, is back-propagated through the network to compute the derivatives with respect to the parameters. The parameter updates for the policy network follow from the chain rule d$e(x) *\nThe trust region step is carried out in the space of the statistics of the distribution f, and not in the space of the policy parameters. This is done deliberately so as to avoid an additional back-propagation step through the policy network\nWe would like to remark that the algorithm advanced in this section can be thought of as a general. strategy for modifying the backward messages in back-propagation so as to stabilize the activations\nInstead of a trust region update, one could alternatively add an appropriately scaled KL cost to th objective function as proposed by|Heess et al.(2015). This approach, however, is less robust to th choice of hyper-parameters in our experience\nThe ACER algorithm results from a combination of the above ideas, with the precise pseudo-code. appearing in Appendix [A] A master algorithm (Algorithm [1) calls ACER on-policy to perform updates and propose trajectories. It then calls ACER off-policy component to conduct several replay. steps. When on-policy, ACER effectively becomes a modified version of A3C where Q instead of V baselines are employed and trust region optimization is used."}, {"section_index": "5", "section_name": "4 RESULTS ON ATARI", "section_text": "We use the Arcade Learning Environment of|Bellemare et al.(2013) to conduct an extensive evaluation. We deploy one single algorithm and network architecture, with fixed hyper-parameters, to learn. to play 57 Atari games given only raw pixel observations and game rewards. This task is highly. demanding because of the diversity of games, and high-dimensional pixel-level observations.\nOur experimental setup uses 16 actor-learner threads running on a single machine with no GPUs. We. adopt the same input pre-processing and network architecture as Mnih et al.(2015). Specifically. the network consists of a convolutional layer with 32 8 8 filters with stride 4 followed by another. convolutional layer with 64 4 4 filters with stride 2, followed by a final convolutional layer with 64. 3 3 filters with stride 1, followed by a fully-connected layer of size 512. Each of the hidden layers. is followed by a rectifier nonlinearity. The network outputs a softmax policy and Q values\nWhen using replay, we add to each thread a replay memory that is up to 50 000 frames in size. 
The total amount of memory used across all threads is thus similar in size to that of DQN (Mnih et al., 2015). For all Atari experiments, we use a single learning rate adopted from an earlier implementation of A3C without further tuning. We do not anneal the learning rates over the course of training as in Mnih et al. (2016). We otherwise adopt the same optimization procedure as in Mnih et al. (2016). Specifically, we adopt entropy regularization with weight 0.001, discount the rewards with γ = 0.99, and perform updates every 20 steps (k = 20 in the notation of Section 2). In all our experiments with experience replay, we use importance weight truncation with c = 10. We consider training ACER both with and without trust region updating as described in Section 3.3. When trust region updating is used, we use δ = 1 and α = 0.99 for all experiments.

To compare different agents, we adopt as our metric the median of the human normalized score over all 57 games. The normalization is calculated such that, for each game, human scores and random scores are evaluated to 1 and 0, respectively. The normalized score for a given game at time t is computed as the average normalized score over the past 1 million consecutive frames encountered until time t. For each agent, we plot its cumulative maximum median score over time. The result is summarized in Figure 1.

The four colors in Figure 1 correspond to four replay ratios (0, 1, 4 and 8), with a ratio of 4 meaning that we use the off-policy component of ACER 4 times after using the on-policy component (A3C). That is, a replay ratio of 0 means that we are using A3C. The solid and dashed lines represent ACER with and without trust region updating respectively. The gray and black curves are the original DQN (Mnih et al., 2015) and Prioritized Replay agent of Schaul et al. (2016) agents respectively.

As shown on the left panel of Figure 1, replay significantly increases data efficiency. We observe that when using the trust region optimizer, the average reward as a function of the number of environmental steps increases with the ratio of replay. This increase has diminishing returns, but with enough replay ACER can match the performance of the best DQN agents. Moreover, it is clear that the off-policy actor critics (ACER) are much more sample efficient than their on-policy counterpart (A3C).

The right panel of Figure 1 shows that ACER agents perform similarly to A3C when measured by wall clock time. Thus, in this case, it is possible to achieve better data-efficiency without necessarily compromising on computation time. In particular, ACER with a replay ratio of 4 is an appealing alternative to either the prioritized DQN agent or A3C.

Retrace requires estimates of both Q and V, but we cannot easily integrate over Q to derive V in continuous action spaces. In this section, we propose a solution to this problem in the form of a novel representation for RL, as well as modifications necessary for trust region updating.

{"section_index": "6", "section_name": "5.1 POLICY EVALUATION"}

Retrace provides a target for learning Q_{θ_v}, but not for learning V_{θ_v}. We could use importance sampling to compute V_{θ_v} given Q_{θ_v}, but this estimator has high variance.

We propose a new architecture which we call Stochastic Dueling Networks (SDNs), inspired by the Dueling networks of Wang et al. (2016), which is designed to estimate both V^π and Q^π off-policy while maintaining consistency between the two estimates.
At each time step, an SDN outputs a stochastic estimate \tilde{Q}_{\theta_v} of Q^π and a deterministic estimate V_{θ_v} of V^π, such that

\tilde{Q}_{\theta_v}(x_t, a_t) \sim V_{\theta_v}(x_t) + A_{\theta_v}(x_t, a_t) - \tfrac{1}{n} \sum_{i=1}^{n} A_{\theta_v}(x_t, u_i), \quad u_i \sim \pi_\theta(\cdot|x_t)    (13)

Figure 2: A schematic of the Stochastic Dueling Network. In the drawing, [u_1, ..., u_n] are assumed to be samples from π_θ(·|x_t). This schematic illustrates the concept of SDNs but does not reflect the real sizes of the networks used.

In addition to SDNs, however, we also construct the following novel target for estimating V^π:

V^{target}(x_t) = \min\{1, \rho_t\} \left( Q^{ret}(x_t, a_t) - \tilde{Q}_{\theta_v}(x_t, a_t) \right) + V_{\theta_v}(x_t)    (14)

The above target is also derived via the truncation and bias correction trick; for more details, see Appendix D.

Finally, when estimating Q^ret in continuous domains, we implement a slightly different formulation of the truncated importance weights, \bar{\rho}_t = \min\{1, (\rho_t)^{1/d}\}, where d is the dimensionality of the action space. Although not essential, we have found this formulation to lead to faster learning.

{"section_index": "7", "section_name": "5.2 TRUST REGION UPDATING"}

To adopt the trust region updating scheme (Section 3.3) in the continuous control domain, one simply has to choose a distribution f and a gradient specification \hat{g}^{acer}_t suitable for continuous action spaces.

To derive \hat{g}^{acer}_t in continuous action spaces, consider the ACER policy gradient for the stochastic dueling network, but with respect to φ:

\hat{g}^{acer}_t = \bar{\rho}_t \nabla_{\phi_\theta(x_t)} \log f(a_t|\phi_\theta(x_t)) \left[ Q^{opc}(x_t,a_t) - V_{\theta_v}(x_t) \right] + \mathbb{E}_{a\sim\pi}\left( \left[\tfrac{\rho_t(a)-c}{\rho_t(a)}\right]_+ \left( \tilde{Q}_{\theta_v}(x_t,a) - V_{\theta_v}(x_t) \right) \nabla_{\phi_\theta(x_t)} \log f(a|\phi_\theta(x_t)) \right)    (16)

In the above definition, we are using Q^opc instead of Q^ret. Here, Q^opc(x_t, a_t) is the same as Retrace, with the exception that the truncated importance ratio is replaced with 1 (Harutyunyan et al., 2016). Please refer to Appendix B for an expanded discussion on this design choice. Given an observation x_t, we can sample a'_t ∼ π_θ(·|x_t) to obtain the following Monte Carlo approximation:

\hat{g}^{acer}_t = \bar{\rho}_t \nabla_{\phi_\theta(x_t)} \log f(a_t|\phi_\theta(x_t)) \left[ Q^{opc}(x_t,a_t) - V_{\theta_v}(x_t) \right] + \left[\tfrac{\rho_t(a'_t)-c}{\rho_t(a'_t)}\right]_+ \left( \tilde{Q}_{\theta_v}(x_t,a'_t) - V_{\theta_v}(x_t) \right) \nabla_{\phi_\theta(x_t)} \log f(a'_t|\phi_\theta(x_t))    (17)

Given f and \hat{g}^{acer}_t, we apply the same steps as detailed in Section 3.3 to complete the update.

The precise pseudo-code of the ACER algorithm for continuous action spaces is presented in Appendix A.

Figure 3: [Top] Screen shots of the continuous control tasks. [Bottom] Performance of different methods on these tasks. ACER outperforms all other methods and shows clear gains for the higher dimensionality tasks (humanoid, cheetah, walker and fish). The proposed trust region method by itself improves the two baselines (truncated importance sampling and A3C) significantly. [Panels: Walker2d (9-DoF/6-dim. actions), Fish (13-DoF/5-dim. actions), Cartpole (2-DoF/1-dim. actions), Humanoid (27-DoF/21-dim. actions), Reacher3 (3-DoF/3-dim. actions), Cheetah (9-DoF/6-dim. actions); x-axis: million steps; legend: TRUST-TIS, TRUST-A3C, TIS, ACER, A3C.]

We evaluate our algorithms on 6 continuous control tasks, all of which are simulated using the MuJoCo physics engine (Todorov et al., 2012). For descriptions of the tasks, please refer to Appendix E.1. Briefly, the tasks with action dimensionality in brackets are: cartpole (1D), reacher (3D), cheetah (6D), fish (5D), walker (6D) and humanoid (21D). These tasks are illustrated in Figure 3.

All the aforementioned setups share the same network architecture that computes the policy and state values. We maintain an additional small network that computes the stochastic A values in the case of ACER. We use n = 5 (using the notation in Equation (13)) in all SDNs.
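As an illustration of Equation (13), here is a small NumPy sketch of the SDN estimate with n = 5, as used in the experiments; the quadratic V and A functions and the Gaussian action sampler are toy stand-ins for the learned networks, not part of the original model.

import numpy as np

rng = np.random.default_rng(1)

def sdn_q_tilde(V, A, x_t, a_t, sample_action, n=5):
    # Equation (13): Q~(x_t, a_t) = V(x_t) + A(x_t, a_t)
    #                               - (1/n) * sum_i A(x_t, u_i), u_i ~ pi(.|x_t)
    u = [sample_action(x_t) for _ in range(n)]
    return V(x_t) + A(x_t, a_t) - np.mean([A(x_t, u_i) for u_i in u])

# Hypothetical value/advantage heads and a Gaussian policy with std 0.3,
# mirroring the fixed-covariance policies used in the continuous tasks.
V = lambda x: -0.5 * float(x @ x)
A = lambda x, a: -float((a - 0.1 * x.sum()) ** 2)
sample_action = lambda x: rng.normal(loc=0.1 * x.sum(), scale=0.3)

x_t, a_t = np.ones(3), 0.2
print(sdn_q_tilde(V, A, x_t, a_t, sample_action, n=5))

Subtracting the empirical mean of the advantages keeps the two heads consistent in expectation: averaging Q~ over actions drawn from the policy recovers V.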
Instead of mixing on-policy and replay learning as done in the Atari domain, ACER for continuous actions is entirely off-policy, with experiences generated from the simulator replayed 4 times on average. When using replay, we add to each thread a replay memory that is 5,000 frames in size and perform updates every 50 steps (k = 50 in the notation of Section 2). The rate of the soft updating (α as in Section 3.3) is set to 0.995 in all setups involving trust region updating. The truncation threshold c is set to 5 for ACER.

To benchmark ACER for continuous control, we compare it to its on-policy counterpart both with and without trust region updating. We refer to these two baselines as A3C and Trust-A3C. Additionally, we also compare to a baseline with replay where we truncate the importance weights over trajectories as in (Wawrzynski, 2009). For a detailed description of this baseline, please refer to Appendix E. Again, we run this baseline both with and without trust region updating, and refer to these choices as Trust-TIS and TIS respectively. Last but not least, we refer to our proposed approach with SDN and trust region updating as simply ACER. All five setups are implemented in the asynchronous A3C framework.

We use diagonal Gaussian policies with fixed diagonal covariances where the diagonal standard deviation is set to 0.3. For all setups, we sample the learning rates log-uniformly in the range [10^{-4}, 10^{-3.3}]. For setups involving trust region updating, we also sample δ uniformly in the range [0.1, 2]. With all setups, we use 30 sampled hyper-parameter settings.

The empirical results for all continuous control tasks are shown in Figure 3, where we show the mean and standard deviation of the best 5 out of 30 hyper-parameter settings over which we searched³. For sensitivity analyses with respect to the hyper-parameters, please refer to Figures 5 and 6 in the Appendix.

³ For videos of the policies learned with ACER, please see: https://www.youtube.com/watch?v=NmbeQYoVv5g&list=PLkmHIkhlFjiTlvwxEnsJMs3v7seR5HSP

In continuous control, ACER outperforms the A3C and truncated importance sampling baselines by a very significant margin.

Here, we also find that the proposed trust region optimization method can result in huge improvements over the baselines. The high-dimensional continuous action policies are much harder to optimize than the small discrete action policies in Atari, and hence we observe much higher gains for trust region optimization in the continuous control domains. In spite of the improvements brought in by trust region optimization, ACER still outperforms all other methods, especially in higher dimensions.

{"section_index": "8", "section_name": "6.1 ABLATIONS"}

To further tease apart the contributions of the different components of ACER, we conduct an ablation analysis where we individually remove Retrace/Q(λ) off-policy correction, SDNs, trust region, and truncation with bias correction from the algorithm. As shown in Figure 4, Retrace and off-policy correction, SDNs, and trust region are critical: removing any one of them leads to a clear deterioration of the performance. Truncation with bias correction did not alter the results in the Fish and Walker2d tasks. However, in Humanoid, where the dimensionality of the action space is much higher, including truncation and bias correction brings a significant boost which makes the originally kneeling humanoid stand. Presumably, the high dimensionality of the action space increases the variance of the importance weights, which makes truncation with bias correction important. For more details on the experimental setup please see Appendix E.4.

{"section_index": "9", "section_name": "THEORETICAL ANALYSIS"}

Retrace is a very recent development in reinforcement learning. In fact, this work is the first to consider Retrace in the policy gradients setting. For this reason, and given the core role that Retrace plays in ACER, it is valuable to shed more light on this technique. In this section, we will prove that Retrace can be interpreted as an application of the importance weight truncation and bias correction trick advanced in this paper.

Consider the following equation:

Q^\pi(x_t, a_t) = \mathbb{E}_{x_{t+1} a_{t+1}} \left[ r_t + \gamma \rho_{t+1} Q^\pi(x_{t+1}, a_{t+1}) \right]    (18)

Applying the truncation and bias correction trick to the above yields:

Q^\pi(x_t, a_t) = \mathbb{E}_{x_{t+1} a_{t+1}} \left[ r_t + \gamma \bar{\rho}_{t+1} Q^\pi(x_{t+1}, a_{t+1}) + \gamma \mathbb{E}_{a\sim\pi}\left( \left[\tfrac{\rho_{t+1}(a)-c}{\rho_{t+1}(a)}\right]_+ Q^\pi(x_{t+1}, a) \right) \right]    (19)

Recursively expanding Q^π as in (19), we can represent Q^π(x, a) as:

Q^\pi(x, a) = \mathbb{E}_\mu \left[ \sum_{t\ge 0} \gamma^t \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \left( r_t + \gamma \mathbb{E}_{b\sim\pi}\left( \left[\tfrac{\rho_{t+1}(b)-c}{\rho_{t+1}(b)}\right]_+ Q^\pi(x_{t+1}, b) \right) \right) \right]    (20)

The expectation E_μ is taken over trajectories starting from x with actions generated with respect to μ.
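Before proceeding, the truncation and bias correction trick used in (19)-(20) can be verified numerically on a toy discrete action space; the snippet below is only a sanity check of the identity E_π[g] = E_μ[min(ρ, c) g] + E_π[((ρ − c)/ρ)_+ g], with randomly drawn policies standing in for π and μ.

import numpy as np

rng = np.random.default_rng(2)
n_actions, c = 6, 2.0
pi = rng.dirichlet(np.ones(n_actions))   # target policy
mu = rng.dirichlet(np.ones(n_actions))   # behaviour policy
g = rng.normal(size=n_actions)           # any function of the action
rho = pi / mu                            # importance ratios

lhs = np.sum(pi * g)                                          # E_pi[g]
truncated = np.sum(mu * np.minimum(rho, c) * g)               # E_mu[min(rho,c) g]
correction = np.sum(pi * np.maximum(0.0, (rho - c) / rho) * g)
assert np.isclose(lhs, truncated + correction)

The identity holds exactly because min(π, cμ) + max(0, π − cμ) = π for every action.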
When Q^π is not available, we can replace it with our current estimate Q to get a return-based estimate of Q^π. This operation also defines an operator:

\mathcal{B} Q(x, a) = \mathbb{E}_\mu \left[ \sum_{t\ge 0} \gamma^t \left( \prod_{i=1}^{t} \bar{\rho}_i \right) \left( r_t + \gamma \mathbb{E}_{b\sim\pi}\left( \left[\tfrac{\rho_{t+1}(b)-c}{\rho_{t+1}(b)}\right]_+ Q(x_{t+1}, b) \right) \right) \right]    (21)

In the following proposition, we show that B is a contraction operator with a unique fixed point Q^π, and that it is equivalent to the Retrace operator.

Proposition 1. The operator B is a contraction operator such that ||BQ − Q^π||_∞ ≤ γ ||Q − Q^π||_∞, and B is equivalent to Retrace.

The above proposition not only shows an alternative way of arriving at the same operator, but also provides a different proof of contraction for Retrace. Please refer to Appendix C for the regularity conditions and the proof of the above proposition.

Finally, B, and therefore Retrace, generalizes both the Bellman operator T^π and importance sampling. Specifically, when c = 0, B = T^π and, when c = ∞, B recovers importance sampling; see Appendix C.

Figure 4: Ablation analysis evaluating the effect of different components of ACER. Each row compares ACER with and without one component. The columns represent three control tasks. Red lines, in all plots, represent ACER whereas green lines ACER with missing components. This study indicates that all 4 components studied improve performance, where 3 are critical to success. Note that the ACER curve is of course the same in all rows. [Columns: Fish (13-DoF/5-dim. actions), Walker2d (9-DoF/6-dim. actions), Humanoid (27-DoF/21-dim. actions); x-axis: million steps.]

We have introduced a stable off-policy actor critic that scales to both continuous and discrete action spaces. This approach integrates several recent advances in RL in a principled manner. In addition, it integrates three innovations advanced in this paper: truncated importance sampling with bias correction, stochastic dueling networks and an efficient trust region policy optimization method.

We showed that the method not only matches the performance of the best known methods on Atari, but that it also outperforms popular techniques on several continuous control problems.

The efficient trust region optimization method advanced in this paper performs remarkably well in continuous domains. It could prove very useful in other deep learning domains, where it is hard to stabilize the training process.

{"section_index": "10", "section_name": "ACKNOWLEDGMENTS"}

We are very thankful to Marc Bellemare, Jascha Sohl-Dickstein, and Sebastien Racaniere for proofreading and valuable suggestions.

T. Degris, M. White, and R. S. Sutton. Off-policy actor-critic. In ICML, pp. 457-464, 2012.

A. Harutyunyan, M. G. Bellemare, T. Stepleton, and R. Munos. Q(λ) with off-policy corrections. arXiv preprint arXiv:1602.04951, 2016.

N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In NIPS, 2015.

S. Levine and V. Koltun. Guided policy search. In ICML, 2013.

S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.

T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971, 2015.

L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 8(3):293-321, 1992.

N. Meuleau, L. Peshkin, L. P. Kaelbling, and K. Kim. Off-policy policy search. Technical report, MIT AI Lab.

V. Mnih, A. Puigdomenech Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning.
arXiv:1602.01783, 2016.

J. Oh, V. Chockalingam, S. P. Singh, and H. Lee. Control of memory, active perception, and action in Minecraft. In ICML, 2016.

D. Precup, R. S. Sutton, and S. Singh. Eligibility traces for off-policy policy evaluation. In ICML, pp. 759-766, 2000.

T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In ICLR, 2016.

J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015a.

J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015b.

D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pp. 1057-1063, 2000.

E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In International Conference on Intelligent Robots and Systems, pp. 5026-5033, 2012.

Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016.

P. Wawrzynski. Real-time reinforcement learning by sequential actor-critics and experience replay. Neural Networks, 22(10):1484-1497, 2009.

{"section_index": "11", "section_name": "ACER PSEUDO-CODE FOR DISCRETE ACTIONS"}

Algorithm 1 ACER for discrete actions (master algorithm)

Assume ratio of replay r.
repeat
    Call ACER on-policy, Algorithm 2.
    n ← Poisson(r)
    for i ∈ {1, ..., n} do
        Call ACER off-policy, Algorithm 2.
    end for
until Max iteration or time reached.

Algorithm 2 ACER for discrete actions

Reset gradients dθ ← 0 and dθ_v ← 0.
Initialize parameters θ' ← θ and θ'_v ← θ_v.
if not On-Policy then
    Sample the trajectory {x_0, a_0, r_0, μ(·|x_0), ...
, x_k, a_k, r_k, μ(·|x_k)} from the replay memory.
else
    Get state x_0.
end if
for i ∈ {0, ..., k} do
    Compute f(·|φ_θ'(x_i)), Q_θ'_v(x_i, ·) and f(·|φ_θ_a(x_i)).
    if On-Policy then
        Perform a_i according to f(·|φ_θ'(x_i)).
        Receive reward r_i and new state x_{i+1}.
        μ(·|x_i) ← f(·|φ_θ'(x_i))
    end if
    ρ̄_i ← min{1, f(a_i|φ_θ'(x_i)) / μ(a_i|x_i)}
end for
Q^ret ← 0 for terminal x_k, otherwise Q^ret ← Σ_a Q_θ'_v(x_k, a) f(a|φ_θ'(x_k))
for i ∈ {k−1, ..., 0} do
    Q^ret ← r_i + γ Q^ret
    V_i ← Σ_a Q_θ'_v(x_i, a) f(a|φ_θ'(x_i))
    // Computing quantities needed for trust region updating:
    g ← min{c, ρ_i(a_i)} ∇_{φ_θ'(x_i)} log f(a_i|φ_θ'(x_i)) (Q^ret − V_i)
        + Σ_a [(ρ_i(a) − c)/ρ_i(a)]_+ f(a|φ_θ'(x_i)) ∇_{φ_θ'(x_i)} log f(a|φ_θ'(x_i)) (Q_θ'_v(x_i, a) − V_i)
    k ← ∇_{φ_θ'(x_i)} D_KL[ f(·|φ_θ_a(x_i)) || f(·|φ_θ'(x_i)) ]
    Accumulate gradients wrt θ': dθ ← dθ + (∂φ_θ'(x_i)/∂θ') (g − max{0, (kᵀg − δ)/||k||²₂} k)
    Accumulate gradients wrt θ'_v: dθ_v ← dθ_v + ∇_{θ'_v} (Q^ret − Q_θ'_v(x_i, a_i))²
    Update Retrace target: Q^ret ← ρ̄_i (Q^ret − Q_θ'_v(x_i, a_i)) + V_i
end for
Perform asynchronous update of θ using dθ and of θ_v using dθ_v.
Updating the average policy network: θ_a ← α θ_a + (1 − α) θ

Algorithm 3 ACER for Continuous Actions

Reset gradients dθ ← 0 and dθ_v ← 0.
Initialize parameters θ' ← θ and θ'_v ← θ_v.
Sample the trajectory {x_0, a_0, r_0, μ(·|x_0), ..., x_k, a_k, r_k, μ(·|x_k)} from the replay memory.
for i ∈ {0, ..., k} do
    Compute f(·|φ_θ'(x_i)), V_θ'_v(x_i), Q̃_θ'_v(x_i, a_i), and f(·|φ_θ_a(x_i)).
    Sample a'_i ∼ f(·|φ_θ'(x_i)).
    ρ_i ← f(a_i|φ_θ'(x_i)) / μ(a_i|x_i) and ρ'_i ← f(a'_i|φ_θ'(x_i)) / μ(a'_i|x_i)
    c_i ← min{1, (ρ_i)^{1/d}}
end for
Q^ret ← 0 for terminal x_k, otherwise Q^ret ← V_θ'_v(x_k)
Q^opc ← Q^ret
for i ∈ {k−1, ..., 0} do
    Q^ret ← r_i + γ Q^ret
    Q^opc ← r_i + γ Q^opc
    // Computing quantities needed for trust region updating:
    g ← min{c, ρ_i} ∇_{φ_θ'(x_i)} log f(a_i|φ_θ'(x_i)) (Q^opc(x_i, a_i) − V_θ'_v(x_i))
        + [(ρ'_i − c)/ρ'_i]_+ (Q̃_θ'_v(x_i, a'_i) − V_θ'_v(x_i)) ∇_{φ_θ'(x_i)} log f(a'_i|φ_θ'(x_i))
    k ← ∇_{φ_θ'(x_i)} D_KL[ f(·|φ_θ_a(x_i)) || f(·|φ_θ'(x_i)) ]
    Accumulate gradients wrt θ: dθ ← dθ + (∂φ_θ'(x_i)/∂θ') (g − max{0, (kᵀg − δ)/||k||²₂} k)
    Accumulate gradients wrt θ'_v: dθ_v ← dθ_v + (Q^ret − Q̃_θ'_v(x_i, a_i)) ∇_{θ'_v} Q̃_θ'_v(x_i, a_i)
    dθ_v ← dθ_v + min{1, ρ_i} (Q^ret(x_i, a_i) − Q̃_θ'_v(x_i, a_i)) ∇_{θ'_v} V_θ'_v(x_i)
    Update Retrace target: Q^ret ← c_i (Q^ret − Q̃_θ'_v(x_i, a_i)) + V_θ'_v(x_i)
    Update Q^opc target: Q^opc ← (Q^opc − Q̃_θ'_v(x_i, a_i)) + V_θ'_v(x_i)
end for
Perform asynchronous update of θ using dθ and of θ_v using dθ_v.
Updating the average policy network: θ_a ← α θ_a + (1 − α) θ
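The inner loops of Algorithms 2-3 can be summarized by the following NumPy sketch of the backward recursions for the Q^ret and Q^opc targets; it is a simplified, single-trajectory illustration with toy critic values, not the full asynchronous implementation.

import numpy as np

def retrace_targets(rewards, q, v, c_bar, gamma=0.99):
    # Backward recursion as in Algorithms 2-3:
    #   Q^ret <- r_i + gamma * Q^ret   (target used at step i)
    #   Q^ret <- c_i * (Q^ret - Q(x_i, a_i)) + V(x_i)   (truncated trace)
    # Q^opc is identical except that the trace coefficient is 1.
    k = len(rewards)
    q_ret = v[k]          # bootstrap from V(x_k); use 0 for a terminal state
    q_opc = v[k]
    ret, opc = np.zeros(k), np.zeros(k)
    for i in reversed(range(k)):
        q_ret = rewards[i] + gamma * q_ret
        q_opc = rewards[i] + gamma * q_opc
        ret[i], opc[i] = q_ret, q_opc
        q_ret = c_bar[i] * (q_ret - q[i]) + v[i]
        q_opc = (q_opc - q[i]) + v[i]
    return ret, opc

# Toy trajectory of length 4 with hypothetical critic outputs.
rng = np.random.default_rng(3)
r = rng.normal(size=4)
q_vals = rng.normal(size=4)                      # Q(x_i, a_i) estimates
v_vals = rng.normal(size=5)                      # V(x_i), including V(x_k)
c_bar = np.minimum(1.0, rng.lognormal(size=4))   # truncated importance ratios
print(retrace_targets(r, q_vals, v_vals, c_bar))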
First we show that B is a contraction operator\n117 Pt+1 E ) E (Q(xt+1,b)- Q\"(xt+1,b) b~ Pt+1(b t>0 i=1 E I pi Ot+1 E Q(xt+1,b Q\"(xt+1,b) b~TT Pt+1(b) t>0 i=1 E IPi < ) Pt+1) sup|Q(xt+1,b)- Q*(xt+1 t>0 i=1\nPt+1(6 E [Q(xt+1,b)] = E [Pt+1(b)Q(xt+1,b)]+,E b~TT b~T Pt+1(b)\n(x,a) - Q*(x,a) Pt+1(b) E E Q(xt+1,b)-Q(xt+1,b) 6~TT Pt+1(6) i=1 Pt+1(b) E 11 < E Q(xt+1,b)- Q\"(xt+1,b)| Q b~TT Pt+1(b) t>0 i=1 < E Pt+1) sup|Q(xt+1,b)- Q*(xt+ (22 t>0 here Pt+ 0t+1(b) E E [t+1(b)]. The last inequality in the above equation i - b~7I Pt+1(b) e to Holder's inequality HPi 22 < sup|Q(x,b) - \"(x,b)E, x,b t>0 i=1 HPi HPi sup|Q(x,b) -Q*(x,b)]E, x6 t>0 i=1 t>0 i=1 t+1 HPi sup|Q(x,b) Q\"(x,b)]E IIP x,b t>0 \\i=1 t>0 i=1 sup|Q(x,b) - Q\"(x,b)](yC-(C1)) L nere C = Since C t=0 1, we have that yC- Therefore, we have shown that B is a contraction operator.\nIPi sup|Q(x,b) - Q*(x,b)]E x,b t>0 i=1 I Pi IPi sup|Q(x,b) Q*(x,b)]Eu C.C t>0 i=1 t>0 i=1 t+1 IPi I Pi sup|Q(x,b) (x,b)E, i=1 t>0 i=1 sup|Q(x,b) Q*(x,b)](yC-(C -1))\nNow we show that B is the same as Retrace. By apply the trunction and bias correction trick, we have\nE IPi +y.E [Q(xt+1,b)] Pt+1(b) t>0 y E [Pt+1(b)Q(xt+1,b)] yE b~TT Pt+1 E y.E [Q(xt+1,b)]-y.E [Pt+1(b)Q(xt+ t>0 E rt+y E [Q(xt+1,b)]-YPt+1Q(xt+1,At+ 7 t>0 i=1 E t+y.E [Q(xt+1,b)]- Q(xt; Q(x,a) = RQ(x,a t>0 i=\nIn the remainder of this appendix, we show that B generalizes both the Bellman operator and importance sampling. First, we reproduce the definition of B:.\nPt+1(b) - c HPi BQ(x,a) =E t- Xt+1, b~TT Pt+1(b) t>0 i=1"}, {"section_index": "12", "section_name": "D DERIVATION OF Vtarg", "section_text": "By using the truncation and bias correction trick, we can derive the following\nTaxt Ot(a =E min + E a~u a~T Pt(a\nytarget() Xt) := min + E nre a~\nQe,(xt,a) =E min E a~ a~ 0t(a\ntarget := min\nCartpole swingup This is an instance of the classic cart-pole swing-up task. It consists of a pole. attached to a cart running on a finite track. The agent is required to balance the pole near the center of the track by applying a force to the cart only. An episode starts with the pole at a random angle. and zero velocity. A reward zero is given except when the pole is approximately upright (within. 5 deg) and the track approximately in the center of the track (0.05) for a track length of 2.4. The observations include position and velocity of the cart, angle and angular velocity of the pole. a sine/cosine of the angle, the position of the tip of the pole, and Cartesian velocities of the pole. The dimension of the action space is 1.\nBQ(x,a) = E rt+y E (Q(xt+1,)) ~\n= E rt+y.E (O Q(xt+1,b) E 11 (x,a ) Pi It b~ i=1 t>0 i=1\nWe, however, cannot use the above equation as a target as we do not have access to Q. To derive a target, we can take a Monte Carlo approximation of the first expectation in the RHS of the above equation and replace the first occurrence of Q\" with Qret and the second with our current neural net. approximation Qe.. (xt, :):\nReacher3 The agent needs to control a planar 3-link robotic arm in order to minimize the distance between the end effector of the arm and a target. Both arm and target position are chosen randomly at the beginning of each episode. The reward is zero except when the tip of the arm is within O.05 of the target, where it is one. The 8-dimensional observation consists of the angles and angular velocity of all joints as well as the displacement between target and the end effector of the arm. 
Cheetah The Half-Cheetah (Wawrzynski, 2009; Heess et al., 2015) is a planar locomotion task where the agent is required to control a 9-DoF cheetah-like body (in the vertical plane) to move in the direction of the x-axis as quickly as possible. The reward is given by the velocity along the x-axis minus a control cost: r = v_x − 0.1||a||². The observation vector consists of the z-position of the torso and its x, z velocities, as well as the joint angles and angular velocities. The action dimension is 6.

Fish The goal of this task is to control a 13-DoF fish-like body to swim to a random target in 3D space. The reward is given by the distance between the head of the fish and the target, a small penalty for the body not being upright, and a control cost. At the beginning of an episode the fish is initialized facing in a random direction relative to the target. The 24-dimensional observation is given by the displacement between the fish and the target projected onto the torso coordinate frame, the joint angles and velocities, the cosine of the angle between the z-axis of the torso and the world z-axis, and the velocities of the torso in the torso coordinate frame. The 5-dimensional actions control the position of the side fins and the tail.

Walker The 9-DoF planar walker is inspired by (Schulman et al., 2015a) and is required to move forward along the x-axis as quickly as possible without falling. The reward consists of the x-velocity of the torso, a quadratic control cost, and terms that penalize deviations of the torso from the preferred height and orientation (i.e. terms that encourage the walker to stay standing and upright). The 24-dimensional observation includes the torso height, velocities of all DoFs, as well as sines and cosines of all body orientations in the x-z plane. The 6-dimensional action controls the torques applied at the joints. Episodes are terminated early with a negative reward when the torso exceeds upper and lower limits on its height and orientation.

Humanoid The humanoid is a 27 degrees-of-freedom body with 21 actuators (21 action dimensions). It is initialized lying on the ground in a random configuration and the task requires it to achieve a standing position. The reward function penalizes deviations from the height of the head when standing, and includes additional terms that encourage upright standing, as well as a quadratic action penalty. The 94-dimensional observation contains information about joint angles and velocities and several derived features reflecting the body's pose.

The baseline TIS follows the following update equations:

updates to the policy: \min\left\{ c, \prod_{i=0}^{k-1} \rho_{t+i} \right\} \left( \sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V_{\theta_v}(x_{t+k}) - V_{\theta_v}(x_t) \right) \nabla_\theta \log \pi_\theta(a_t|x_t)

updates to the value: \min\left\{ c, \prod_{i=0}^{k-1} \rho_{t+i} \right\} \left( \sum_{i=0}^{k-1} \gamma^i r_{t+i} + \gamma^k V_{\theta_v}(x_{t+k}) - V_{\theta_v}(x_t) \right) \nabla_{\theta_v} V_{\theta_v}(x_t)

The baseline Trust-TIS is appropriately modified according to the trust region update described in Section 3.3.
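For concreteness, here is a minimal Python sketch of the common scalar that multiplies both TIS gradients above; the truncation constant c and the exact handling of the product of ratios are assumptions based on the equations as reconstructed here.

import numpy as np

def tis_scalar(rewards, rho, v_t, v_tk, c=5.0, gamma=0.99):
    # min{prod_i rho_{t+i}, c} * (sum_i gamma^i r_{t+i} + gamma^k V(x_{t+k}) - V(x_t))
    k = len(rewards)
    w = min(float(np.prod(rho)), c)  # truncated trajectory importance weight
    k_step_return = sum(gamma**i * rewards[i] for i in range(k)) + gamma**k * v_tk
    return w * (k_step_return - v_t)

rng = np.random.default_rng(4)
print(tis_scalar(rng.normal(size=50), rng.lognormal(sigma=0.1, size=50),
                 v_t=0.3, v_tk=0.1))

Truncating the product of per-step ratios at c bounds the update, at the cost of the bias that ACER's per-step truncation with bias correction is designed to avoid.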
{"section_index": "13", "section_name": "E.3 SENSITIVITY ANALYSIS"}

In this section, we assess the sensitivity of ACER to hyper-parameters. In Figures 5 and 6, we show, for each game, the final performance of our ACER agent versus the choice of learning rates and the trust region constraint δ, respectively.

Note, as we are doing random hyper-parameter search, each learning rate is associated with a random δ and vice versa. It is therefore difficult to tease out the effect of either hyper-parameter independently.

Figure 5: Log learning rate vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 log learning rates considered. Note that each learning rate is associated with a different δ as a consequence of random search over hyper-parameters. [Panels: Cheetah, Fish, Walker2D, Cartpole, Reacher3, Humanoid; x-axis: log learning rate in [−4.0, −3.3].]

Figure 6: Trust region constraint (δ) vs. cumulative rewards in all the continuous control tasks for ACER. The plots show the final performance after training for all 30 trust region constraints (δ) searched over. Note that each δ is associated with a different learning rate as a consequence of random search over hyper-parameters. [Panels: Cheetah, Fish, Walker2D, Cartpole, Reacher3, Humanoid; x-axis: δ in [0.1, 2].]

We observe, however, that ACER is not very sensitive to the hyper-parameters overall. In addition, smaller δ's do not seem to adversely affect the final performance while larger δ's do in domains of higher action dimensionality. Similarly, smaller learning rates perform well while bigger learning rates tend to hurt final performance in domains of higher action dimensionality.

E.4 ABLATION ANALYSIS

For the ablation analysis, we use the same experimental setup as in the continuous control experiments while removing one component at a time.

To evaluate the effectiveness of Retrace/Q(λ) with off-policy correction, we replace both with importance sampling based estimates (following Degris et al. (2012)), which can be expressed recursively: R_t = r_t + γ ρ_{t+1} R_{t+1}.

To evaluate the effects of the truncation and bias correction trick, we change our c parameter (see Equation (16)) to ∞ so as to use pure importance sampling.

To evaluate the Stochastic Dueling Networks, we replace it with two separate networks: one computing the state values and the other Q values. Given Q^ret(x_t, a_t), the naive way of estimating the state values is to use the following update rule:

\left( Q^{ret}(x_t, a_t) - V_{\theta_v}(x_t) \right) \nabla_{\theta_v} V_{\theta_v}(x_t)

We instead use

\min\{1, \rho_t\} \left( Q^{ret}(x_t, a_t) - V_{\theta_v}(x_t) \right) \nabla_{\theta_v} V_{\theta_v}(x_t)

which has markedly lower variance. We update our Q estimates as before.
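The difference between the naive and the importance-weight-clipped value updates in this ablation can be illustrated with a few lines of Python; the numbers are arbitrary toy values.

import numpy as np

def value_gradient_scale(q_ret, v, rho=None):
    # Scalar multiplying grad V(x_t): the naive rule uses (Q^ret - V); the
    # variance-reduced rule additionally clips the importance weight,
    # min{1, rho_t} * (Q^ret - V), down-weighting off-policy samples whose
    # actions are unlikely under the current policy.
    scale = q_ret - v
    if rho is not None:
        scale *= min(1.0, rho)
    return scale

print(value_gradient_scale(q_ret=2.0, v=0.5))            # naive rule: 1.5
print(value_gradient_scale(q_ret=2.0, v=0.5, rho=12.0))  # clipped at 1: 1.5
print(value_gradient_scale(q_ret=2.0, v=0.5, rho=0.2))   # down-weighted: 0.3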
{"section_index": "0", "section_name": "COMPACT EMBEDDING OF BINARY-CODED INPUTS AND OUTPUTS USING BLOOM FILTERS"}

Joan Serra & Alexandros Karatzoglou

firstname.lastname@telefonica.com

{"section_index": "1", "section_name": "ABSTRACT"}

The size of neural network models that deal with sparse inputs and outputs is often dominated by the dimensionality of those inputs and outputs. Large models with high-dimensional inputs and outputs are difficult to train due to the limited memory of graphical processing units, and difficult to deploy on mobile devices with limited hardware. To address these difficulties, we propose Bloom embeddings, a compression technique that can be applied to the input and output of neural network models dealing with sparse high-dimensional binary-coded instances. Bloom embeddings are computationally efficient, and do not seriously compromise the accuracy of the model up to 1/5 compression ratios. In some cases, they even improve over the original accuracy, with relative increases up to 12%. We evaluate Bloom embeddings on 7 data sets and compare them against 4 alternative methods, obtaining favorable results. We also discuss a number of further advantages of Bloom embeddings, such as 'on-the-fly' constant-time operation, zero or marginal space requirements, training time speedups, or the fact that they do not require any change to the core model architecture or training configuration.

{"section_index": "2", "section_name": "1 INTRODUCTION"}

The size of neural network models that deal with sparse inputs and outputs is often dominated by the dimensionality of such inputs and outputs. This is the case, for instance, with recommender systems, where high-dimensional sparse vectors, typically in the order from tens of thousands to hundreds of millions, constitute both the input and the output of the model (e.g., Wu et al., 2016; Hidasi et al., 2016; Cheng et al., 2016; Strub et al., 2016). This results in large models that present a number of difficulties, both at training and prediction stages. Apart from training and prediction times, an obvious bottleneck of such models is space: their size (and even performance) is hampered by the physical memory of graphical processing units (GPUs), and they are difficult to deploy on mobile devices with limited hardware (cf. Han et al., 2016).

One option to reduce the size of sparse inputs and outputs is to embed them into a lower-dimensional space. Embedding sparse high-dimensional inputs is commonplace (e.g., Bengio et al., 2000; Turian et al., 2010; Mikolov et al., 2013). However, embedding sparse high-dimensional outputs, or even inputs and outputs at the same time, is much less common (cf. Weston et al., 2002; Bengio et al., 2010; Akata et al., 2015). Importantly, typical embeddings still require the storage and processing of large matrices with the same dimensionality as the input/output (like the original neural network model would do). Thus, the gains in terms of space are limited. As mentioned, the size of such models is dominated by the input/output dimensionality, with input and output layers representing about 99.94% of the total amount of weights of the model¹.

In general, an ideal embedding procedure for sparse high-dimensional inputs/outputs should produce compact embeddings, of much lower dimensionality than the original input/output. In addition, it

¹ An example can be found in the neural network model of Hidasi et al. (2016), which uses a gated recurrent
unit to perform session-based recommendations with input/output layers of dimensionality 330,000 and internal layers of dimensionality 100.

should consume little space, both in terms of storage and memory space. Smaller sizes imply fewer parameters, thus training the model on embedded vectors would also be faster than with the original instances. The embedding of the output should also lead to a formulation for which the appropriate loss should be clear. Embeddings should not compromise the accuracy of the model nor the required number of training epochs to obtain that accuracy. In addition, no changes to the original core architecture of the model should be required to achieve good performance (obviously, input/output dimensions must change). The embedding should also be fast; if not to be done directly 'on-the-fly', at least fast enough so that speed improvements made during training are not lost in the embedding operation. Last, but not least, output embeddings should be easily reversible, so that the output of the model could be mapped to the original items at prediction time.

In this paper, we propose an unsupervised embedding technique that fulfills all the previous requirements. It can be applied to both input and output layers of neural network models that deal with binary (one-hot encoded) inputs and/or outputs. In addition, it produces lower-dimensionality binary embeddings that can be easily mapped to the original instances. Provided that the embedding dimension is not too low, the accuracy is not compromised. Furthermore, in some cases, we show that training with embedded vectors can even increase prediction accuracy. The embedding requires no changes to the core network structure nor to the model configuration, and works with a softmax output, the most common output activation for binary-coded instances. As it is unsupervised, the embedding does not require any preliminary training. Moreover, it is a constant-time operation that can be either performed on-the-fly, requiring no disk or memory space, or can be cached in memory, occupying orders of magnitude less space than a typical embedding matrix. Lower dimensionality of input/output vectors results in faster training, and the mapping from the embedded space to the original one does not add an overwhelming amount of time to the prediction stage. The proposed embedding is based on the idea of Bloom filters (Bloom, 1970), and therefore it inherits part of the theory developed around that idea (Blustein & El-Maazawi, 2002; Dillinger & Manolios, 2004; Mitzenmacher & Upfal, 2005; Bonomi et al., 2006).

A common approach to embed high-dimensional inputs is the hashing trick (Langford et al., 2007; Shi et al., 2009; Weinberger et al., 2009). However, the hashing trick approach does not deal with outputs, as it offers no explicit way to map back from the (dense) embedding space to the original space. A more elementary version of the hashing trick (Ganchev & Dredze, 2008) can be used at the outputs by considering it as a special case of the Bloom-based methodology proposed here. A framework providing both encoding and decoding strategies is the error-correcting output codes (ECOC) framework (Dietterich & Bakiri, 1995). Originally designed for single-class outputs, it can be also applied to class sets (Armano et al., 2012). The compressed sensing approach of Hsu et al. (2009) builds on top of ECOC to reduce multi-label regression to binary regression problems.
Similarly, Cisse et al. (2013) use Bloom filters to reduce multi-label classification to binary classification problems and improve the robustness of individual binary classifiers' errors. Another example of a framework offering recovery capabilities is kernel dependency estimation (Weston et al., 2002).

Data-dependent embeddings that require some form of learning also exist. A typical approach is to rely on variants of latent semantic analysis or singular value decomposition (SVD), exploiting similarities or correlations that may be present in the data. Again, the issue of mapping from the embedding space to the original space is left unresolved. Nonetheless, recently, Chollet (2016) has successfully applied a K-nearest neighbors (KNN) algorithm to perform such a mapping and to derive a ranking of the elements in the original space. An SVD decomposition of the pairwise mutual information (PMI) matrix is used to perform the embedding, and cosine similarity is used as loss function and to retrieve neighbors. Using the KNN trick offers the possibility to exploit different types of factorization of similarity-based matrices. Canonical correlation analysis is an example that considers both inputs and outputs at the same time (Hotelling, 1936). Other examples considering output embeddings are nuclear norm regularized learning (Amit et al., 2007), label embedding trees (Bengio et al., 2010), or the WSABIE algorithm (Weston et al., 2010). In the presence of side information, like text descriptions, element or class taxonomies, or manually-collected data, a range of approaches are applicable. Akata et al. (2015) provide a comprehensive list. In our study, we assume no side information is available and focus on input/output-based embeddings.

From a more general perspective, reducing the space of (or compressing) neural network models is an active research topic, driven by the need to deploy such models in systems with limited hardware resources. A common approach is to reduce the size of already trained models by some quantization and/or pruning of the connections in dense layers (Courbariaux et al., 2015; Han et al., 2016; Kim et al., 2016). A less frequently used approach is to reduce the model size before training (Chen et al., 2015). These methods typically do not focus on input layers and, to the best of our knowledge, none of them deals with high-dimensional outputs. It is also worth noting that a number of techniques have been proposed to efficiently deal with high-dimensional outputs, specially in the natural language processing domain. The hierarchical softmax approach (Morin & Bengio, 2005) or the more recent adaptive softmax (Grave et al., 2016) are two examples of those. Yet, as mentioned, the focus of these works is on speed, not on space. The work of Vincent et al. (2015) focuses on both aspects of very large sparse outputs but, to the best of our knowledge, cannot be applied to traditional softmax outputs.

{"section_index": "3", "section_name": "3 BLOOM EMBEDDINGS"}

Bloom filters (Bloom, 1970) are a compact probabilistic data structure that is used to represent sets of elements, and to efficiently check whether an element is a member of a set (Mitzenmacher & Upfal, 2005). Since the instances we deal with represent sets of one-hot encoded elements, Bloom filters are an interesting option to embed those in a compact space with good recovery (or checking) guarantees.

In essence, Bloom filters project every element of a set to k different positions of a binary array,
each of which with a range from 1 to m, ideally distributing the projected elements uniformly at random (Mitzenmacher & Upfal, 2005). Proper independent hash functions can be derived using enhanced double hashing or triple hashing (Dillinger & Manolios, 2004). The number of hash functions k is usually a constant, k < m, proportional to the expected number of elements to be projected.

{"section_index": "4", "section_name": "3.2 EMBEDDING AND RECOVERY"}

In the following, we describe the use of Bloom filter techniques in embedding binary high-dimensional instances, and the recovery or mapping to such instances from these embeddings. We denote our approach as Bloom embedding (BE). The idea we pursue is to embed both inputs and outputs and to perform training in the embedding space. To do so, only a probability-based output activation is required, together with a loss function that is appropriate for such activations.

Let x be an input or output instance with dimensionality d, such that x = [x_1, ... x_d], x_i ∈ {0, 1}. We can conveniently (and compactly) represent x as the set z = {z_i}_{i=1}^{c}, z_i ∈ N_{≤d}, where c is the number of non-zero elements and z_i is the position of such elements in x. For every set z, we generate an embedded instance u of dimensionality m < d, such that u = [u_1, ... u_m], u_i ∈ {0, 1}. To do so, we first set all m components of u to 0. Then, iteratively, for every element z_i, i = 1, ... c, and every projection H_j, j = 1, ... k, we assign

u_{H_j(z_i)} = 1.    (1)

To check if an element is in u, one feeds it to the k hash functions H to get k array positions. If any of the bits at these positions is 0, then the element is definitely not in the set. Thus, element checks return no false negatives, meaning that the structure gives an answer with 100% recall (Mitzenmacher & Upfal, 2005). However, if all k bits at the projected positions are 1, then either the element is in the set, or the bits have by chance been set to 1 during the insertion of other set elements. This implies that false positives are possible, due to collisions between projections of different elements (Blustein & El-Maazawi, 2002). The values of m and k can be adjusted to control the probability of such collisions. However, in practice, m is usually constrained by space requirements, and k ≤ 10 is employed, independent of the number of elements to be projected, and giving less than 1% false positive probability (Bonomi et al., 2006).

Notice that, since H_j has a range between 1 and m, k > 1, and m < d, a number of z_i elements may map to the same index of u. Bloom filters mitigate this by properly choosing k independent hash functions H (see above). Notice furthermore that the process has no space requirements, as H is computed on-the-fly. Finally, notice that the embedding of a set z is constant time: the process is O(ck), with c bounded by the maximum number of non-sparse elements in x, c < d, and k being a constant that is set beforehand, k < m < d. In practice, this constant time is dominated by the time spent on H to generate a hash. If we want to be faster than that, and at the same time ensure an optimal (uniform) distribution of the outputs of H, we can decide to compromise part of the available memory to pre-compute a hash matrix storing the projections or hash indices for all the potential elements in z. We can do it by generating vectors h = [h_1, ... h_k] for each z_i, where h_j is a uniformly randomly chosen integer between 1 and m (without replacement). This way, by pre-generating all projections for all d elements, we end up with a d × k matrix H of integers between 1 and m, which we can easily store in random-access memory (RAM), not in the GPU memory.

We now explain how to recover a probability-based ranking of the d elements of x at the output of the model. Assuming a softmax activation is used, we have a probability vector v = [v_1, ... v_m] that, at training time, is compared to the binary embedding u of some ground truth set z (or vector x). We can think of v_i as the probability of being the projection of some element z_t, that is, v_i ≈ P(u_i = 1) ≈ P(H_j(z_t) = i) (see Eq. 1). To unravel the embedding v and map to the d original elements of x, we can understand v as a k-way factorization of every element x_i. Following the idea of Bloom filters, if an element maps to u_i and v_i = 0, then the element is definitely not in the output of the model. Otherwise, if an element maps to u_i and v_i is relatively large, we want the likelihood of that element to reflect that. Specifically, given an element position z_i from x, we can compute the likelihood of z_i as

L(z_i) = \prod_{j=1}^{k} v_{H_j(z_i)}    (2)

and assign outputs x̂_i = L(z_i). Alternatively, if a more numerically-stable output is desired, we can compute the negative log-likelihood

L(z_i) = - \sum_{j=1}^{k} \log( v_{H_j(z_i)} )    (3)

Both operations, when iterated for i = 1, ... d, define a ranking over the elements in x, which is the most common way to define (and evaluate) sparse high-dimensional outputs. One could potentially also recover a probability distribution by re-normalization, but the problems we consider are information retrieval-type of problems (Manning et al., 2008), which are typically seen as ranking problems, such as ranking recommendations based on user preferences (Weimer et al., 2008).
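A minimal NumPy sketch of BE follows, covering the pre-computed hash matrix, the embedding of Equation (1), and the recovery of Equation (3); the uniform sampling without replacement per element follows the description above, while the toy model output v is a stand-in for the network's softmax.

import numpy as np

def make_hash_matrix(d, m, k, seed=0):
    # Pre-computed d x k matrix H of projection indices in [0, m), one row per
    # possible element, drawn uniformly at random without replacement per row.
    rng = np.random.default_rng(seed)
    return np.stack([rng.choice(m, size=k, replace=False) for _ in range(d)])

def embed(z, H, m):
    # Bloom embedding (Eq. 1): set u[H_j(z_i)] = 1 for every element/projection.
    u = np.zeros(m, dtype=np.uint8)
    u[H[z].ravel()] = 1
    return u

def recover_ranking(v, H):
    # Negative log-likelihood recovery (Eq. 3): lower scores rank higher.
    return -np.sum(np.log(v[H] + 1e-12), axis=1)

# Toy example: d = 1000 items, m = 200 embedding bits, k = 4 projections.
d, m, k = 1000, 200, 4
H = make_hash_matrix(d, m, k)
z = np.array([3, 42, 777])        # non-zero positions of a sparse instance x
u = embed(z, H, m)                # what the network is trained to reproduce
v = np.clip(u + 0.01, 0, 1)       # stand-in for a softmax-like output
scores = recover_ranking(v, H)
print(np.argsort(scores)[:5])     # the original items should rank first
                                  # (up to Bloom-style false positives)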
Note that BE, by construction, already offers a number of the aforementioned desired qualities for sparse high-dimensional embeddings (Sec. 1). Specifically, BE is designed for both inputs and outputs, offering a rank-based mapping between the original instances and the embedded vectors. BE yields a more compact representation of the original instance and requires no disk or memory space (at most some marginal RAM space, not GPU memory). In addition, BE can be performed on-the-fly, without training, and in constant time. In the following, we demonstrate the remaining desirable qualities using a comprehensive experimental setup: we show that the accuracy of the model is not compromised given a reasonable embedding dimension (sometimes it even improves), that no changes in the model architecture nor configuration are required, that training times are faster thanks to the reduction of the number of parameters of the model, that evaluation times do not carry much overhead, and that performance is generally better than a number of alternative approaches.

{"section_index": "5", "section_name": "4.1 GENERAL CONSIDERATIONS"}

We demonstrate that BE works under several settings and that it can be applied to multiple tasks. We consider a number of data sets, network architectures, configurations, and evaluation measures. In total, we define 7 different setups, which we summarize in Sec. 4.2 and detail in Appendix A. We also demonstrate that BE is competitive with respect to the available alternatives. To this end, we consider 4 different state-of-the-art approaches, which we overview in Sec. 4.3.
Table 1: Data set statistics after data cleaning and splitting. From left to right: data set name, type of modeled interaction, number of instances n, test split size, instance dimensionality d, median number of non-zero components c, and median density c/d.

Data set | Interaction | n | Split | d | c | c/d
ML | User-Movies | 138,224 | 10,000 | 15,405 | 18 | 1.2e-3
PTB | Sequence-Words | 929,589 | 82,430 | 10,001 | 1 | 1.0e-4
CADE | Words-Category | 40,983 | 13,661 | 193,998 | 17 | 8.8e-5
MSD | User-Songs | 597,155 | 50,000 | 69,989 | 5 | 7.1e-5
AMZ | User-Books | 916,484 | 50,000 | 22,561 | 1 | 4.4e-5
BC | User-Books | 25,816 | 2,500 | 54,069 | 2 | 3.7e-5
YC | Session-Clicks | 1,865,997 | 50,000 | 35,732 | 1 | 2.8e-5

Table 2: Experimental setup and baseline scores. From left to right: data set name, network architecture and optimizer, evaluation measure name, random score S_R, and baseline score S_0.

Data set | Architecture + Optimizer | Evaluation measure | S_R | S_0
ML | Feed-forward + Adam | Mean average precision | 0.003 | 0.160
PTB | LSTM + SGD | Reciprocal rank | 0.001 | 0.342
CADE | Feed-forward + RMSprop | Accuracy (%) | 8.5 | 58.0
MSD | Feed-forward + Adam | Mean average precision | <0.001 | 0.066
AMZ | Feed-forward + Adam | Mean average precision | <0.001 | 0.049
BC | Feed-forward + Adam | Mean average precision | <0.001 | 0.010
YC | GRU + Adagrad | Reciprocal rank | <0.001 | 0.368

Data sets are formed by inputs with n instances, corresponding to either individual instances (or one-hot encoded user profiles) or to sequences of instances (or profile lists). Outputs, also of n instances, correspond to individual instances or to class labels. Instances have an original dimensionality d, corresponding to the cardinality of all possible profile items. Given the nature of the considered problems, instances are very sparse, with all but c elements being different from 0, c < d, typically with c/d in the order of 10⁻⁵ (Table 1).

For each data set, and based on the literature, we select an appropriate baseline neural network architecture. We experiment with both feed-forward (autoencoder-like) and recurrent networks, carefully selecting their parameters and configuration to match (or even improve) the state-of-the-art results. For the sake of comparison, we also choose appropriate and well-known evaluation measures. Depending on the data set, we work with mean average precision, reciprocal ranks, or accuracy (Manning et al., 2008).

Each combination of data set, network architecture, configuration, and evaluation measure defines a task. For every task, we compute a baseline score S_0, corresponding to running the plain neural network model without any embedding. We then report the performance of the i-th combination of training with a particular embedding on a particular task with respect to the baseline score using S_i/S_0. This way, we can compare the performance across different tasks using different evaluation measures, reporting relative improvement/loss with respect to the baseline. Similarly, to compare across different dimensionalities, we report the ratio of embedding dimensionality with respect to the original dimensionality, m/d, and to compare across different training and evaluation times, we report time ratios with respect to the baseline, T_i/T_0.

We now give a brief summary of the 7 considered tasks (Tables 1 and 2). For a more detailed explanation related to data, network architecture, configuration, or evaluation methodology, we refer the reader to Appendix A. Further references can also be found there.
All data sets are publicly available, and for all tasks we use categorical cross-entropy as loss function.

1. Movielens (ML): movie recommendation with the Movielens data set (Harper & Konstan, 2015). We employ a 3-layer feed-forward neural network model and optimize its parameters with Adam. We evaluate the accuracy of the model with mean average precision.
2. Penn treebank (PTB): next word prediction with the Penn treebank data set (Mikolov, 2012). We employ a long short-term memory (LSTM) network and optimize its parameters with stochastic gradient descent (SGD). We evaluate the accuracy of the model with the reciprocal rank of the correct prediction.
3. CADE web directory (CADE): text categorization with the CADE web directory data set (Cardoso-Cachopo, 2007). We employ a 4-layer feed-forward neural network model and optimize its parameters with RMSprop. This is the only considered task where output embeddings are not required (classification into 12 text categories). We use accuracy as evaluation measure.
4. Million song data set (MSD): song recommendation with the Million song data set (Bertin-Mahieux et al., 2011). We employ a 3-layer feed-forward neural network model and optimize its parameters with Adam. We evaluate the accuracy of the model with mean average precision.
5. Amazon book reviews (AMZ): book recommendation with the Amazon book reviews data set (McAuley et al., 2015). We employ a 4-layer feed-forward neural network and optimize its parameters with Adam. We evaluate the accuracy of the model with mean average precision.
6. Book crossing (BC): book recommendation with the book crossing data set (Ziegler et al., 2005). We employ a 4-layer feed-forward neural network and optimize its parameters with Adam. We evaluate the accuracy of the model with mean average precision.
7. YooChoose (YC): session-based recommendation with the YooChoose RecSys15 challenge data set². We employ a gated recurrent unit (GRU) model and optimize its parameters with Adagrad. We evaluate the accuracy of the model with the reciprocal rank.

{"section_index": "6", "section_name": "4.3 ALTERNATIVE APPROACHES"}

To compare the performance of BE with the state-of-the-art, we consider 4 different embedding alternatives. We base our evaluation on performance, measured at a given input/output compression ratio. It is important to note that, in general, besides performance, alternative approaches do not present some of the other desired qualities (Sec. 1) that BE offers, such as on-the-fly operation, constant-time, no supervision, or no network/configuration changes.

1. Hashing trick (HT). We first consider the popular hashing trick for classifier inputs (Langford et al., 2007; Weinberger et al., 2009). In general, these methodologies only focus on inputs and are not designed to deal with any type of output. Nonetheless, in the case of binary outputs, variants like the one used by Ganchev & Dredze (2008) can be adapted to map to the original items using Eqs. 2 or 3. In fact, considering this adaptation for recovery, the approach can be seen as a special case of BE with k = 1.
2. Error-correcting output codes (ECOC). Originally designed for single-class targets (Dietterich & Bakiri, 1995), ECOC can be applied to class sets (inputs and outputs), with its corresponding encoding and decoding strategies (Armano et al., 2012). Yet, in the case of training neural networks, it is not clear which loss function should be used.
The obvious choice would be to use the Hamming distance. However, in pre-analysis, a Hamming loss turned out to be significantly inferior to cross-entropy. Therefore, we use the latter in our experiments. We construct the ECOC matrix with the randomized hill-climbing method of Dietterich & Bakiri (1995).
3. Pairwise mutual information (PMI). Recently, Chollet (2016) has proposed a PMI approach for embedding sets of image labels into a dense space of real-valued vectors. The approach is based on the SVD of a PMI matrix computed from counting pairwise co-occurrences. It uses cosine similarity as the loss function and, at prediction time, it performs KNN (again using cosine similarity) with the projection of individual labels to obtain a ranking.
4. Canonical correlation analysis (CCA). CCA is a common way to learn a joint dense, real-valued embedding for both inputs and outputs at the same time (Hotelling, 1936). CCA can be computed using SVD on a correlation matrix (Hsu et al., 2012) and, similarly to PMI, we can use the KNN trick to rank elements or labels at prediction time. Correlation is now the metric of choice, both for the loss function and for determining the neighbors.

Figure 1: Score ratios S_i/S_0 as a function of dimensionality ratio m/d, using k = 4. Qualitatively similar plots are observed for other values of k. [Axes: m/d from 0 to 1 vs. S_i/S_0; curves: ML, PTB, CADE, MSD, AMZ, BC, YC, Baseline.]

{"section_index": "7", "section_name": "5 RESULTS"}

We start by reporting on the performance of BE. First of all, we focus on performance as a function of the embedding dimension. As mentioned, to facilitate comparisons, we report in relative terms, using score ratios S_i/S_0 and dimensionality ratios m/d. When plotting the former as a function of the latter, we see several things that are worth noting (Fig. 1). Firstly, we observe that, for most of the tasks, score ratios approach 1 as m approaches d. This indicates that the introduction of BE does not degrade the original score of the Baseline when the embedding dimension m is comparable to the original dimension d. Secondly, we observe that the lower the dimensionality ratio, the lower the score ratio. This is to be expected, as one cannot embed sets of elements with their intrinsic dimensionality to an infinitesimally small m. Importantly, the reduction of S_i/S_0 should not be linear with m/d, but should maximize S_i for low m (thus getting curves close to the top left corner of Fig. 1). We see that BE fulfills this requirement. In general, we can reduce the size of inputs and outputs 5 times (m/d = 0.2) and still maintain more than 92% of the value of the original score. The ML task is the only exception, which we think is due to the abnormally high density of the data (Table 1), inhibiting the embedding to low dimensions³. CADE is the task for which BE achieves the highest S_i for low m. Presumably, the CADE task is the easiest one we consider, as only input embeddings are required.

³ Note that the ML data is essentially collected through a survey-type method (Harper & Konstan, 2015).

An additional observation is worth noting (Fig. 1). Interestingly, we find that BE can improve the scores over the Baseline for a number of tasks. That is the case for 3 out of the 7 considered tasks: MSD with m/d ≥ 0.3, AMZ with m/d ≥ 0.2, and BC with 0.3 ≤ m/d ≤ 0.6. The fact that an embedding performs better than the original Baseline has been also observed in some other methods
Figure 1: Score ratios S_t/S_0 as a function of dimensionality ratio m/d using k = 4. Qualitatively similar plots are observed for other values of k. (Plot omitted; one curve per task plus the Baseline.)
"}, {"section_index": "7", "section_name": "5 RESULTS", "section_text": "We start by reporting on the performance of BE. First of all, we focus on performance as a function of the embedding dimension. As mentioned, to facilitate comparisons, we report in relative terms, using score ratios S_t/S_0 and dimensionality ratios m/d. When plotting the former as a function of the latter, we see several things that are worth noting (Fig. 1). Firstly, we observe that, for most of the tasks, score ratios approach 1 as m approaches d. This indicates that the introduction of BE does not degrade the original score of the Baseline when the embedding dimension m is comparable to the original dimension d. Secondly, we observe that the lower the dimensionality ratio, the lower the score ratio. This is to be expected, as one cannot embed sets of elements with their intrinsic dimensionality to an infinitesimally small m. Importantly, the reduction of S_t/S_0 should not be linear with m/d, but should maximize S_t for low m (thus getting curves close to the top left corner of Fig. 1). We see that BE fulfills this requirement. In general, we can reduce the size of inputs and outputs 5 times (m/d = 0.2) and still maintain more than 92% of the value of the original score. The ML task is the only exception, which we think is due to the abnormally high density of the data (Table 1), inhibiting the embedding to low dimensions³. CADE is the task for which BE achieves the highest S_t for low m. Presumably, the CADE task is the easiest one we consider, as only input embeddings are required.

An additional observation is worth noting (Fig. 1). Interestingly, we find that BE can improve the scores over the Baseline for a number of tasks. That is the case for 3 out of the 7 considered tasks: MSD with m/d > 0.3, AMZ with m/d > 0.2, and BC with 0.3 < m/d < 0.6. The fact that an embedding performs better than the original Baseline has also been observed in some other methods for specific data sets (Weston et al., 2002; Langford et al., 2007; Chollet, 2016). For instance, Chollet (2016) has reported increases of up to 7% using the PMI approach on the so-called JFT data set. Here, depending on the task and the embedding dimension, relative increases go from 1 to 12%. Given that the data sets where we observe these increases are some of the less dense ones (Table 1), we hypothesize that, in the case of BE, such increases come from having k times more active elements in the ground truth output (recall that one output element is projected k times using k independent hash functions, Sec. 3.2). With k times more elements set to 1 in the output, a better estimation of the gradient may be computed (larger errors that propagate back to the rest of the network).

³ Note that the ML data is essentially collected through a survey-type method (Harper & Konstan, 2015).

We now focus on performance as a function of the number of projections k, reporting score ratios S_t/S_0 as above (Fig. 2). From repeating the plots for different values of m/d, we observe that S_t/S_0 is always low for k = 1 (Fig. 2, left), except when m approaches d, where we have an almost flat behavior (Fig. 2, right). In general, S_t/S_0 jumps up for k >= 2 and remains stable until k ~ 10, where the decrease of S_t/S_0 becomes more apparent (Fig. 2, left). The best operating range typically corresponds to 2 <= k <= 4. The ML task is again an exception, with a best operating range around 7 <= k <= 10.

Figure 2: Score ratios S_t/S_0 as a function of the number of hash functions k, using m/d = 0.3 (left) and m/d = 1 (right). (Plot omitted.)

Finally, we compare the performance of BE to that of the considered alternative methods. We do so by establishing a dimensionality ratio m/d and computing the corresponding score ratio S_t/S_0 for a given task (Table 3). We see that BE is better than the alternative methods in 5 out of the 7 tasks (10 out of the 14 considered test points). PMI is better in one of the tasks (CADE) and CCA is better also in one of the tasks (AMZ). It is relevant to note that, when BE wins, it always does so by a relatively large margin (see, for instance, the ML or YC tasks). Otherwise, when an alternative approach wins, it generally does so by a smaller margin (see, for instance, the AMZ task). These results become more relevant if we realize that PMI and CCA are both SVD-based approaches, introducing a separate degree of supervised learning to the task by exploiting pairwise element co-occurrences and correlations, respectively (Sec. 4.3). In contrast, BE does not require any learning. We formulate a co-occurrence-based version of BE in Appendix B, which achieves moderate performance increments over BE and more closely approaches the performance of PMI and CCA on the two tasks where BE was not already performing best. To conclude, a further interesting thing to note is that we confirm the small variation in the score ratios obtained for 2 <= k <= 10 (Fig. 2). Here, score ratios for 3 <= k <= 5 are often comparable in a statistical significance sense (Table 3).

Table 3: Comparison of BE with the considered alternatives. Score ratios S_t/S_0 for different combinations of data set and compression ratio m/d. Best results are highlighted in bold, up to statistical significance (Mann-Whitney U, p > 0.05).

Data set  m/d   | HT     ECOC   PMI    CCA    | BE k=3  BE k=4  BE k=5
ML        0.2   | 0.234  0.342  0.043  0.209  | 0.750   0.770   0.722
ML        0.3   | 0.285  0.208  0.045  0.200  | 0.796   0.813   0.815
PTB       0.2   | 0.357  0.453  0.837  0.638  | 0.919   0.908   0.881
PTB       0.4   | 0.528  0.454  0.836  0.695  | 0.942   0.920   0.902
CADE      0.01  | 0.857  0.359  0.984  0.928  | 0.862   0.853   0.855
CADE      0.03  | 0.914  0.363  1.002  0.950  | 0.914   0.925   0.926
MSD       0.05  | 0.078  0.268  0.216  0.679  | 0.695   0.738   0.738
MSD       0.1   | 0.151  0.310  0.321  0.740  | 0.835   0.841   0.832
AMZ       0.1   | 0.166  0.182  0.851  1.030  | 0.864   0.881   0.861
AMZ       0.2   | 0.289  0.185  0.995  1.048  | 1.016   1.029   1.008
BC        0.05  | 0.189  0.817  0.022  0.313  | 0.777   0.750   0.837
BC        0.1   | 0.199  0.886  0.025  0.465  | 0.965   0.919   0.831
YC        0.03  | 0.150  0.076  0.776  0.466  | 0.841   0.858   0.858
YC        0.05  | 0.240  0.083  0.777  0.517  | 0.919   0.910   0.928

Besides performance scores, it is interesting to assess whether the reduction of input and output dimensions has an effect on training and evaluation times. To this end, we plot the time ratios T_t/T_0 as a function of the dimensionality ratio m/d (Fig. 3). Regarding training times, we basically observe a linear decrease with m/d (Fig. 3, left). ML is an exception to the trend, and CADE and AMZ experience almost no decrease for very low dimensionality ratios m/d < 0.2. In general, we confirm faster training times thanks to the reduction of the number of parameters of the model, dominated by input/output matrices (the output dimension also affecting the time to compute the loss function). We obtain a 2 times speedup for a 2 times input/output compression and, roughly, a little over 3 times speedup for a 5 times input/output compression. Regarding evaluation times, we also observe a linear trend (Fig. 3, right). However, this time, T_t/T_0 is not as low, with values slightly above 1 but always below 1.5 (with the exception of CADE for m/d > 0.6). Overall, this indicates that, compared to the Baseline evaluation time, the mapping used by BE when reconstructing the output does not introduce an overwhelming amount of extra computation time. With the exception of ML, extra computation time is below 20% for m/d < 0.5.

Figure 3: Time ratios T_t/T_0 as a function of dimensionality ratios m/d with k = 4: training time (left) and evaluation time (right). Qualitatively similar plots are observed for other values of k. Bl. denotes Baseline. (Plot omitted.)

We have proposed the use of Bloom embeddings to represent sparse high-dimensional binary-coded inputs and outputs. We have shown that a compact representation can be obtained without compromising the performance of the original neural network model or, in some cases, even increasing it by a substantial factor. Due to the compact representation, the loss function and the input and output layers deal with fewer parameters, which results in faster training times. The approach compares favorably with respect to the considered alternatives, and offers a number of further advantages such as on-the-fly operation or zero space requirements, all this without introducing changes to the core network architecture, task configuration, or loss function.

In the future, besides continuing to exploit co-occurrences (Appendix B), one could extend the proposed approach by considering further extensions of Bloom filters such as counting Bloom filters (Bonomi et al., 2006). In theory, those extensions could provide a more compact representation by breaking the binary nature of the embedding. However, they could require the modification of the loss function or the mapping process (Eqs. 2 and 3). A faster mapping process using the sorted probabilities of v could also be studied.
"}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank the curators of the data sets used in this study for making them publicly available. We also thank Santi Pascual for his comments on a previous version of the paper."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label-embedding for image classification. IEEE Trans. on Pattern Analysis and Machine Intelligence, 38(7):1425-1438, 2015.
Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering shared structures in multiclass classification. In Proc. of the Int. Conf. on Machine Learning (ICML), pp. 17-24, 2007.
G. Armano, C. Chira, and N. Hatami. Error-correcting output codes for multi-label text categorization. In Proc. of the Italian Information Retrieval Conf. (IIR), pp. 26-37, 2012.
S. Bengio, J. Weston, and D. Grangier. Label embedding trees for large multi-class tasks. In Advances in Neural Information Processing Systems (NIPS), volume 23, pp. 163-171, 2010.
Y. Bengio, R. Ducharme, and P. Vincent. A neural probabilistic language model. In Advances in Neural Information Processing Systems (NIPS), volume 13, pp. 932-938, 2000.
F. Bonomi, M. Mitzenmacher, R. Panigrahy, S. Singh, and G. Varghese. An improved construction for counting Bloom filters. In European Symposium on Algorithms (ESA), volume 4168 of Lecture Notes in Computer Science, pp. 684-695. Springer-Verlag, Berlin, Germany, 2006.
A. Cardoso-Cachopo. Improving methods for single-label text categorization. PhD thesis, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, 2007.
W. Chen, J. Wilson, S. Tyree, K. Weinberger, and Y. Chen. Compressing neural networks with the hashing trick. In Proc. of the Int. Conf. on Machine Learning (ICML), pp. 2285-2294, 2015.
H.-T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. Hong, V. Jain, X. Liu, and H. Shah. Wide & deep learning for recommender systems. In Proc. of the Workshop on Deep Learning for Recommender Systems (DLRS), pp. 7-10, 2016.
K. Cho, B. Van Merrienboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: encoder-decoder approaches. In Proc. of the Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST), pp. 103-111, 2014.
F. Chollet. Information-theoretic label embeddings for large-scale image classification. ArXiv:1607.05691, 2016.
M. Cisse, N. Usunier, T. Artieres, and P. Gallinari. Robust Bloom filters for large multilabel classification tasks. In Advances in Neural Information Processing Systems (NIPS), pp. 1851-1859, 2013.
M. Courbariaux, Y. Bengio, and J.-P. David. BinaryConnect: training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems (NIPS), pp. 3123-3131, 2015.
T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263-286, 1995.
P. C. Dillinger and P. Manolios. Bloom filters in probabilistic verification. In Proc. of the Int. Conf. on Formal Methods in Computer-Aided Design (FMCAD), pp. 367-381, 2004.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
K. Ganchev and M. Dredze. Small statistical models by random feature mixing. In ACL Workshop on Mobile Language Processing (MLP), pp. 19-20, 2008.
X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proc. of the Int. Conf. on Artificial Intelligence and Statistics (AISTATS), pp. 249-256, 2010.
X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In Proc. of the Int. Conf. on Artificial Intelligence and Statistics (AISTATS), pp. 315-323, 2011.
E. Grave, A. Joulin, M. Cisse, D. Grangier, and H. Jegou. Efficient softmax approximation for GPUs. ArXiv:1609.04309, 2016.
A. Graves. Generating sequences with recurrent neural networks. ArXiv:1308.0850, 2013.
S. Han, H. Mao, and W. J. Dally. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. In Proc. of the Int. Conf. on Learning Representations (ICLR), 2016.
F. M. Harper and J. K. Konstan. The MovieLens datasets: history and context. ACM Trans. on Interactive Intelligent Systems, 5(4):19, 2015.
B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk. Session-based recommendations with recurrent neural networks. In Proc. of the Int. Conf. on Learning Representations (ICLR), 2016. URL https://arxiv.org/abs/1511.06939
S. Hochreiter and J. Schmidhuber. Long short-term memory networks. Neural Computation, 9(8):1735-1780, 1997.
H. Hotelling. Relations between two sets of variates. Biometrika, 28(3-4):321-377, 1936.
D. J. Hsu, S. M. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In Advances in Neural Information Processing Systems (NIPS), volume 22, pp. 772-780, 2009.
Y.-D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. In Proc. of the Int. Conf. on Learning Representations (ICLR), 2016. URL https://arxiv.org/abs/1511.06530
D. P. Kingma and J. L. Ba. Adam: a method for stochastic optimization. In Proc. of the Int. Conf. on Learning Representations (ICLR), 2015. URL https://arxiv.org/abs/1412.6980
J. Langford, L. Li, and A. Strehl. Vowpal wabbit online learning project. Technical report, 2007. URL http://hunch.net/?p=309
C. D. Manning, P. Raghavan, and H. Schutze. Introduction to information retrieval. Cambridge University Press, Cambridge, UK, 2008.
M. P. Marcus, B. Santorini, and M. A. Marcinkiewich. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics, 19(2):313-330, 1993.
M. Mitzenmacher and E. Upfal. Probability and computing: randomized algorithms and probabilistic analysis. Cambridge University Press, Cambridge, UK, 2005.
F. Morin and Y. Bengio. Hierarchical probabilistic neural network language model. In Proc. of the Int. Workshop on Artificial Intelligence and Statistics (AISTATS), pp. 246-252, 2005.
Q. Shi, J. Petterson, G. Dror, J. Langford, A. Smola, and S. V. N. Vishwanathan. Hash kernels for structured data. Journal of Machine Learning Research, 10:2615-2637, 2009.
F. Strub, R. Gaudel, and J. Mary. Hybrid recommender system based on autoencoders. In Proc. of the Workshop on Deep Learning for Recommender Systems (DLRS), pp. 11-16, 2016.
T. Tieleman and G. Hinton. Lecture 6.5-RMSprop: divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4, 2, 2012.
J. Turian, L. Ratinov, and Y. Bengio. Word representations: a simple and general method for semi-supervised learning. In Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL), pp. 384-394, 2010.
P. Vincent, A. Brebisson, and X. Bouthilier. Efficient exact gradient update for training deep networks with very large sparse targets. In Advances in Neural Information Processing Systems (NIPS), pp. 1108-1116, 2015.
M. Weimer, A. Karatzoglou, Q. V. Le, and A. J. Smola. COFI RANK - maximum margin matrix factorization for collaborative ranking. In Advances in Neural Information Processing Systems (NIPS), volume 20, pp. 1593-1600, 2008.
K. Weinberger, A. Dasgupta, J. Attenberg, J. Langford, and A. Smola. Feature hashing for large scale multitask learning. In Proc. of the Int. Conf. on Machine Learning (ICML), pp. 1113-1120, 2009.
J. Weston, O. Chapelle, A. Elisseeff, B. Scholkopf, and V. Vapnik. Kernel dependency estimation. In Advances in Neural Information Processing Systems (NIPS), volume 15, pp. 873-880, 2002.
J. Weston, S. Bengio, and N. Usunier. Large scale image annotation: learning to rank with joint word-image embeddings. Machine Learning, 81(1):21-35, 2010.
Y. Wu, C. DuBois, A. X. Zheng, and M. Ester. Collaborative denoising auto-encoders for top-n recommender systems. In Proc. of the ACM Int. Conf. on Web Search and Data Mining (WSDM), pp. 153-162, 2016.
C.-N. Ziegler, S. M. McNee, J. A. Konstan, and G. Lausen. Improving recommendation lists through topic diversification. In Proc. of the Int. World Wide Web Conf. (WWW), pp. 22-32, 2005.

We first consider the task of movie recommendation with the Movielens 20M data set⁴ (Harper & Konstan, 2015). This data set comprises 20 million ratings applied to roughly 27,000 movies by over 138,000 users. To recommend movies that users would like, ratings, originally between 0.5 and 5 stars, were discretized with a threshold of 3.5. Then, movies with less than 5 ratings were removed, resulting in a total of 15,405 movies. User profiles were next built using a chronologically-ordered list of liked movies. We removed users with less than 2 movies and limited profiles to a maximum of 2,000 movies (less than 0.1% fulfilled this condition). Inputs and outputs were built by splitting user profiles uniformly at random, ensuring a minimum of one movie in both input and output. Finally, 10,000 random users were taken out for validation and another 10,000 for testing. The ML data set is the most dense data set we consider, with a median of 18 movies in input/output profiles (Table 1).

To perform recommendations with the ML data set, we build on top of Wu et al. (2016) and consider a 3-layer feed-forward neural network with a softmax output and 150 rectified linear units (Glorot et al., 2011) in the hidden layers. We initialize the weights with uniform random numbers, weighted by the input and output dimensionality of the layer (Glorot & Bengio, 2010). We optimize the weights of the network using cross-entropy and Adam (Kingma & Ba, 2015), with a learning rate of 0.001 and parameters beta1 = 0.9 and beta2 = 0.999. Training is performed for 15 epochs and with batches of 32 instances. If no improvement is seen on the validation set after one epoch, the learning rate is divided by 5. As done with all the other tasks, we make sure that the network architecture and the number of epochs are sufficient to achieve a state-of-the-art result. As the output probabilities define a ranking of movies that the user may like, the accuracy of the result is measured with mean average precision (Manning et al., 2008). The obtained baseline score S_0 = 0.160 can be considered a state-of-the-art result (Wu et al., 2016). Performing movie rankings at random yields a score S_R = 0.003.

⁴ http://grouplens.org/datasets/movielens/20m/
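For reference, a minimal sketch of one possible reading of the ML model above (two hidden layers of 150 ReLUs between input and output; the multi-hot target handling and all names are our illustrative choices, not taken from the original code):

```python
import torch
import torch.nn as nn

d = 15405                                  # movies kept after filtering (Sec. A.1)

model = nn.Sequential(
    nn.Linear(d, 150), nn.ReLU(),          # hidden layers of 150 rectified linear units
    nn.Linear(150, 150), nn.ReLU(),
    nn.Linear(150, d),                     # softmax is folded into the loss below
)
for m in model:
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)  # uniform init weighted by layer fan-in/out

opt = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

def loss(logits, target_profile):
    # categorical cross-entropy against the held-out half of the user profile,
    # encoded as a (normalized) multi-hot vector
    logp = torch.log_softmax(logits, dim=1)
    p = target_profile / target_profile.sum(dim=1, keepdim=True)
    return -(p * logp).sum(dim=1).mean()
```

With BE, the input and output dimension d above is simply replaced by the embedding dimension m.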
"}, {"section_index": "10", "section_name": "A.2 PENN TREEBANK (PTB)", "section_text": "Another task we consider is next-word prediction with the Penn treebank data set (Marcus et al., 1993). We employ the data made available by Mikolov (2012), which contains close to 1 million words and defines validation and test splits of roughly 74,000 and 82,000 words, respectively. The vocabulary is limited to 10,000 words, with all other words mapped to an 'unknown' token (Table 1). We consider the end of the sentence as an additional token and form input sequences of length 10.

Inspired by Graves (2013), we perform next-word prediction with an LSTM network (Hochreiter & Schmidhuber, 1997). We set the inner dimensionality to 250 and train the network with SGD. We use a learning rate of 0.25, a momentum of 0.99, and clip gradients to have a maximum norm of 1 (Graves, 2013). We use batches of 128 instances and train the model for 10 epochs. As for the rest, we proceed as with the ML task. We evaluate the result using the reciprocal rank of the correct prediction (Manning et al., 2008). We achieve a performance of S_0 = 0.342, which indicates that, on average, the correct word is ranked on the third position. Predicting words at random yields a score S_R = 0.001.
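A minimal sketch of the PTB setup above follows. The embedding layer is our assumption (the text describes one-hot encoded inputs), and all names are illustrative:

```python
import torch
import torch.nn as nn

class NextWordLSTM(nn.Module):
    # LSTM next-word predictor with inner dimensionality 250 (Sec. A.2)
    def __init__(self, vocab=10000, dim=250):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens):                 # tokens: (batch, 10) word ids
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h[:, -1])              # logits for the next word

model = NextWordLSTM()
opt = torch.optim.SGD(model.parameters(), lr=0.25, momentum=0.99)
# before each update, clip gradients to a maximum norm of 1:
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
```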
"}, {"section_index": "11", "section_name": "A.3 CADE WEB DIRECTORY (CADE)", "section_text": "We perform single-label text categorization using web pages classified by human experts from the CADE web directory of Brazilian web pages⁵ (Cardoso-Cachopo, 2007). The data set contains around 40,000 documents assigned to one of 12 categories such as services, education, health, or culture. We use the train and test splits provided by Cardoso-Cachopo (2007), further splitting the train set randomly to obtain a validation set from it. Validation and test splits comprise 5,000 and 13,661 documents, respectively. The size of the vocabulary is close to 200,000 words, with a median number of 17 words per document (Table 1).

To perform classification we use a 4-layer feed-forward neural network with a softmax output. The number of units is, from input to output, 400, 200, 100, and 12, and we use rectified linear units as activations for the hidden layers. We train the network for 10 epochs, using batches of 32 instances and RMSprop (Tieleman & Hinton, 2012) with a learning rate of 0.0002 and exponential decay of 0.9. As for the rest, we proceed as with the ML task. We obtain a baseline accuracy of S_0 = 58.0%, slightly superior to the best baseline reported by Cardoso-Cachopo (2007), and a random accuracy of S_R = 8.5% (Table 2). Notice that this is the only data set that does not have a sparse instance or user profile as output.

⁵ http://ana.cachopo.org/datasets-for-single-label-text-categorizati

"}, {"section_index": "12", "section_name": "A.4 MILLION SONG DATA SET (MSD)", "section_text": "The next task we consider is song recommendation with the million song data set (Bertin-Mahieux et al., 2011). We take the Echo Nest taste profile subset⁶, which includes over 48 million play counts of around 384,000 songs for roughly 1 million users. We assume that a user likes a song when the user has listened to it a minimum of 3 times. We then remove the songs that appear less than 20 times and build user profiles with a minimum of 5 songs. We split the data set as with the ML task, keeping 50,000 user profiles for validation and another 50,000 for testing. The MSD data set has a median of 5 songs in input/output profiles (Table 1).

To recommend future listens to the user we use a 3-layer feed-forward neural network with a softmax output and 300 rectified linear units in the hidden layers. We fit the model for 10 epochs with batches of 64 instances. As for the rest, we proceed as with the ML task. We obtain a baseline mean average precision of S_0 = 0.066 and a random score of S_R below 0.001.

⁶ http://labrosa.ee.columbia.edu/millionsong/tasteprofile

"}, {"section_index": "13", "section_name": "A.5 AMAZON BOOK REVIEWS (AMZ)", "section_text": "We also consider book recommendations with the Amazon book reviews data set⁷ (McAuley et al., 2015). The data set originally contains 22 million ratings of over 2 million books by approximately 3 million users. We proceed as with the ML data set, but this time setting the minimum number of ratings per book to 100 and splitting the data with 50,000 instances for validation and another 50,000 instances for testing.

We here use a 4-layer feed-forward neural network with a softmax output and 300 rectified linear units in the hidden layers. We fit the model for 10 epochs with batches of 64 instances and, as for the rest, we proceed as with the ML task. We obtain a baseline mean average precision of S_0 = 0.049 and a random score of S_R below 0.001.

⁷ http://jmcauley.ucsd.edu/data/amazon/

"}, {"section_index": "14", "section_name": "A.6 BOOK CROSSING (BC)", "section_text": "Continuing with book recommendations, we consider the book crossing data set⁸ (Ziegler et al., 2005). It contains 278,000 users providing over 1 million ratings about a little more than 271,000 books. We remove books with less than 2 ratings, discretize those by a threshold of 4, and proceed as with the ML data set, but keeping 2,500 users for validation and another 2,500 for testing. The BC data set is known to be a very sparse data set, especially after removing users with less than 2 book reviews (Table 1).

To perform recommendations we use the same architecture and configuration as with the MSD task, but this time we use 250 units in the hidden layers. We obtain a baseline mean average precision of S_0 = 0.010 and a random score of S_R below 0.001.

⁸ http://www2.informatik.uni-freiburg.de/~cziegler/Bx/

"}, {"section_index": "15", "section_name": "A.7 YOOCHOOSE (YC)", "section_text": "We finally study session-based recommendations using the YooChoose RecSys15 challenge² data. Here, the task is to predict the next click given a sequence of click events for a given session in an e-commerce site (Hidasi et al., 2016). We work with the training set of the challenge and keep only the click events. We take the first 2 million sessions of the data set which have a minimum of 2 clicks, and keep apart 50,000 for validation and another 50,000 for testing. We form sequences of, at most, 13 clicks to the 35,000 possible links (Table 1). Note that this is a sequential data set with one-hot encoded instances of only one event each.

To predict the next click we proceed as in Hidasi et al. (2016) and consider a GRU model (Cho et al., 2014). We set the inner dimensionality to 100 and train the network with Adagrad (Duchi et al., 2011), using a learning rate of 0.01. We use batches of 64 instances and train the model for 10 epochs. As for the rest, we proceed as with the ML task. As with PTB, we evaluate the result using the reciprocal rank of the correct prediction. We achieve a performance of S_0 = 0.368, which can be assumed to be as good as state-of-the-art models on this data (Hidasi et al., 2016). Predicting clicks at random yields a score S_R below 0.001.

² http://recsys.yoochoose.net/challenge.html
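A minimal sketch of the YC setup above, analogous to the PTB one (again, the embedding layer and all names are our illustrative assumptions):

```python
import torch
import torch.nn as nn

class NextClickGRU(nn.Module):
    # GRU session model with inner dimensionality 100 (Sec. A.7)
    def __init__(self, n_items=35000, dim=100):
        super().__init__()
        self.emb = nn.Embedding(n_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_items)

    def forward(self, clicks):                 # clicks: (batch, <=13) item ids
        h, _ = self.gru(self.emb(clicks))
        return self.out(h[:, -1])              # logits over possible next clicks

model = NextClickGRU()
opt = torch.optim.Adagrad(model.parameters(), lr=0.01)
```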
In Bloom filters and BE, collisions are unavoidable due to the lower embedding dimensionality and the use of multiple projections (Sec. 3). In addition, we have seen that alternative approaches produce embeddings by exploiting co-occurrence information (Secs. 2 and 4.3). Here, we study a variant of BE that takes advantage of co-occurrence information to adjust the collisions that will inevitably take place when performing the embedding. We denote this approach by co-occurrence-based Bloom embedding (CBE).

What we propose is a quite straightforward approach to CBE, which does not add much extra pre-computation time. Training and testing times remain the same, as CBE uses a pre-computed hashing matrix H (Sec. 3.2). The general idea of the proposed approach is to 're-direct' the collisions of the co-occurring elements to the same bits or positions of u. Our implementation of this idea is detailed in Algorithm 1 and briefly explained below.

Input: Input and/or output instances X (n x d sparse binary matrix), embedding dimensionality m, number of projections k, and pre-computed hashing matrix H (d x k integer matrix). Output: Co-occurrence-based hashing matrix H'.

First, we count pairwise co-occurrences and store them in a sparse matrix C (line 1). Next, we threshold C by the average element frequency in X, using the Hadamard product and a component-wise sign function (line 2). We then get the lower triangular part of C and return it in coordinates format, that is, using a tuple of values, row indices, and column indices (line 3). We will use the order in c_VAL to update the hash matrix H. To do so, we first loop over the indices of the sorted values of c_VAL in increasing order (line 4). After selecting the corresponding elements a and b (line 5), we then draw integers from URND (lines 6-8). The function URND(x, y, z) is a uniform random integer generator between x and y (both included) such that the output integer is not included in the set z, that is, URND(x, y, z) is never in z. Rows a and b of H are transformed to sets h_a and h_b and their union is computed (line 6). Finally, we use the integers generated by URND to pick projections j_a and j_b from H, and assign them the same bit r (line 9). By updating the projections in H in increasing order of co-occurrence (line 4), we give priority to the pairs with largest co-occurrence, setting them to collide to the same bit r (line 9).
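Since Algorithm 1 itself is not reproduced in this excerpt, the following Python sketch only follows the prose description above; the exact thresholding rule and the precise arguments passed to URND are our assumptions, as are all names.

```python
import numpy as np
import scipy.sparse as sp

def cbe_hash_matrix(X, H, m, seed=0):
    # X: (n, d) sparse binary instances; H: (d, k) pre-computed hash matrix
    rng = np.random.RandomState(seed)
    H = H.copy()
    d, k = H.shape
    C = (X.T).dot(X).tocoo()                 # pairwise co-occurrence counts
    thresh = X.sum() / d                     # average element frequency (our reading)
    keep = (C.row > C.col) & (C.data > thresh)   # thresholded lower triangle, COO
    rows, cols, vals = C.row[keep], C.col[keep], C.data[keep]
    for idx in np.argsort(vals):             # increasing order: strongest pairs last
        a, b = rows[idx], cols[idx]
        used = set(H[a]) | set(H[b])         # h_a union h_b
        r = rng.randint(0, m)                # URND stand-in: redraw until bit unused
        while r in used and len(used) < m:
            r = rng.randint(0, m)
        ja, jb = rng.randint(0, k), rng.randint(0, k)
        H[a, ja] = r                         # re-direct one projection of each element
        H[b, jb] = r                         # so that the pair collides on bit r
    return H
```

Because later updates overwrite earlier ones, processing pairs in increasing order of co-occurrence leaves the most frequent pairs with the final say, as the text describes.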
"}, {"section_index": "16", "section_name": "B.2 CBE RESULTS", "section_text": "Overall, the performance of CBE only provides moderate increments over the original BE approach (Fig. 4). With the exception of the BC task, the performance of CBE is always higher than the one of BE. However, with the exception of the AMZ task, we do not observe dramatic increases of CBE over BE. On average, such increases are between 0.4% and 8.4% (Table 4, right). One possible explanation for these moderate performance increases is the low co-occurrence in the considered data (Table 4, left). As can be seen, typically less than 3% of all possible pairs show a co-occurrence. Moreover, the average co-occurrence count of such co-occurring pairs is very low, with ratios p to the total number of instances n in the order of 10⁻⁵ or 10⁻⁶.

Despite being moderate on average, we observed that the increments provided by CBE were more prominent for low dimensionality ratios m/d. By relating CBE with the best approaches resulting from the comparison of BE with the alternatives, we see that CBE is generally better than BE, sometimes with a statistically significant difference (Table 5). Furthermore, we see that CBE, being based on co-occurrences, more closely approaches PMI and CCA in the tasks where those were performing best, and even outperforms them in one test point (AMZ, m/d = 0.2; compare also with Table 3). Being closer to those co-occurrence-based approaches is an indication that CBE leverages co-occurrence information to some extent.

Table 4: Co-occurrence statistics and average score increase of CBE over BE. From left to right: data set name, input percent of co-occurrent pairs, input average co-occurrence ratio p of co-occurrent pairs, output percent of co-occurrent pairs, output average co-occurrence ratio p of co-occurrent pairs, and average score increases of CBE over BE (%, calculated using 100(S_t' - S_t)/S_0 and averaging over all m/d points). Co-occurrence values for PTB and YC inputs correspond to considering training sequences, not isolated sequence elements.

Data set | Input (%)  Input (p)  | Output (%)  Output (p) | k=3   k=4
ML       | 25.2       1.3·10⁻⁴   | 32.9        1.0·10⁻⁴   | +0.9  +1.7
PTB      | 3.3        2.4·10⁻⁵   | 0           0          | +0.1  +0.9
CADE     | 1.3        8.8·10⁻⁵   | N/A         N/A        | 0.4   0.1
MSD      | 1.3        3.0·10⁻⁶   | 1.3         3.1·10⁻⁶   | +0.5  +1.5
AMZ      | 3.0        1.8·10⁻⁶   | 3.0         1.8·10⁻⁶   | +6.6  +8.4
BC       | 0.8        4.9·10⁻⁵   | 0.4         4.9·10⁻⁵   | -3.4  1.0
YC       | 0.2        1.5·10⁻⁶   | 0           0          | +0.4  +0.3

Figure 4: Comparison of score ratios S_t/S_0 as a function of dimensionality ratio m/d for BE (dashed lines) and CBE (solid lines) using k = 4. Qualitatively similar plots are observed for other values of k. (Plot omitted; one panel per task plus the Baseline.)

Table 5: Comparison of CBE versus the results in Table 3. Score ratios S_t'/S_0 for different combinations of data set and compression ratio m/d. Best results are highlighted in bold, up to statistical significance (Mann-Whitney U, p > 0.05).

Data set  m/d   | Best so far (method, S_t/S_0) | CBE k=3  CBE k=4
ML        0.2   | BE    0.770                   | 0.760    0.781
ML        0.3   | BE    0.815                   | 0.812    0.867
PTB       0.2   | BE    0.919                   | 0.915    0.907
PTB       0.4   | BE    0.942                   | 0.937    0.922
CADE      0.01  | PMI   0.984                   | 0.854    0.853
CADE      0.03  | PMI   1.002                   | 0.921    0.922
MSD       0.05  | BE    0.738                   | 0.759    0.756
MSD       0.1   | BE    0.841                   | 0.856    0.873
AMZ       0.1   | CCA   1.030                   | 0.994    0.991
AMZ       0.2   | CCA   1.048                   | 1.109    1.117
BC        0.05  | BE    0.837                   | 0.774    0.808
BC        0.1   | BE    0.965                   | 0.880    0.878
YC        0.03  | BE    0.858                   | 0.871    0.880
YC        0.05  | BE    0.928                   | 0.933    0.936"}]
S1X7nhsxl | [{"section_index": "0", "section_name": "IMPROVING GENERATIVE ADVERSARIAL NETWORKS WITH DENOISING FEATURE MATCHING", "section_text": "David Warde-Farley & Yoshua Bengio*
Montreal Institute for Learning Algorithms, *CIFAR Senior Fellow
Universite de Montreal, Montreal, Quebec, Canada
{david.warde-farley, yoshua.bengio}@umontreal.ca

We propose an augmented training procedure for generative adversarial networks designed to address shortcomings of the original by directing the generator towards probable configurations of abstract discriminator features. We estimate and track the distribution of these features, as computed from data, with a denoising auto-encoder, and use it to propose high-level targets for the generator. We combine this new loss with the original and evaluate the hybrid criterion on the task of unsupervised image synthesis from datasets comprising a diverse set of visual categories, noting a qualitative and quantitative improvement in the "objectness" of the resulting samples.

Generative adversarial networks (Goodfellow et al., 2014a) (GANs) have become well known for their strength at realistic image synthesis. The objective function for the generative network is an implicit function of a learned discriminator network, estimated in parallel with the generator, which aims to tell apart real data from synthesized. Ideally, the discriminator learns to capture distinguishing features of real data, which the generator learns to imitate, and the process iterates until real data and synthesized data are indistinguishable.

In practice, GANs are well known for being quite challenging to train effectively. The relative model capacities of the generator and discriminator must be carefully balanced in order for the generator to effectively learn. Compounding the problem is the lack of an unambiguous and computable convergence criterion. Nevertheless, particularly when trained on image collections from relatively narrow domains such as bedroom scenes (Yu et al., 2015) and human faces (Liu et al., 2015), GANs have been shown to produce very compelling results.

For diverse image collections comprising a wider variety of the visual world, the results have generally been less impressive. For example, samples from models trained on ImageNet (Russakovsky et al., 2014) roughly match the local and global statistics of natural images but yield few recognizable objects. Recent work (Salimans et al., 2016) has sought to address this problem by training the discriminator in a semi-supervised fashion, granting the discriminator's internal representations knowledge of the class structure of (some fraction of) the training data it is presented. This technique markedly increases sample quality, but is unsatisfying from the perspective of GANs as a tool for unsupervised learning.

We propose to augment the generator's training criterion with a second training objective which guides the generator towards samples more like those in the training set by explicitly modeling the data density in addition to the adversarial discriminator. Rather than deploy a second computationally expensive convolutional network for this task, the additional objective is computed in the space of features learned by the discriminator. In that space, we train a denoising auto-encoder, a family of models which is known to estimate the energy gradient of the data on which it is trained. We
evaluate the denoising auto-encoder on samples drawn from the generator, and use the "denoised" features as targets - nearby feature configurations which are more likely than those of the generated sample, according to the distribution estimated by the denoiser."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We show that this yields generators which consistently produce recognizable objects on the CIFAR-10 dataset without the use of label information as in Salimans et al. (2016). The criterion appears to improve stability and possesses a degree of natural robustness to the well known "collapse" pathology. We further investigate the criterion's performance on two larger and more diverse collections of images, and validate our qualitative observations quantitatively with the Inception score proposed in Salimans et al. (2016).

The generative adversarial networks paradigm (Goodfellow et al., 2014a) estimates generative samplers by means of a training procedure which pits a generator G against a discriminator D. D is trained to tell apart training examples from samples produced by G, while G is trained to increase the probability of its samples being incorrectly classified as data. In the original formulation, the training procedure defines a continuous minimax game

$$\arg\min_{G}\arg\max_{D}\; \mathbb{E}_{x\sim\mathcal{D}}\big[\log D(x)\big] + \mathbb{E}_{z\sim p(z)}\big[\log\big(1 - D(G(z))\big)\big] \tag{1}$$

where D is a data distribution on R^n, D is a function that maps R^n to the unit interval, and G is a function that maps a noise vector z in R^m, drawn from a simple distribution p(z), to the ambient space of the training data, R^n. The idealized algorithm can be shown to converge and to minimize the Jensen-Shannon divergence between the data generating distribution and the distribution parameterized by G.

Goodfellow et al. (2014a) found that in practice, minimizing (1) with respect to the parameters of G proved difficult, and elected instead to optimize an alternate objective,

$$\arg\max_{G}\; \mathbb{E}_{z\sim p(z)}\big[\log D(G(z))\big] \tag{2}$$

at the same time as D is optimized as above. log D(G(z)) yields more favourably scaled per-sample gradients for G when D confidently identifies a sample as counterfeit, avoiding the vanishing gradients arising in that case with the log(1 - D(G(z))) objective.

Subsequent authors have investigated applications and extensions of GANs; for a review of this body of literature, see Warde-Farley & Goodfellow (2016). Of particular note for our purposes is Radford et al. (2015), who provide a set of general guidelines for the successful training of generative adversarial networks, and Salimans et al. (2016), who build upon these techniques with a number of useful heuristics and explore a variant in which the discriminator D is trained to correctly classify labeled training data, resulting in gradients with respect to the discriminator evidently containing a great deal of information relevant to generating "object-like" samples."}, {"section_index": "3", "section_name": "2.2 CHALLENGES AND LIMITATIONS OF GANS", "section_text": "While Goodfellow et al. (2014a) provides a theoretical basis for the GAN criterion, the theory relies on certain assumptions that are not satisfied in practice. Proofs demonstrate convergence of the GAN criterion in the unconstrained space of arbitrary functions; in practice, finitely parameterized families of functions such as neural networks are employed. As a consequence, the "inner loop" of the idealized algorithm - maximizing (1) with respect to (the parameters of) D - is infeasible to perform exactly, and in practice only one or a few gradient steps stand in for this maximization. This results in a de facto criterion for G which minimizes a lower bound on the correct objective (Goodfellow, 2014).
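A minimal sketch of the alternating updates just described, with a single discriminator gradient step standing in for the inner maximization of (1) and the generator trained on the non-saturating objective (2); D is assumed to output probabilities, and all names are illustrative:

```python
import torch

def gan_losses(D, G, x_real, z):
    # Discriminator ascends Eq. (1); the generator descends -log D(G(z)), Eq. (2)
    d_real = D(x_real)                       # D(x) in (0, 1)
    x_fake = G(z)
    d_fake = D(x_fake.detach())              # no generator gradient in the D step
    d_loss = -(torch.log(d_real) + torch.log(1.0 - d_fake)).mean()
    g_loss = -torch.log(D(x_fake)).mean()    # avoids vanishing gradients when
    return d_loss, g_loss                    # D confidently rejects the sample
```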
A commonly observed failure mode is that of full or partial collapse, where G maps a large fraction of probable regions under p(z) to only a few, low-volume regions of R^n; in the case of images, this manifests as the appearance of many near-duplicate images in independent draws from G, as well as a lower diversity of samples and modes than what is observed in the dataset. As G and D are typically trained via mini-batch stochastic gradient descent, several authors have proposed heuristics that penalize such duplication within each mini-batch (Salimans et al., 2016; Zhao et al., 2016).

Quantitative evaluation of GAN samplers is difficult, as they lack a closed-form likelihood. In this work we adopt the Inception score proposed by Salimans et al. (2016), which uses a reference Inception network to compute

$$I(\{x_i\}) = \exp\big(\mathbb{E}_x\big[D_{KL}\big(p(y|x)\,\|\,p(y)\big)\big]\big) \tag{3}$$

where p(y|x) is provided by the output of the Inception network and p(y) = ∫ p(x) p(y|x) dx ≈ (1/N) Σ_i p(y|x_i). Note that this score can be made larger by a low-entropy per-sample posterior (i.e. the Inception network classifies a given sample with greater certainty) as well as a higher-entropy aggregate posterior (i.e. the Inception network identifies a wide variety of classes among the samples presented to it). Salimans et al. (2016) found this score correlated well with human evaluations of samplers trained on CIFAR-10; we therefore employ the Inception score here as a quantitative measure of visual fidelity of the samples, following the previous work's protocol of evaluating the average Inception score over 10 independent groups of 5,000 samples each. Error estimates correspond to standard deviations, in keeping with previously reported results.
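A minimal sketch of this protocol, assuming p_yx is the (N, C) matrix of Inception posteriors for N samples (the group-splitting and names are ours, following the description above):

```python
import numpy as np

def inception_score(p_yx, n_groups=10):
    # Average Eq. (3) over independent groups of samples, reporting mean and std
    scores = []
    for chunk in np.array_split(p_yx, n_groups):
        p_y = chunk.mean(axis=0, keepdims=True)   # marginal p(y) over the group
        kl = (chunk * (np.log(chunk + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
        scores.append(np.exp(kl.mean()))          # per-group Inception score
    return np.mean(scores), np.std(scores)
```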
"}, {"section_index": "4", "section_name": "3 IMPROVING UNSUPERVISED GAN TRAINING ON DIVERSE DATASETS", "section_text": "In this work, we focus on the apparent difficulty of training GANs to produce "object-like" samples when trained on diverse collections of natural images. While Salimans et al. (2016) make progress on this problem by employing labeled data and training the discriminator, here we aim to make progress on the unsupervised case. Nevertheless, our methods would be readily applicable to supervised, semi-supervised or (with slight modifications) conditional settings.

We begin from the slightly subtle observation that in realistic manifestations of the GAN training procedure, the discriminator's (negative) gradient with respect to a sample points in a direction of (infinitesimal) local improvement with respect to the discriminator's estimate of the sample being data; it does not necessarily point in the direction of a draw from the data distribution. Indeed, the literature is replete with instances of gradient descent with respect to the input of a classification model, particularly wide-domain natural image classifiers, producing ghostly approximations to a particular class exemplar (Le et al., 2012; Erhan et al., 2009; Yosinski et al., 2015) when this procedure is carried out without additional guidance, to say nothing of the problems posed by adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2014b) and fooling examples (Nguyen et al., 2015).

While the gradient of the loss function defined by the discriminator may be a source of information mostly relevant to very local improvements, the discriminator itself is a potentially valuable source of compact descriptors of the training data. Many authors have noted the remarkable versatility of high-level features learned by convolutional networks (Donahue et al., 2014; Yosinski et al., 2014) and the degree to which high-level semantics can be reconstructed from even the deepest layers of a network (Dosovitskiy & Brox, 2016). Although non-stationary, the distribution of the high-level activations of the discriminator when evaluated on data is ripe for exploitation as an additional source of knowledge about salient aspects of the data distribution.

We propose in this work to track this distribution with a denoising auto-encoder r(.) trained on the discriminator's hidden states when evaluated on training data. Alain & Bengio (2014) showed that a denoising auto-encoder trained on data from a distribution q(h) estimates via r(h) - h the score of q. If such a denoiser is trained on the transformed training data h = φ(x) with x ~ D, then r(φ(x')) - φ(x') with x' = G(z) indicates in which direction x' should be changed in order to make h' = φ(x') more like those features seen with the data. Minimizing ||r(φ(x')) - φ(x')||² with respect to x' would thus push x' towards higher probability configurations according to the data distribution in the feature space φ(x). We thus evaluate the discriminator features φ(x), and the denoising auto-encoder, on samples from the generator, and treat the denoiser's output reconstruction as a fixed target for the generator. We refer to this procedure as denoising feature matching, and employ it as a learning signal for the generator in addition to the traditional GAN generator objective.

Formally, let G be the generator parameterized by θ_G, and D = d ∘ φ be our discriminator, composing a feature extractor φ(.) : R^n -> R^k and a classifier d(.) : R^k -> [0, 1]. Let C(.) : R^k -> R^k be a corruption function to be applied at the input of the denoising auto-encoder when it is trained to denoise. The discriminator D, comprising the parameters of both d and φ, is trained as in Goodfellow et al. (2014a), while the generator is trained according to

$$\arg\min_{\theta_G}\; \mathbb{E}_{z\sim p(z)}\Big[\lambda_{denoise}\,\big\|\phi(G(z)) - r\big(\phi(G(z))\big)\big\|_2^2 \;-\; \lambda_{adv}\,\log D(G(z))\Big] \tag{4}$$

where r(φ(G(z))) is treated as constant with respect to gradient computations. Simultaneously, the denoiser r(.) is trained according to the objective

$$\arg\min_{\theta_r}\; \mathbb{E}_{x\sim\mathcal{D}}\Big[\big\|\phi(x) - r\big(C(\phi(x))\big)\big\|_2^2\Big] \tag{5}$$
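A minimal sketch of objectives (4) and (5), assuming φ returns the discriminator features, d_head is the classifier head (so that D = d ∘ φ as above), and C is isotropic Gaussian corruption; the function names are ours:

```python
import torch

def dfm_losses(phi, d_head, denoiser, G, x_real, z,
               lam_denoise, lam_adv=1.0, sigma=1.0):
    # Denoiser loss, Eq. (5): reconstruct clean data features from corrupted ones
    h_real = phi(x_real).detach()
    recon = denoiser(h_real + sigma * torch.randn_like(h_real))
    r_loss = ((recon - h_real) ** 2).sum(dim=1).mean()

    # Generator loss, Eq. (4): regress sample features onto the fixed denoised
    # target, plus the usual -log D(G(z)) adversarial term
    h_fake = phi(G(z))
    target = denoiser(h_fake).detach()        # r(phi(G(z))) treated as constant
    g_loss = (lam_denoise * ((h_fake - target) ** 2).sum(dim=1).mean()
              - lam_adv * torch.log(d_head(h_fake)).mean())
    return g_loss, r_loss
```

The two .detach() calls encode the two constancy assumptions in the text: the denoiser never backpropagates into φ, and the generator treats the denoised target as fixed.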
The theory surrounding denoising auto-encoders applies when estimating a denoising function from a data distribution p(x). Here, we propose to estimate the denoising auto-encoder in the space of discriminator features, giving rise to a distribution q(φ(x)). A natural question is what effect this has on the gradient being backpropagated. This is difficult to analyze in general, as for most choices of φ the mapping will not be invertible, though it is instructive to examine the invertible case. By the inverse function theorem, the inverse mapping φ⁻¹ then exists and is differentiable; let J denote its Jacobian. Applying the chain rule and re-arranging terms, taking advantage of the invertibility of J, we arrive at a straightforward relationship between the score of q and the score of p:

$$\frac{\partial \log q(\phi(x))}{\partial \phi(x)} = \frac{\partial \log\big[p(x)\,|J|\big]}{\partial \phi(x)} \tag{6}$$

$$= \frac{\partial x}{\partial \phi(x)}\,\frac{\partial \log p(x)}{\partial x} + \frac{\partial \log |J|}{\partial \phi(x)} \tag{7}$$

$$= J\left(\frac{\partial \log p(x)}{\partial x} + \frac{\partial \log |J|}{\partial x}\right) \tag{8}$$

with, component-wise,

$$\frac{\partial \log |J|}{\partial x_k} = \mathrm{Tr}\left(J^{-1}\,\frac{\partial J}{\partial x_k}\right) \tag{9}$$

where ∂J/∂x_k is a matrix of the scalar derivatives of the elements of J with respect to x_k. Thus, we see that the gradient backpropagated to the generator in an ideal setting is the gradient of the data distribution p(x) along with an additive term which accounts for the changes in the rate of volume expansion/contraction of φ locally around x. In practice, φ is not invertible, but the added benefit of the denoiser-targeted gradient appears to reduce underfitting to the modes of p in the generator, irrespective of any distortions φ may introduce.

Denoising feature matching was originally inspired by feature matching, introduced by Salimans et al. (2016) as an alternative training criterion for GAN generators, namely (in our notation)

$$\arg\min_{\theta_G}\; \Big\|\mathbb{E}_{x\sim\mathcal{D}}\big[\phi(x)\big] - \mathbb{E}_{z\sim p(z)}\big[\phi(G(z))\big]\Big\|_2^2 \tag{10}$$

Feature matching is equivalent to linear maximum mean discrepancy (Gretton et al., 2006), employing linear first-moment matching in the space of discriminator features φ(.) rather than the more familiar kernelized formulation. When performed on features in the penultimate layer, Salimans et al. (2016) found that the feature matching criterion was useful for the purpose of improving results on semi-supervised classification, using classification of samples from the generator as a sophisticated form of data augmentation. Feature matching was, however, less successful at producing samples with high visual fidelity. This is somewhat unsurprising given that the criterion is insensitive to higher-order statistics of the respective feature distributions. Indeed, a degenerate G which deterministically reproduces a single sample m such that φ(m) = E_{x~D}[φ(x)] trivially minimizes (10); in practice the joint training dynamics of D and G do not appear to yield such degenerate solutions.

Rather than aiming to merely reduce linear separability between data and samples in the feature space defined by φ(.), denoising feature matching selects a more probable (according to the feature distribution implied by the data, as captured by the denoiser) feature space target for each sample produced by G and regresses G towards it. While an early loss of entropy in G could result in the generator locking on to one or a few attractors in the denoiser's energy landscape, we observe that this does not happen when used in conjunction with the traditional GAN objective, and in fact that the combination of the two objectives is notably robust to the collapses often observed in GAN training, even without taking additional measures to prevent them.

This work also draws inspiration from Alain & Bengio (2014), which showed that a suitably trained¹ denoiser learns an operator which locally maps a sample towards regions of high probability under the data distribution. They further showed that a suitably trained reconstruction function r(.) behaves such that

$$r(x) - x \;\propto\; \frac{\partial \log p(x)}{\partial x} \tag{11}$$

That is, r(x) - x estimates the score of the data generating distribution, up to a multiplicative constant. Our use of denoising auto-encoders necessarily departs from idealized conditions in that the denoiser is estimated online from an ever-changing distribution of features.

¹ In the limit of infinite training data, with isotropic Gaussian noise of some standard deviation σ.

Several approaches to GAN-like models have cast the problem in terms of learning an energy function. Kim & Bengio (2016) extends GANs by modeling the data distribution simultaneously with an energy function parameterized by a deep neural network (playing the role of the discriminator), carrying out learning with a learning rule resembling that of the Boltzmann machine (Ackley et al., 1985), where the "negative phase" gradient is estimated from samples from the generator. The energy-based GAN formulation of Zhao et al. (2016) resembles our work in their use of an auto-encoder which is trained to faithfully reconstruct (in our case, a corrupted function of) the training data. The energy-based GAN replaces the discriminator with an auto-encoder which is trained to assign low energy (L2 reconstruction error) to training data and higher energy to samples from G. To discourage generator collapses, a "pull-away term" penalizes the normalized dot product in a feature space defined by the auto-encoder's internal representation. In this work, we preserve the discriminator, trained in the usual discriminative fashion, and in fact preserve the traditional generator loss, instead augmenting it with a source of complementary information provided by targets obtained from the denoiser. The energy-based GAN can be viewed as training the generator to seek fixed points of the autoencoding function (i.e. by backpropagating through the decoder and encoder in order to decrease reconstruction error), whereas we treat the output of r(.) as constant with respect to the optimization, as in Lee et al. (2015). That is to say, rather than using backpropagation to steer the dynamics of the autoencoder, we instead employ our denoising autoencoder to augment the gradient information obtained by ordinary backpropagation.

Closest to our own approach, concurrent work on model-based super-resolution by Sønderby et al. (2016) trains a denoising auto-encoder on high-resolution ground truth and evaluates it on synthesized super-resolution images, using the difference between the original synthesized image and the denoiser's output as an additional training signal for refining the output of the super-resolution network. Both Sønderby et al. (2016) and our own work are motivated by the results of Alain & Bengio (2014) discussed above. Aside from addressing a different application area, our denoiser is learned on-the-fly from a high-level feature representation which is itself learned.
We evaluate denoising feature matching on learning synthesis models from three datasets of increasing diversity and size: CIFAR-10, STL-10, and ImageNet. Although several authors have described GAN-based image synthesis models operating at 128 x 128 (Salimans et al., 2016; Zhao et al., 2016) and 256 x 256 (Zhao et al., 2016) resolution, we carry out our investigations at relatively low resolutions, both for computational ease and because we believe that the problem of unconditional modeling of diverse image collections is not well solved even at low resolutions; making progress in this regime is likely to yield insights that apply to the higher-resolution case.

In all experiments, we employ isotropic Gaussian corruption noise with σ = 1. Although we experimented with annealing σ towards 0 (as also performed in Sønderby et al. (2016)), an annealing schedule which consistently outperformed fixed noise remained elusive. We experimented with convolutional denoisers, but our best results to date were obtained with deep, fully-connected denoisers using the ReLU nonlinearity, applied to the penultimate layer of the discriminator. The number of hidden units was fixed to the same value in all denoiser layers, and the procedure is apparently robust to this hyperparameter choice, as long as it is greater than or equal to the input dimensionality.

Our generator and discriminator architectures follow the methods outlined in Radford et al. (2015). Accordingly, batch normalization (Ioffe & Szegedy, 2015) was used in the generator and discriminator in the same manner as Radford et al. (2015), and in all layers of the denoiser except the output layer. In particular, as in Radford et al. (2015), we separately batch normalize data and generator samples for the discriminator and denoiser with respect to each source's statistics. We calculate updates with respect to all losses with the parameters of all three networks fixed, and update all parameters simultaneously.

All networks were trained with the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10⁻⁴ and β1 = 0.5. The Adam optimizer is scale invariant, and so it suffices to e.g. tune λ_denoise and fix λ_adv to 1. In our experiments, we set λ_denoise to 0.03/n_h, where n_h is the number of discriminator hidden units fed as input to the denoiser; this division decouples the scale of the first term of (4) from the dimensionality of the representation used, reducing the need to adjust this hyperparameter simply because we altered the architecture of the discriminator.
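As a concrete reading of this configuration, the following sketch builds a fully-connected denoiser and the corresponding optimizer. The width and depth shown here are only the ImageNet example quoted below (10 hidden layers of 2,048 units); the per-task values are not specified in this excerpt, and the feature dimensionality n_h is an assumed placeholder.

```python
import torch
import torch.nn as nn

def make_denoiser(n_h, width=2048, depth=10):
    # Fully-connected denoiser over discriminator features, with batch norm in
    # every layer except the output, as described above
    layers, d_in = [], n_h
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.BatchNorm1d(width), nn.ReLU()]
        d_in = width
    layers += [nn.Linear(d_in, n_h)]
    return nn.Sequential(*layers)

n_h = 2048                                 # discriminator feature width (example)
denoiser = make_denoiser(n_h)
lam_denoise = 0.03 / n_h                   # decouples the weight from feature size
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4, betas=(0.5, 0.999))
```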
STL-10 (Coates et al., 2011) is a dataset consisting of a small labeled set and a larger (100,000 image) unlabeled set of 96 × 96 RGB images. The unlabeled set is a subset of ImageNet that is more diverse than CIFAR-10 (or the labeled set of STL-10), but less diverse than full ImageNet. We downsample by a factor of 2 on each dimension and train our networks at 48 × 48. Inception scores for our model and a baseline, consisting of the same architecture trained without denoising feature matching (both trained for 50 epochs), are shown in Table 2. Samples are displayed in Figure 2.

Method          Inception score
Real data       26.08 ± .26
Ours             8.51 ± .13
GAN baseline     7.84 ± .07

Table 2: Inception scores for models of the unlabeled set of STL-10.

Figure 2: Samples from a model trained with denoising feature matching on the unlabeled portion of the STL-10 dataset.

5.3 IMAGENET

The ImageNet database (Russakovsky et al., 2014) is a large-scale database of natural images. We train on the designated training set of the most widely used release, the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC2012), consisting of a highly unbalanced split among 1,000 object classes. We preprocess the dataset as rescaled central crops following the procedure of Krizhevsky et al. (2012), except at 32 × 32 resolution to facilitate comparison with Radford et al. (2015).

ImageNet poses a particular challenge for unsupervised GANs due to its high level of diversity and class skew. With a generator and discriminator architecture identical to that used for the same dataset in Radford et al. (2015), we achieve a higher Inception score using denoising feature matching, using a denoiser with 10 hidden layers of 2,048 rectified linear units each. Both fall far short of the score assigned to real data at this resolution; there is still plenty of room for improvement. Samples are displayed in Figure 3.

Method            Inception score
Real data         25.78 ± .47
Radford et al.*    8.83 ± .14
Ours               9.18 ± .13

Table 3: Inception scores for models of ILSVRC 2012 at 32 × 32 resolution. * computed from samples drawn using author-provided model parameters and implementation.

Figure 3: Samples from our model of ILSVRC2012 at 32 × 32 resolution.

We have shown that training a denoising model on high-level discriminator activations in a GAN, and using the denoiser to propose high-level feature targets for the generator, can usefully improve GAN image models. Higher Inception scores, as well as visual inspection, suggest that the procedure captures class-specific features of the training data in a manner superior to the original adversarial objective alone. That being said, we do not believe we are yet making optimal use of the paradigm. The non-stationarity of the feature distribution on which the denoiser is trained could be limiting the ability of the denoiser to obtain a good fit, and the information backpropagated to the generator is always slightly stale. Steps to reduce this non-stationarity may be fruitful; we experimented briefly with historical averaging as explored in Salimans et al. (2016) but did not observe a clear benefit thus far. Structured denoisers, including denoisers that learn an energy function for multiple hidden layers at once, could conceivably aid in obtaining a better fit. Learning a partially stochastic transition operator rather than a deterministic denoiser could conceivably capture interesting multimodalities that are "blurred" by a unimodal denoising function.

Our method is orthogonal and could conceivably be used in combination with several other GAN extensions.
For example, methods incorporating an encoder component (Donahue et al., 2016; Dumoulin et al., 2016), various existing conditional architectures (Mirza & Osindero, 2014; Denton et al., 2015; Reed et al., 2016), or the semi-supervised variant employed in Salimans et al. (2016), could all be trained with an additional denoising feature matching objective.

We have proposed a useful heuristic, but a better theoretical grounding regarding how GANs are trained in practice is a necessary direction for future work, including grounded criteria for assessing mode coverage and mass misassignment, and principled criteria for assessing convergence or performing early stopping.

ACKNOWLEDGMENTS

We thank Ian Goodfellow, Laurent Dinh, Yaroslav Ganin and Kyle Kastner for helpful discussions. We thank Vincent Dumoulin and Ishmael Belghazi for making available code and model parameters used in comparison to ALI, as well as Alec Radford for making available the code and model parameters for his ImageNet model. We would like to thank Antonia Creswell and Hiroyuki Yamazaki for pointing out an error in the initial version of this manuscript, and anonymous reviewers for valuable feedback. We thank the University of Montreal and Compute Canada for the computational resources used for this investigation, as well as the authors of Theano (Al-Rfou et al., 2016), Blocks and Fuel (van Merrienboer et al., 2015). We thank CIFAR, NSERC, Google, Samsung and Canada Research Chairs for funding.

REFERENCES

Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486-1494, 2015.

David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. Cognitive Science, 9(1):147-169, 1985.

Guillaume Alain and Yoshua Bengio. What regularized auto-encoders learn from the data-generating distribution. Journal of Machine Learning Research, 15(1):3563-3593, 2014.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, pp. 647-655, 2014.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University of Montreal, 1341, 2009.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014a. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf

Ian J Goodfellow. On distinguishability criteria for estimating generative models. arXiv preprint arXiv:1412.6515, 2014.

Arthur Gretton, Karsten M Borgwardt, Malte Rasch, Bernhard Scholkopf, and Alex J Smola. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems, pp. 513-520, 2006.

Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Jeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Casper Kaae Sonderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszar. Amortised MAP inference for image super-resolution. arXiv preprint arXiv:1610.04490, 2016.

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.

Bart van Merrienboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.

Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
EFFICIENT SOFTMAX APPROXIMATION FOR GPUs

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, Herve Jegou

{egrave,ajoulin,moustaphacisse,grangier,rvj}@fb.com

ABSTRACT

We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computational complexity. Our approach further reduces the computational cost by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax. The code of our method is available at https://github.com/facebookresearch/adaptive-softmax

1 INTRODUCTION

This paper considers strategies to learn parametric models for language modeling with very large vocabularies. This problem is key to natural language processing, with applications in machine translation (Schwenk et al., 2012; Sutskever et al., 2014; Vaswani et al., 2013) or automatic speech recognition (Graves et al., 2013; Hinton et al., 2012). In particular, Neural Network Language Models (NNLMs) have received a renewed interest in recent years, by achieving state of the art performance on standard benchmarks (Jozefowicz et al., 2016; Mikolov et al., 2010). These approaches are more costly but generalize better than traditional non-parametric models (Bahl et al., 1983; Kneser & Ney, 1995).

Statistical language models assign a probability to words given their history (Bahl et al., 1983). They are evaluated by objective criteria such as perplexity (ppl), which directly measures the ability of the system to determine proper probabilities for all the words. This potentially makes parametric models prohibitively slow to train on corpora with very large vocabulary. For instance, the vocabulary of the One Billion Word benchmark (Chelba et al., 2013) contains around 800K words. In standard NNLMs, such as feedforward networks (Bengio et al., 2003a) or recurrent networks (Mikolov et al., 2010), computing this probability over the whole vocabulary is the bottleneck. Many solutions have been proposed to reduce the complexity of this expensive step (Bengio et al., 2003b; Goodman, 2001a; Gutmann & Hyvarinen, 2010). We distinguish (i) the methods that consider the original distribution and aim at providing approximations of the probabilities, or of a subset of them (Bengio et al., 2003b; Ji et al., 2015), from (ii) the approaches that compute exact probabilities for an approximate model yielding a lower computational cost, such as the popular hierarchical softmax (Goodman, 2001a; Mnih & Hinton, 2009; Morin & Bengio, 2005).

Our approach, called adaptive softmax, belongs to the second category. More specifically, it is inspired by the hierarchical softmax and its subsequent variants. In contrast to previous works and motivated by the trend that GPUs are comparatively more and more performant than CPUs, our design is oriented towards efficient processing on GPUs.
In this context, our paper makes the following points:

- We define a strategy to produce an approximate hierarchical model. It departs from previous ones in that it explicitly takes into account the complexity of matrix-matrix multiplications on modern architectures, which is not trivially linear in the dimensions of the matrices.

- We conduct an empirical complexity analysis of this model on recent GPUs. This leads us to define a realistic complexity model that is incorporated in the proposed optimization.

- Our approach provides a significant acceleration factor compared to the regular softmax, i.e., 2× to 10× speed-ups. Equivalently we improve the accuracy under computational constraints. Importantly, on the largest corpus, this higher efficiency empirically comes at no cost in accuracy for a given amount of training data, in contrast to concurrent approaches improving the efficiency.

This paper is organized as follows. Section 2 briefly reviews the related work and Section 3 provides some background on the language modeling task that we consider. Section 4 describes our proposal, which is subsequently evaluated in Section 5 on typical benchmarks of the language modeling literature, including Text8, Europarl and One Billion Word datasets.
Loss function approximation. The Hierarchical Softmax (HSM) is an approximation of the softmax function introduced by Goodman (2001a). This approach is generally used with a two-level tree (Goodman, 2001a; Mikolov et al., 2011c) but has also been extended to deeper hierarchies (Morin & Bengio, 2005; Mnih & Hinton, 2009). In general, the hierarchy structure is built on word similarities (Brown et al., 1992; Le et al., 2011; Mikolov et al., 2013) or frequency binning (Mikolov et al., 2011c). In particular, Mikolov et al. (2013) proposes an optimal hierarchy by constructing a Huffman coding based on frequency. However this coding scheme does not take into account the theoretical complexity reduction offered by matrix-matrix multiplication and distributed computation, in particular with modern GPUs.

Similar to our work, Zweig & Makarychev (2013) construct their hierarchy in order to explicitly reduce the computational complexity. They also solve the assignment problem with dynamic programming. However, they only consider hierarchies where words are kept in the leaves of the tree, leading to a significant drop of performance (reported to be around 5-10%), forcing them to also optimize for word similarity. In our case, allowing classes to be stored in the internal nodes of the tree leads to almost no drop of performance. Also, they assume a linear cost for the vector-matrix operation, which significantly limits the use of their approach on distributed systems such as GPUs.

The idea of keeping a short-list of the most frequent words has been explored before (Le et al., 2011; Schwenk, 2007). In particular, Le et al. (2011) combines a short-list with a hierarchical softmax based on word representation. In contrast, the word hierarchy that we introduce in Section 4 explicitly aims at reducing the complexity.

Our work also shares similarities with the d-softmax introduced by Chen et al. (2015). They assign capacity to words according to their frequency to speed up the training. Less frequent words have smaller classifiers than frequent ones. Unlike our method, their formulation requires accessing the whole vocabulary to evaluate the probability of a word.

Sampling based approximation. Sampling based approaches have been successfully applied to approximate the softmax function over large dictionaries in different domains, such as language modeling (Jozefowicz et al., 2016), machine translation (Jean et al., 2015) and computer vision (Joulin et al., 2015). In particular, importance sampling (Bengio & Senecal, 2008; Bengio et al., 2003b) selects a subset of negative targets to approximate the softmax normalization. Different schemes have been proposed for sampling, such as the unigram and bigram distribution (Bengio et al., 2003b) or, more recently, a power-raised distribution of the unigram (Ji et al., 2015; Mikolov et al., 2013). While this approach often leads to significant speed-up at train time, it still requires to evaluate the full softmax at test time.

Self-normalized approaches. Self-normalized approaches aim at learning naturally normalized classifiers, to avoid computing the softmax normalization. Popular methods are Noise Contrastive Estimation (Gutmann & Hyvarinen, 2010; Mnih & Teh, 2012; Vaswani et al., 2013) or a penalization on the normalization function (Andreas & Klein, 2014; Devlin et al., 2014). Noise Contrastive Estimation (Gutmann & Hyvarinen, 2010) replaces the softmax by a binary classifier distinguishing the original distribution from a noisy one. While the original formulation still requires to compute the softmax normalization, Mnih & Teh (2012) shows that good performance can be achieved even without it.

Finally, Vincent et al. (2015) have also proposed an efficient way to train models with high dimensional output space. Their approach is exact and leads to a promising speed-up, but it cannot be directly applied to the softmax function, limiting its potential application to language modeling.

The goal of language modeling is to learn a probability distribution over a sequence of words from a given dictionary V. The joint distribution is defined as a product of conditional distributions of tokens given their past (Bahl et al., 1983). More precisely, the probability of a sequence of T words w1, . . . , wT ∈ V^T is given as

P(w1, . . . , wT) = ∏_{t=1}^{T} P(wt | wt−1, . . . , w1).   (1)

This problem is traditionally addressed with non-parametric models based on counting statistics (Goodman, 2001b). In particular, smoothed N-gram models (Bahl et al., 1983; Katz, 1987; Kneser & Ney, 1995) achieve good performance in practice (Mikolov et al., 2011a), especially when they are associated with cache models (Kuhn & De Mori, 1990). More recently, parametric models based on neural networks have gained popularity for language modeling (Bengio et al., 2003a; Jozefowicz et al., 2016; Mikolov et al., 2010). They are mostly either feedforward networks (Bengio et al., 2003a) or recurrent networks (Mikolov et al., 2010).

Feedforward network. In a standard feedforward network for language modeling, we fix a window of length N and predict the next words according to the words appearing in this window. In the simplest case, this probability is represented by a 2-layer neural network acting on an input xt ∈ V^N, defined as the concatenation of the one-hot representation of the N previous words, wt−N+1, . . . , wt. The state ht of the hidden layer and subsequently the vector of scores yt associated with the next token wt+1 are computed as

ht = σ(APxt),   (2)
yt = f(Bht),    (3)

where σ is a non linearity, e.g., the pointwise sigmoid function σ(z) = 1/(1 + exp(−z)), and f is the softmax function discussed in the next section. This model is parameterized by the weight matrices P, A and B and is routinely learned with an optimization scheme such as stochastic gradient descent or Adagrad (Duchi et al., 2011).
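As a concrete illustration of Equations (2) and (3), the following sketch (ours; the dimensions and variable names are assumptions) scores the next word from the concatenated one-hot window:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def feedforward_lm_step(x, P, A, B):
    # x: concatenation of the one-hot vectors of the N previous words, shape (N*|V|,)
    # P embeds the window, A maps embeddings to the hidden state, B scores the vocabulary
    h = sigmoid(A @ (P @ x))  # Equation (2)
    return softmax(B @ h)     # Equation (3): distribution over the next word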
Recurrent network. A Recurrent network (Elman, 1990) extends a feedforward network in that the current state of the hidden layer also depends on its previous state. The hidden state ht is updated according to the equation ht = σ(Axt + Rht−1), where R is a weight matrix and xt is the one-hot representation of the current word wt. Computing the exact gradient for this model is challenging but it is possible to compute an efficient and stable approximation of it, using a truncated back-propagation through time (Werbos, 1990; Williams & Peng, 1990) and norm clipping (Mikolov et al., 2010).

Since the model introduced by Elman (1990), many extensions have been proposed, such as Long Short Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997), Gated recurrent units (Chung et al., 2014) or structurally constrained networks (Mikolov et al., 2014). These models have been successfully used in the context of language modeling (Jozefowicz et al., 2016; Mikolov et al., 2010; Mikolov & Zweig, 2012). In this work, we focus on the standard word level LSTM architecture since it has obtained state of the art performance on the challenging One Billion Word Benchmark (Jozefowicz et al., 2016).

Class-based hierarchical softmax. In neural language modeling, predicting the probability of the next word requires to compute scores for every word in the vocabulary and to normalize them to form a probability distribution. This is typically achieved by applying a softmax function to the unnormalized score zw associated with each word w, where the softmax function is defined as

f(zw) = exp(zw) / ∑_{w′∈V} exp(zw′).   (4)

For a vocabulary comprising k = |V| words, this function requires O(k) operations once the scores are computed. In the case of neural networks, the overall complexity is O(dk), where d is the size of the last hidden layer. When the vocabulary is large, this step is computationally expensive and often dominates the computation of the whole model (Jozefowicz et al., 2016; Mikolov et al., 2014), as discussed in introduction and related work. A simple approach (Goodman, 2001a) to reduce this computational cost is to assign each word w of the vocabulary to a unique class C(w) and to factorize the probability distribution over words as

p(wt | ht) = p1(C(wt) | ht) × p2(wt | C(wt), ht),

where p1 and p2 are obtained using the softmax function (Eq. 4). If each class contains √k words, the computational cost is reduced from O(dk) to O(d√k).
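A minimal sketch of this factorization (ours; the cluster assignment and parameter shapes are assumptions) makes the saving explicit: only the class scores and the scores of one class are normalized.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def class_based_prob(h, W_class, W_in_class, cls, idx):
    # P(w|h) = p1(C(w)|h) * p2(w|C(w), h) for the word at within-class index `idx`
    p1 = softmax(W_class @ h)[cls]           # O(d * n_classes)
    p2 = softmax(W_in_class[cls] @ h)[idx]   # O(d * sqrt(k)) when classes hold sqrt(k) words
    return p1 * p2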
In this section, we propose the adaptive softmax, a simple speedup technique for the computation of probability distributions over words. The adaptive softmax is inspired by the class-based hierarchical softmax, where the word classes are built to minimize the computational complexity. Our method is designed to be efficient for GPUs, which are commonly used to train neural networks. For the sake of clarity, we first present the intuition behind our method in the simple case where we simply split our dictionary in two distinct clusters, before analyzing a more general case.

4.1 GPU COMPUTATIONAL MODEL

The bottleneck of the model described in the previous section is the matrix multiplication between the matrix representing the hidden states (of size B × d, where B denotes the batch size), and the matrix of word representations, of size d × k. For a fixed size d of the hidden layer, we denote by g(k, B, d) the complexity of this multiplication, and simplify the notation wherever some parameters are fixed. Figure 1 reports empirical timings as a function of k for typical parameters of B and d for two GPU models, namely K40 and Maxwell. We observe that the complexity g(k) is constant for low values of k, until a certain inflection point k0 ≈ 50, and then becomes affine for values k > k0. This suggests a cost function of the form

g(k) = max(c + λk0, c + λk) = cm + max(0, λ(k − k0)).   (5)

Empirically, cm = 0.40 ms on a K40 and 0.22 ms on a Maxwell. We observe the same behavior when measuring the timings as a function of the other parameters, i.e., it is inefficient to matrix-multiply when one of the dimensions is small. This observation suggests that hierarchical organizations of words with a low number of children per node, such as binary Huffman codes, are highly suboptimal.

[Figure 1: log-log plot of the multiplication time, from 100 µs to 1 s, against the number of word vectors k (from 2^2 to 2^15), with fitted slopes λ = 0.0035 for the K40 and λ = 0.002 for the Maxwell.]

Figure 1: GPU timings for multiplying two matrices in the dominant step of the RNN model. We consider matrices of size 2560 × 2048 and 2048 × k representing hidden states and word representations. We report the timings as a function of k (number of word representations) and we compute the averages (circles) over 1000 measures, and the minima and maxima for the K40. The standard deviation does not exceed 5% of each timing.

In natural languages, the distribution of the words notoriously follows a Zipf law (Zipf, 1949). Most of the probability mass is covered by a small fraction of the dictionary, e.g., 87% of the document is covered by only 20% of the vocabulary in the Penn TreeBank. Similar to the frequency binning hierarchical softmax (Mikolov et al., 2011c), this information can be exploited to reduce the computation cost.

A simple strategy to reduce the overall complexity is to partition the dictionary V into two clusters Vh and Vt, where Vh denotes the head of the distribution consisting of the most frequent words, and where Vt is the tail associated with a large number of rare words. The classifier frequently accesses the head, which motivates the fact that it should be computed efficiently. In contrast, the tail occurs less frequently and the corresponding computation can be slower. This suggests to define clusters with unbalanced cardinalities |Vh| ≪ |Vt| and probabilities P(Vh) > P(Vt), where P(A) = ∑_{w∈A} p(w) is the probability of a word to occur in the set A. For instance, one may define the head to contain only 20% of the vocabulary (covering 87% on Penn TreeBank). These two clusters can be organized in two different ways: either they are both leaves of a 2-level tree (Mikolov et al., 2011c), or the head cluster is kept as a short-list in the root node (Le et al., 2011). We now analyze what is the best structure and how to split the vocabulary by determining the corresponding complexities, assuming that the head consists of the most frequent words. The next subsection shows the optimality of this choice.

Given a vocabulary of k words, we are looking for the number kh = |Vh| of words from the head of the distribution to be assigned to the first cluster. These words will cover for ph of the distribution. The tail cluster will then contain the rest of the vocabulary, made of kt = k − kh words and covering for pt = 1 − ph of the overall distribution. We denote by g(k, d) the computational complexity of computing the softmax function over k words with d dimensional input features; Figure 1 shows an example of this function for a fixed d. The complexity of putting the head of the distribution in the root of the tree is g(kh + 1, d) + pt g(kt, d), while the complexity associated with putting both clusters in leaves is g(2, d) + ph g(kh, d) + pt g(kt, d).
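This comparison is easy to reproduce numerically. The sketch below (ours) plugs the Maxwell constants reported above into the cost model of Equation (5) and evaluates both configurations for an illustrative split; only cm and λ come from the measurements, every other number is an assumption.

def g(k, c_m=0.22, lam=0.002, k0=50):
    # empirical cost model of Equation (5), in milliseconds (Maxwell constants)
    return c_m + max(0.0, lam * (k - k0))

def cost_shortlist(k_h, k_t, p_t):
    return g(k_h + 1) + p_t * g(k_t)            # head kept as a short-list in the root

def cost_two_leaves(k_h, k_t, p_h, p_t):
    return g(2) + p_h * g(k_h) + p_t * g(k_t)   # both clusters as leaves of a 2-level tree

k_h, k_t, p_h = 1400, 8600, 0.87                # illustrative split of a 10k-word vocabulary
print(cost_shortlist(k_h, k_t, 1 - p_h))
print(cost_two_leaves(k_h, k_t, p_h, 1 - p_h))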
Depending on the distribution of a corpus, it is then simple to choose the best assignment of words into the two clusters. For example, on Penn TreeBank, with a hidden layer of size d = 128, the optimal configuration is to keep a short-list of 1400 classes in the root node, leading to an average cost of 0.33 ms per batch of size 512, while it takes 0.36 ms when both clusters are in the leaves. In comparison, the full softmax takes 0.80 ms for the same configuration, leading to a 2.4× speed-up.

Adapting the classifier capacity for each cluster. Each cluster is accessed independently of the others; they thus do not need to have the same capacity. Frequent words need high capacity to be predicted correctly. In contrast, rare words cannot be learned very well, since we only see them a few times. It would then be wasteful to associate them with high capacity. Like Chen et al. (2015), we exploit this observation to further reduce the computational cost of our classifier. Unlike Chen et al. (2015), we share the state of the hidden layer across clusters and simply reduce the input size of the classifiers by applying a projection matrix. Typically, the projection matrix for the tail cluster reduces the size from d to dt = d/4, reducing the complexity from g(kt, d) to g(dt, d) + g(kt, dt).

Compromising between efficiency and accuracy. We observe empirically that putting all the clusters in the leaves of the tree leads to a significant drop of performance (around 5-10% performance drop, Mikolov et al., 2011c; Zweig & Makarychev, 2013). The reason is that the probability of every word w belonging to a cluster c is multiplied by the probability of its class, i.e., it is equal to P(c | h) × P(w | c, h), while attaching a frequent word directly to the root associates it directly to the probability P(w | h), making its inference sharper. For this reason, unless there is a significant difference in computational complexity, we favor using a short-list over the standard 2-level hierarchical softmax.

Figure 2: Our hierarchical model is organized as (i) a first level that includes both the most frequent words and vectors representing clusters, and (ii) clusters on the second level that are associated with rare words, the largest ones being associated with the less frequent words. The sizes are determined so as to minimize our cost model on GPU.

Let us now consider the more general case where the dictionary is partitioned as V = Vh ∪ V1 ∪ . . . ∪ VJ, with Vi ∩ Vj = ∅ if i ≠ j. We consider the hierarchical model depicted in Figure 2, where the sub-dictionary Vh is accessed at the first level, and the others in the second level. We now investigate the computational cost C of the forward (equivalently, backward) pass of this approximate softmax layer. For the time being, we fix the batch size B and the dimensionality d of the hidden layer, in order to analyze the complexity as a function of the sub-dictionary sizes and probabilities. We denote by pi = ∑_{w∈Vi} p(w) the probability P(w ∈ Vi) and ki = |Vi| the cardinality of each cluster. The costs of accessing the head and of accessing each cluster are respectively

Ch = ph g(J + kh)  and  ∀i, Ci = pi [g(J + kh) + g(ki)],   (6, 7)

which gives an overall expected cost

C = g(J + kh) + ∑_i pi g(ki).   (8)

We add the constraint ki ≥ k0 to ensure that there is no penalty induced by the constant part of the cost model of Equation (5); the previous equation then simplifies as

C = c + λ(J + kh) + ∑_i pi (c + λki) = c(2 − ph) + λ [J + kh + ∑_i pi ki].

Let us discuss this equation, by first considering that the cardinalities of the sub-vocabularies are fixed. The right-most term is the only one that depends on the word probabilities. For two distinct clusters Vi and Vj, we can re-write pj kj as (pi+j − pi) kj, where pi+j = pi + pj, so that

pi ki + pj kj = pi (ki − kj) + pi+j kj.   (9)

Without loss of generality, we assume that ki ≥ kj.
The quantities pi+j, ki and kj being fixed, the second term of the right-hand side of this equation is constant, and the best strategy is trivially to minimize the probability of the largest cluster Vi. In other terms, an optimal solution for Equation (9) requires that the most frequent words are assigned to the smallest cluster. This remark is true for any tuple (i, j), and we easily see that this point also holds for the head cluster. As a consequence, for a fixed number of clusters of given sizes, the best strategy is to assign the words by decreasing probabilities to clusters of increasing size. Note, this analysis remains valid as long as g is monotonically increasing in k.

Determining ki with J fixed: dynamic programming. We now assume that the number of clusters is fixed. Following our analysis above, the optimization solely depends on the cardinalities ki for all clusters, which perfectly determine how to split the list of words ordered by frequency. We solve this problem by dynamic programming.
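A minimal version of this dynamic program could look as follows (our sketch; it assumes word probabilities sorted by decreasing frequency and reuses the cost model g from above; the head term g(J + kh) is constant once kh and J are fixed and can be added outside). best[j][n] holds the lowest tail cost of splitting the first j words into n contiguous clusters.

def assign_clusters(probs, n_clusters, g):
    # probs: per-word probabilities, sorted by decreasing frequency
    k = len(probs)
    prefix = [0.0]
    for p in probs:                      # prefix[j] = total probability of words 0..j-1
        prefix.append(prefix[-1] + p)
    INF = float("inf")
    best = [[INF] * (n_clusters + 1) for _ in range(k + 1)]
    back = [[0] * (n_clusters + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for n in range(1, n_clusters + 1):
        for j in range(1, k + 1):
            for i in range(n - 1, j):    # words i..j-1 form the n-th cluster
                cost = best[i][n - 1] + (prefix[j] - prefix[i]) * g(j - i)
                if cost < best[j][n]:
                    best[j][n], back[j][n] = cost, i
    cuts, j = [], k                      # recover the cluster boundaries
    for n in range(n_clusters, 0, -1):
        cuts.append(j)
        j = back[j][n]
    return best[k][n_clusters], cuts[::-1]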
Finding the number of clusters. The only remaining free variable in our optimization is J, since the other parameters are then determined by the aforementioned optimizations. For this step, the cost of Equation (9) over-estimates the number of clusters, because we have neglected the effect of the non-linearity of the batch size: in the second layer, the batches are typically smaller than the inflection point k0. In practice, we optimize over small values of J = 1, 2, 3, 4 and empirically determine the best compromise speed/perplexity on training data. Note, having a lower number of clusters with numerous frequent words on the first level has another flavor: we empirically observe that it offers a better perplexity than a word hierarchy with a large number of clusters. It is comparable to that of the exact softmax for large corpora, as shown later by our experiments.

5 EXPERIMENTS

This section provides a set of experiments aiming at analyzing the trade-off between actual complexity and effectiveness of several strategies, in particular the approach presented in the previous section. First we describe our evaluation protocol, then we evaluate some of the properties of our model and finally we compare it on standard benchmarks against standard baselines.

Datasets. We evaluate our method on standard datasets, and use the perplexity (ppl) as an evaluation metric, as the function of the training time or of the number of training data (epochs). The datasets have varying vocabulary sizes, in different languages, which allows us to better understand the strengths and weaknesses of the different approaches.

- Text8¹ is a standard compression dataset containing a pre-processed version of the first 100 million characters from Wikipedia in English. It has been recently used for language modeling (Mikolov et al., 2014) and has a vocabulary of 44k words.
- Europarl² is a machine translation corpus, containing 20 languages (Koehn, 2005). For most languages, there are 10M-60M tokens and the vocabulary is in between 44k and 250k words.
- One Billion Word³ is a massive corpus introduced by Chelba et al. (2013). It contains 0.8B tokens and a vocabulary comprising almost 800k words.

¹ http://mattmahoney.net/dc/textdata
² http://www.statmt.org/europarl/
³ https://code.google.com/archive/p/1-billion-word-language-modeling-benchmark/

Implementation details. We use an LSTM with one layer in all our experiments. On Text8 and Europarl, the models have d = 512 hidden units and are regularized with weight decay (λ = 10−6). On the One Billion Word benchmark, we use d = 2048 hidden units and no regularization. The dimension of the input word embeddings is set to 256, so that large models fit in GPU memory. For the backpropagation through time, we unroll the models for 20 steps. We use Adagrad (Duchi et al., 2011), with a step size of 0.1 and 5 epochs, and we clip the norm of the gradients to 1. The batch size B is set to 128, except on the Finnish portion of Europarl where B = 64 due to memory constraints. All the experiments were run on the same GPU with the Maxwell architecture.

Baselines. Our method is compared to: (1) the full softmax, (2) the hierarchical softmax (HSM) with frequency binning (Mikolov et al., 2011b), (3) importance sampling (Bengio et al., 2003b; Bengio & Senecal, 2008) and (4) the differentiated softmax (Chen et al., 2015). For HSM, we tried different strategies for the binning. We observe that using the square root function on the count before computing the word bins is the most efficient. For the negative sampling method, we used a number of samples equal to 20% of the size of the vocabulary (Chen et al., 2015). For the differentiated softmax (D-softmax), we used the same partitions for the vocabulary as for our approach. We tried two versions of the differentiated softmax. The first is the one described by Chen et al. (2015), where each word cluster uses a disjoint subset of the hidden representation. We also present an improved version, referred to as D-softmax [*], which uses our choice to have the whole hidden representation mapped to the different word clusters using projection matrices of different sizes.
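To make the compared layer concrete, here is a simplified sketch of an adaptive softmax forward pass as described in Section 4 (our PyTorch illustration under assumed shapes, not the released Torch code): the head scores the frequent words plus one logit per tail cluster, and each tail cluster first projects the hidden state to a smaller dimension dt = d/4.

import torch
import torch.nn as nn

class AdaptiveSoftmaxSketch(nn.Module):
    def __init__(self, d, cutoffs, d_ratio=4):
        # cutoffs: increasing word-index boundaries, e.g. [k_h, k_h + k_1, ..., k],
        # with the vocabulary sorted by decreasing frequency
        super().__init__()
        self.cutoffs = cutoffs
        n_clusters = len(cutoffs) - 1
        self.head = nn.Linear(d, cutoffs[0] + n_clusters)   # short-list + cluster logits
        self.tails = nn.ModuleList()
        for i in range(n_clusters):
            self.tails.append(nn.Sequential(
                nn.Linear(d, d // d_ratio, bias=False),                # projection: g(d_t, d)
                nn.Linear(d // d_ratio, cutoffs[i + 1] - cutoffs[i]),  # scores: g(k_i, d_t)
            ))

    def log_prob(self, h):
        head_logp = torch.log_softmax(self.head(h), dim=-1)
        out = [head_logp[:, : self.cutoffs[0]]]
        for i, tail in enumerate(self.tails):
            cluster_logp = head_logp[:, self.cutoffs[0] + i : self.cutoffs[0] + i + 1]
            out.append(cluster_logp + torch.log_softmax(tail(h), dim=-1))
        return torch.cat(out, dim=-1)   # log P(w | h) over the full vocabulary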
Method          ppl    training time
full softmax    144    83 min
sampling        166    41 min
HSM (freq)      166    34 min
D-softmax       195    53 min
D-softmax [*]   147    54 min
Ours            147    30 min

Table 1: Text8. Perplexity and training time after 5 epochs. Our approach is significantly better than other published approximate strategies. We also show that improving the baseline D-softmax [*] as discussed in the text improves the results, but is slower than our proposal. Note, approximate strategies are comparatively less interesting for small vocabularies such as in this case.

Comparison with the state of the art. Table 1 reports the results that we achieve on Text8. On this small vocabulary, approximate methods are comparatively less interesting. Our approach is the only one to approach the result of the full softmax (below by 3 points of perplexity), while being the fastest. Our improved variant D-softmax [*] of the work by Chen et al. (2015) obtains similar results but is slower by a factor 1.8×.

On Europarl, we first present the convergence properties of our approach compared to other approximate strategies in Figure 3, which shows the perplexity (ppl) as a function of training time. Our approach significantly outperforms all competitors by a large margin. For reference, we also show the performance (D-softmax [*]) obtained by improving the D-softmax, to make it more comparable to our method. Our method is 2× to 3× faster than this improved competitor, which demonstrates how critical our optimization strategy is. Similar conclusions are drawn from Table 3 for other languages from the Europarl corpus.

[Figure 3: validation perplexity (from 80 to 200) against training time (0 to 800 min) for Full softmax, Sampling, HSM, D-softmax [*] and Ours.]

Figure 3: Training on Europarl (Finnish): perplexity (on validation) as the function of time for our method and approaches from the state of the art. We represent the result after each epoch by a point. Our method favorably compares with all other approaches w.r.t. the tradeoff perplexity and training time, and of training data vs perplexity. Similar conclusions are drawn for the other languages.

Language:      bg         cs         da         de         el         es
k =            50k        83k        128k       143k       100k       87k
Method         ppl   t    ppl   t    ppl   t    ppl   t    ppl   t    ppl   t
Full           37    58   62    132  37    713  42    802  38    383  30    536
Sampling       40    29   70    53   40    247  45    262  41    144  32    217
HSM (freq)     43    17   78    29   42    114  51    124  45    73   34    110
D-softmax      47    36   82    75   46    369  56    397  50    211  38    296
D-softmax [*]  37    36   62    76   36    366  41    398  37    213  29    303
Ours           37    18   62    30   35    105  40    110  36    72   29    103

Table 3: Europarl. Perplexity after 5 epochs for different languages as a function of time t (minutes).

Table 2 gives the test perplexity on the One Billion Word benchmark: our method achieves a perplexity of 43.9 after five epochs, taking less than three days to train on a single GPU. In comparison, only Jozefowicz et al. (2016) achieves a lower perplexity, but with a model 8× bigger than ours and trained over 32 GPUs during 3 weeks. We also note that for models of similar size, we achieve similar perplexity to the method introduced by Jozefowicz et al. (2016). As far as we know, ours is the first method to achieve a perplexity lower than 50 on a single GPU.

Table 2: One Billion Word benchmark. Perplexity on the test set for single models. Our result is obtained after 5 epochs.

In this paper, we have proposed a simple yet efficient approximation of the softmax classifier. To our knowledge, it is the first speed optimizing approximation that obtains performance on par with the exact model. This is achieved by explicitly taking into account the computational complexity of parallel systems and combining it with a few important observations, namely keeping a short-list of frequent words in the root node (Schwenk, 2007) and reducing the capacity of rare words (Chen et al., 2015). In all our experiments on GPU, our method consistently maintains a low perplexity while enjoying a speed-up going from 2× to 10× compared to the exact model. This type of speed-up allows to deal with extremely large corpora in reasonable time and without the need of a large number of GPUs. We believe our approach to be general enough to be applied to other parallel computing architectures and other losses, as well as to other domains where the distributions of the class are unbalanced.

ACKNOWLEDGMENTS

The authors would like to thank Jeff Johnson for his help with GPU benchmarking and Tomas Mikolov for insightful discussions.

REFERENCES

Jacob Andreas and Dan Klein. When and why are log-linear models self-normalizing. In ACL, 2014.
Lalit R Bahl, Frederick Jelinek, and Robert L Mercer. A maximum likelihood approach to continuous speech recognition. PAMI, 1983.

Yoshua Bengio and Jean-Sebastien Senecal. Adaptive importance sampling to accelerate training of a neural probabilistic language model. Neural Networks, 2008.

Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. JMLR, 2003a.

Yoshua Bengio, Jean-Sebastien Senecal, et al. Quick training of probabilistic neural nets by importance sampling. In AISTATS, 2003b.

Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. Class-based n-gram models of natural language. Computational Linguistics, 1992.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul. Fast and robust neural network joint models for statistical machine translation. In ACL, 2014.

Jeffrey L Elman. Finding structure in time. Cognitive Science, 1990.

Joshua Goodman. Classes for fast maximum entropy training. In ICASSP, 2001a.

Joshua T Goodman. A bit of progress in language modeling. Computer Speech & Language, 2001b.

Michael Gutmann and Aapo Hyvarinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics, 2010.

Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, 2012.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. 2015.

Armand Joulin, Laurens van der Maaten, Allan Jabri, and Nicolas Vasilache. Learning visual features from large weakly supervised data. arXiv preprint arXiv:1511.02251, 2015.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Slava M Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. ICASSP, 1987.

Reinhard Kneser and Hermann Ney. Improved backing-off for m-gram language modeling. In ICASSP, 1995.

Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In MT Summit, 2005.

Roland Kuhn and Renato De Mori. A cache-based natural language model for speech recognition. PAMI, 1990.

Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model.
In INTERSPEECH, 2010.

Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Lukas Burget, and Jan Cernocky. Empirical evaluation and combination of advanced language modeling techniques. In INTERSPEECH, 2011a.

Tomas Mikolov, Stefan Kombrink, Lukas Burget, Jan Honza Cernocky, and Sanjeev Khudanpur. Extensions of recurrent neural network language model. In ICASSP, 2011c.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753, 2014.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011.

Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, 2013.

Hai-Son Le, Ilya Oparin, Alexandre Allauzen, Jean-Luc Gauvain, and Francois Yvon. Structured output layer neural network language model. In ICASSP, 2011.

Tomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model. In SLT, 2012.

Tomas Mikolov, Anoop Deoras, Daniel Povey, Lukas Burget, and Jan Cernocky. Strategies for training large scale neural network language models. In ASRU, 2011b.

Andriy Mnih and Geoffrey E Hinton. A scalable hierarchical distributed language model. In NIPS, 2009.

Holger Schwenk. Continuous space language models. Computer Speech & Language, pp. 492-518, 2007.

Noam Shazeer, Joris Pelemans, and Ciprian Chelba. Sparse non-negative matrix language modeling for skip-grams. In Proceedings of Interspeech, pp. 1428-1432, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 2014.

Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. Decoding with large-scale neural language models improves translation. In EMNLP, 2013.

Pascal Vincent, Alexandre de Brebisson, and Xavier Bouthillier. Efficient exact gradient update for training deep networks with very large sparse targets. In NIPS, 2015.

Paul J Werbos. Backpropagation through time: what it does and how to do it. 1990.

George Kingsley Zipf. Human behavior and the principle of least effort. 1949.

Andriy Mnih and Yee Whye Teh. A fast and simple algorithm for training neural probabilistic language models. arXiv preprint arXiv:1206.6426, 2012.

Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In AISTATS, 2005.

Holger Schwenk, Anthony Rousseau, and Mohammed Attik. Large, pruned or continuous space language models on a gpu for statistical machine translation. In NAACL-HLT Workshop, 2012.

Ronald J Williams and Jing Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 1990.

Geoffrey Zweig and Konstantin Makarychev. Speed regularization and optimality in word classing. In ICASSP, 2013.
VOCABULARY SELECTION STRATEGIES FOR NEURAL MACHINE TRANSLATION

Gurvan L'Hostis*
Ecole polytechnique
Palaiseau, France

David Grangier
Facebook AI Research
Menlo Park, CA

Michael Auli
Facebook AI Research
Menlo Park, CA

* Gurvan was interning at Facebook for this work.

ABSTRACT

Classical translation models constrain the space of possible outputs by selecting a subset of translation rules based on the input sentence. Recent work on improving the efficiency of neural translation models adopted a similar strategy by restricting the output vocabulary to a subset of likely candidates given the source. In this paper we experiment with context and embedding-based selection methods and extend previous work by examining speed and accuracy trade-offs in more detail. We show that decoding time on CPUs can be reduced by up to 90% and training time by 25% on the WMT15 English-German and WMT16 English-Romanian tasks at the same or only negligible change in accuracy. This brings the time to decode with a state of the art neural translation system to just over 140 words per second on a single CPU core for English-German.

1 INTRODUCTION

The efficiency of neural models depends on the size of the target vocabulary and previous work has shown that vocabularies of well over 50k word types are necessary to achieve good accuracy (Jean et al., 2015a; Zhou et al., 2016). Neural translation systems compute the probability of the next target word given both the previously generated target words as well as the source sentence. Estimating this conditional distribution is linear in the size of the target vocabulary, which can be very large for many language pairs (Grave et al., 2016). Recent work in neural translation has adopted sampling techniques from language modeling which do not leverage the input sentence (Mikolov et al., 2011; Jean et al., 2015a; Chen et al., 2016; Zhou et al., 2016).

On the other hand, classical translation models generate outputs in an efficient two-step selection procedure: first, a subset of promising translation rules is chosen by matching rules to the source sentence, and by pruning them based on local scores such as translation probabilities. Second, translation hypotheses are generated that incorporate non-local scores such as language model probabilities. Recently, Mi et al. (2016) proposed a similar strategy for neural translation: a selection method restricts the target vocabulary to a small subset, specific to the input sentence. The subset is then scored by the neural model. Their results demonstrate that vocabulary subsets that are only about 1% of the original size result in very little to no degradation in accuracy.

This paper complements their study by experimenting with additional selection techniques and by analyzing speed and accuracy in more detail. Similar to Mi et al. (2016), we consider selecting target words based either on a dictionary built from Viterbi word alignments, or by matching phrases in a traditional phrase-table, or by using the k most frequent words in the target language. In addition, we investigate methods that do not rely on a traditional phrase-based translation model or alignment model to select target words. We investigate bilingual co-occurrence counts, bilingual embeddings as well as a discriminative classifier to leverage context information via features extracted from the entire source sentence (2).

Our experiments show speed-ups in CPU decoding by up to a factor of 10 at very little degradation in accuracy. Training speed on GPUs can be increased by a factor of 1.33. We find that word alignments as the sole selection method are sufficient to obtain good accuracy. This is in contrast to Mi et al. (2016), who used a combination of the 2,000 most frequent words, word alignments as well as phrase-pairs. Selection methods often fall short in retrieving all words of the gold standard human translation. However, we find that with a reduced vocabulary of ~600 words they can recover over 99% of the words that are actually chosen by a model that decodes over the entire vocabulary. Finally, the speed-ups obtained by vocabulary selection become even more significant if faster encoder models are used, since selection removes the burden of scoring large vocabularies.
This section presents different selection strategies inspired by phrase-based translation. We improve on a simple word co-occurrence method by estimating bilingual word embeddings with Hellinger PCA and then by using word alignments instead of co-occurrence counts. Finally, we leverage richer context in the source via bilingual phrases from a phrase-table or by using the entire sentence in a Support Vector Machine classifier.

2.1 WORD CO-OCCURRENCE

This is the simplest approach we consider. We estimate a co-occurrence table which counts how many times each source word s co-occurs with each target word t in the training bitext. The table allows us to estimate the joint distribution P(s, t). Next, we create a list of the k target words that co-occur most with each source word, i.e., the words t which maximize P(s, t) for a given s. Vocabulary selection then simply computes the union of the target word lists associated with each source word in the input.

We were concerned that this strategy over-selects frequent target words, which have higher co-occurrence counts than rare words, regardless of the source word. Therefore, we experimented with selecting target words maximizing point-wise mutual information (PMI) instead, i.e.,

PMI(s, t) = P(s, t) / (P(s) P(t)).

However, this estimate was deemed too unreliable for low P(t) in preliminary experiments and did not perform better than just P(s, t).
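A small sketch of this baseline (ours; the bitext format and helper names are assumptions). The PMI variant simply divides each count by the target word's marginal before taking the top k.

from collections import Counter, defaultdict

def build_topk_table(bitext, k=100):
    # bitext: iterable of (source_tokens, target_tokens) sentence pairs
    cooc = defaultdict(Counter)
    for src, tgt in bitext:
        for s in set(src):
            for t in set(tgt):
                cooc[s][t] += 1          # co-occurrence counts, proportional to P(s, t)
    return {s: [t for t, _ in c.most_common(k)] for s, c in cooc.items()}

def select_vocab(table, source_sentence):
    # union of the per-word candidate lists for one input sentence
    vocab = set()
    for s in source_sentence:
        vocab.update(table.get(s, ()))
    return vocab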
2.2 BILINGUAL EMBEDDINGS

We build bilingual embeddings by applying Hellinger Principal Component Analysis (PCA) to the bilingual co-occurrence count matrix Mi,j = P(t = i | s = j); this extends the work on monolingual embeddings of Lebret & Collobert (2014) to the bilingual case. The resulting low rank estimate of the matrix can be more robust for rare counts. Hellinger PCA has been shown to produce embeddings which perform similarly to word2vec but at higher speed (Mikolov et al., 2013; Gouws et al., 2015).

For selection, the estimated co-occurrence can be used instead of the raw counts as described in the above section. This strategy is equivalent to using the low rank representation of each source word (source embedding, i.e., column vectors from the PCA) and finding the target word with the closest low rank representation (target embeddings, i.e., row vectors from the PCA).

2.3 WORD ALIGNMENTS

This strategy uses word alignments learned from a bilingual corpus (Brown et al., 1993). Word alignment introduces latent variables to model P(t|s), the probability of target word t given source word s. Latent variables indicate the source position corresponding to each target position in a sentence pair (Koehn, 2010). We use FastAlign, a popular reparameterization of IBM Model 2 (Dyer et al., 2013). For each source word s, we build a list of the top k target words maximizing P(t|s). The candidate target vocabulary is the union of the lists for all source words.

Compared to co-occurrence counts, this strategy avoids selecting frequent target words when conditioning on a rare source word. Word alignments will only link a frequent target word to a rare source word if no better explanation is present in the source sentence.

2.4 PHRASE PAIRS

This strategy relies on a phrase translation table, i.e., a table pairing source phrases with corresponding target phrases. The phrase table is constructed by reading off all bilingual phrases that are consistent with the word alignments according to an extraction heuristic (Koehn, 2010). For selection, we restrict the phrase table to the phrases present in the source sentence and consider the union of the word types appearing in all corresponding target phrases (Mi et al., 2016). Compared to word alignments, we hope this strategy to fetch more relevant target words as it can rely on longer source phrases to leverage richer source context.

2.5 SUPPORT VECTOR MACHINES

Support Vector Machines (SVMs) for vocabulary selection have been previously proposed in (Bangalore et al., 2007). The idea is to determine a target vocabulary based on the entire source sentence rather than individual words or phrases. In particular, we train one SVM per target word taking as input a sparse vector encoding the source sentence as a bag of words. The SVM then predicts whether the considered target word is present or absent from the target sentence.

This classifier-based method has several advantages compared to phrase alignments: the input is not restricted to a few contiguous source words and can leverage all words in the source sentence. The model can express anti-correlation with negative weights, marking that the presence of a source word is a negative indicator for the presence of a target word. A disadvantage of this approach is that we need to feed the source sentence to all SVMs in order to get scores, instead of just reading from a pre-computed table. However, SVMs can be evaluated efficiently since (i) the features are sparse, and (ii) only features corresponding to words from the source sentence are used at each evaluation. Finally, this framework formulates the selection of each target word as an independent binary classification problem, which might not favor competition between target words.

2.6 COMMON WORDS

Following Mi et al. (2016), we consider adding the k most frequent target words to the above selection methods. This set includes conjunctions, determiners, prepositions and frequent verbs. Pruning any such word through restrictive vocabulary selection may adversely affect the system output and is addressed by this technique; a sketch combining it with the SVM selection follows.
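The following sketch is our illustration only: the paper trains its classifiers with SvmSgd, while here scikit-learn's LinearSVC stands in for it, and the matrix names are assumptions. Target words that never occur (or always occur) in the training sentences would need special handling before fitting.

import numpy as np
from sklearn.svm import LinearSVC   # stand-in for SvmSgd; any linear classifier works

def train_word_svms(X_source, Y_target):
    # X_source: (n_sents, |V_src|) bag-of-words matrix of source sentences
    # Y_target: (n_sents, n_target_words) binary presence matrix of target words
    svms = []
    for t in range(Y_target.shape[1]):   # one independent binary classifier per target word
        clf = LinearSVC()
        clf.fit(X_source, Y_target[:, t])
        svms.append(clf)
    return svms

def svm_select(svms, x, common=()):
    # x: (1, |V_src|) bag-of-words vector of one source sentence;
    # the union with `common` implements the frequent-word augmentation of Section 2.6
    selected = {t for t, clf in enumerate(svms) if clf.decision_function(x)[0] > 0}
    return selected | set(common)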
"}, {"section_index": "6", "section_name": "3 RELATED WORK", "section_text": "The selection of a limited target vocabulary from the source sentence is a classical topic in machine translation. It is often referred to as lexical selection. As mentioned above, word-based and phrase-based systems perform implicit lexical selection by building a word or phrase table from alignments to constrain the possible target words.

Other approaches to lexical selection include discriminative models such as SVMs and Maximum Entropy models (Bangalore et al. 2007) as well as rule-based systems (Tufis 2002; Tyers et al. 2012).

In the context of neural machine translation, vocabulary size has always been a concern. Various strategies have been proposed to improve training and decoding efficiency. Approaches inspired by importance sampling reduce the vocabulary for training (Jean et al. 2015a), byte pair encoding segments words into more frequent sub-units (Sennrich et al. 2016a), while Luong & Manning (2016) propose to segment words into characters. Related work in neural language modeling is also relevant (Bengio et al. 2003; Mnih & Hinton 2008; Chen et al. 2016). One can refer to Sennrich (2016) for further references.

Closer to our work, recent work (Mi et al. 2016) presents preliminary results on using lexical selection techniques in an NMT system. Compared to this work, we investigate more selection methods (SVM, PCA, co-occurrences) and analyze the speed/accuracy trade-offs at various operating points. We report efficiency gains and distinguish the impact of selection in training and decoding.

This section presents our experimental setup, then discusses the impact of vocabulary selection at decoding time and then during training time.

"}, {"section_index": "7", "section_name": "4.1 EXPERIMENTAL SETUP", "section_text": "We use an encoder-decoder style neural machine translation system based on Torch.¹ Our encoder is a bidirectional recurrent neural network (Long Short Term Memory, LSTM). The resulting context vector is fed to an LSTM decoder with attention, which generates the output (Bahdanau et al. 2015; Luong et al. 2015a). We use a single-layer setup in both the encoder and the decoder, each with 512 hidden units. Decoding experiments are run on a CPU since this is the most common type of hardware for inference. For training we use GPUs, which are the most common hardware for neural network fitting. Specifically, we rely on 2.5GHz Intel Xeon 5 CPUs and Nvidia Tesla M40 GPUs. Decoding times are based on a single CPU core and training times are based on a single GPU card.

¹We will release the code with the camera ready.

Word alignments are computed with FastAlign (Dyer et al. 2013) in both language directions and then symmetrized with 'grow-diag-final-and'. Phrase tables are computed with Moses (Koehn et al. 2007) and we train support vector machines with SvmSgd (Bottou 2010). We also use the Moses preprocessing scripts to tokenize the training data.

We experiment on two language pairs. The majority of experiments are on WMT-15 English to German data (Bojar et al. 2015); we use newstest2013 for validation and newstest2010-2012 as well as newstest2014 and newstest2015 to present final test results. Training is restricted to sentences of no more than 50 words, which results in 3.6m sentence pairs. We chose the vocabulary sizes following the same methodology: we use the 100k most frequent words both for the source and target vocabulary. At decoding time we use a standard beam search with a beam width of 5 in all experiments. Unknown output words are simply replaced with the source word whose attention score is largest (Luong et al. 2015b).
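The unknown-word replacement rule just described admits a very small implementation. The sketch below assumes a per-output-position attention matrix over source positions is available; it is illustrative and not the actual code.

```python
import numpy as np

def replace_unknowns(output_tokens, src_tokens, attention, unk="<unk>"):
    """attention has shape (len(output_tokens), len(src_tokens)); each
    unknown output word is replaced by the source word receiving the
    largest attention score at that output position."""
    return [src_tokens[int(np.argmax(attention[i]))] if tok == unk else tok
            for i, tok in enumerate(output_tokens)]
```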
We also experiment with WMT-16 English to Romanian data using a similar setting but allowing sentences of up to 125 words (Bojar et al. 2016). Since the training set provided by WMT is limited to 600k sentence pairs, we add the synthetic training data provided by Sennrich et al. (2016b). This results in a total of 2.4m sentence pairs. Our source vocabulary comprises the 200k most frequent words and the target vocabulary contains 50k words.

"}, {"section_index": "8", "section_name": "4.2 SELECTION FOR EFFICIENT DECODING", "section_text": "Decoding efficiency of neural machine translation is still much lower than for traditional phrase-based translation. For NMT, the running time of beam search on a CPU is dominated by the last linear layer that computes a score for each target word. Vocabulary selection can therefore have a large impact on decoding speed. Figure 1 (left) shows that a reduced vocabulary of ~460 types (144 msec) can achieve a 10X speedup over using the full 100k-vocabulary (~1,600 msec).

Next we investigate the impact of reduced vocabularies on accuracy. Figure 1 (right) compares BLEU for the various selection strategies on a wide range of vocabulary sizes. Broadly, there are two groups of techniques: first, co-occurrence counts and bilingual embeddings (PCA) are not able to match the baseline performance (Full 100k) even with over 5k candidate words per sentence. Second, even with fewer than 1,000 candidates per sentence, word alignments, phrase pairs and SVMs nearly match the full vocabulary accuracy.

[Figure 1: two panels over average vocabulary size per sentence; the left panel plots decoding time in msec/sentence, the right panel plots BLEU for Full 100k, word alignment, phrase pairs, SVM, co-occurrences, PCA and most-frequent selection.]

Figure 1: Left: Decoding time vs. vocabulary size on newstest2013 for WMT15 English to German translation. Right: BLEU vs. vocabulary size for different selection strategies.

Although co-occurrence counts and PCA have proven useful for measuring semantic relatedness (Brown et al. 1992; Lebret & Collobert 2014), it seems that considering the whole source sentence as the explanation of a target word without latent alignment variables undermines their selection ability. Overall, word alignments work as well as or better than the other techniques relying on a wider input context (phrase pairs and SVMs). Querying a word table is also more efficient than querying a phrase-table or evaluating SVMs. We therefore use word alignment-based selection for the rest of our analysis.

Mi et al. (2016) suggest that adding common words to a selection technique could have a positive impact. We therefore consider adding the most frequent k words to our word alignment-based selection. Figure 2 shows that this actually has little impact on BLEU in our setting. In fact, the overlap of the results for n = 0 and 50 indicates that most of the top 50 words are already selected, even with small candidate sets.

[Figure 2: BLEU vs. average vocabulary size per sentence for word alignment selection with 0, 50, 500, 1000, 1500 and 2000 added common words, compared to Full 100k.]

Figure 2: Impact of adding common words to word alignment selection for various vocabulary sizes.

Next we try to get a better sense of how precise selection is with respect to the words used by a human translator or with respect to the translations generated by the full vocabulary model. We use word alignments for this experiment. Figure 3 shows coverage with respect to the reference (left) and with respect to the output of the full vocabulary system (right). We do not count unknown words (UNK) in all settings, even if they may later be replaced by a source word (Section 4.1).
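A minimal sketch of this coverage metric (illustrative names, not the actual evaluation code): the fraction of non-UNK tokens of the reference, or of the full-vocabulary output, that fall inside the selected vocabulary.

```python
def coverage(selected_vocab, tokens, unk="<unk>"):
    tokens = [t for t in tokens if t != unk]   # UNKs are never counted
    if not tokens:
        return 1.0
    return sum(t in selected_vocab for t in tokens) / len(tokens)
```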
Not counting UNKs is the reason why the full vocabulary models do not achieve 100% coverage in either setting. The two graphs show different trends: on the left, coverage with respect to the reference for the full vocabulary is 95.1%, while selection achieves 87.5% with a vocabulary of 614 words (3rd point on the graph). However, when coverage is measured with respect to the full vocabulary system output, the coverage of selection is very close to the full vocabulary model with respect to itself, i.e., when unknown words are not counted. In fact, the selection model covers over 99% of the non-UNK words in the full vocabulary output. This result shows that selection can recover almost all of the words which are effectively selected by a full vocabulary model while discarding many words which are not chosen by the full model.

[Figure 3: two panels plotting coverage vs. average vocabulary size per sentence for Full 100k and word alignment selection.]

Figure 3: Left: Coverage of the reference by the word alignment selection for different vocabulary sizes and by the full 100k model. Right: Coverage of the full vocabulary model prediction by the word alignment selection method. Coverage does not count unknown words, therefore the full model has non-perfect coverage on itself. Vocabulary selection never fully covers the reference (left) but it almost entirely covers the prediction of the full vocabulary model, even when very few candidates are selected.

What is the exact speed and accuracy trade-off when reducing the output vocabulary? Figure 4 plots BLEU against decoding speed. We pick a number of operating points from this graph for our final test set experiments (Table 1). For our best methods (word alignments, phrase alignments and SVMs) we pick points such that vocabularies are kept small while maintaining good accuracy compared to the full vocabulary setting. For co-occurrence counts and bilingual PCA we choose settings with comparable speed.

[Figure 4: BLEU vs. decoding time in msec/sentence for Full 100k, word alignment, phrase pairs, SVM, co-occurrences and PCA.]

Figure 4: BLEU accuracy versus decoding speed for a beam size of 5 on CPU. Significant speed-ups can be achieved with no decrease in BLEU accuracy, e.g., word alignment selection achieves 20.2 BLEU at 137 msec/sentence (156 words per sec) while the full vocabulary model requires 1,581 msec/sentence (13.5 words per sec) at the same accuracy level; this is equivalent to an 11-fold speed-up.

Our test results (Table 1) confirm the validation experiments. On English-German translation we achieve more than a 10-fold speed-up over the full vocabulary setting. Accuracy for the word alignment-based selection matches the full vocabulary setting on most test sets or decreases only slightly. For example, with word alignment selection the largest drop is on newstest2015, which achieves 22.2 BLEU compared to 22.5 BLEU for the full setting on English-German; the best single-system neural setup at WMT15 achieved 22.4 BLEU on this dataset (Jean et al. 2015b). On English-Romanian, we achieve a speed-up of over 5 times with word alignments at 28.1 BLEU versus 27.9 BLEU for the full vocabulary baseline. This matches the state-of-the-art on this dataset (Sennrich et al. 2016b) from WMT16. The smaller speed-up on English-Romanian is due to the smaller vocabulary of the baseline in this setting, which is 50k compared to 100k for English-German.

EN-DE        Max.   2010   2011   2012   2014   2015   Voc.      Cov.    Time    Speed
Full vocab          18.5   16.5   16.8   19.0   22.5   100,000   93.3%   1,524   13
Co-occur.    300    17.2   15.6   15.8   18.1   20.6   1,036     81.1%   156     141
PCA          100    15.4   13.7   14.2   14.5   18.6   966       74.8%   143     144
Word align   100    18.5   16.4   16.7   19.0   22.2   1,093     88.5%   143     144
Phrases      200    18.1   16.2   16.6   18.9   22.0   857       86.2%   153     135
SVM                 18.3   16.2   16.6   18.8   21.9   1,284     86.6%

EN-RO        Max.   2016                               Voc.      Cov.    Time    Speed
Full vocab          27.9                               50,000    96.0%   966     26
Word align   50     28.1                               691       89.3%   186     136

Table 1: Final decoding accuracy results for WMT English-German and English-Romanian on various test sets (newstest2010-2016, except newstest2013, our validation set). We report the average vocabulary size per sentence, coverage of the reference and decoding time in milliseconds per sentence for newstest2015 and newstest2016. Decoding speed is reported in words per second. All timings are measured on the same machine using a single CPU core. The Max. column indicates the maximum number of selected candidates per source word or phrase.

"}, {"section_index": "9", "section_name": "4.3 SELECTION FOR BETTER TRAINING", "section_text": "So far our evaluation focused on vocabulary selection for decoding, relying on a model trained with the full vocabulary. Next we address the question of whether the efficiency advantages observed for decoding translate to training as well. Selection at training may impact generalization performance either way: it assimilates training and testing conditions, which could positively impact accuracy. However, training with a reduced vocabulary could result in worse parameter estimates, especially for rare words, which would receive much fewer updates because they would be selected less often.

We run training experiments on WMT English to German with word alignment-based selection. In addition to the selected words, we include the target words of the reference and train a batch with the union of the sentence-specific vocabularies of all samples (Mi et al. 2016).
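A small sketch of this batch-level vocabulary construction, following the description above; the names are illustrative and this is not the actual code.

```python
def batch_vocabulary(batch, candidate_table, common_words=()):
    """Union of the sentence-specific selections over a batch, always
    including the reference target words so the training loss is defined."""
    vocab = set(common_words)
    for src_tokens, ref_tokens in batch:
        for s in src_tokens:
            vocab.update(candidate_table.get(s, ()))
        vocab.update(ref_tokens)
    return sorted(vocab)   # the output softmax is computed over this set only
```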
Figure 5 compares validation accuracy of models trained with selection or with the full vocabulary. Selection in both training and decoding gives a small accuracy improvement. However, this improvement disappears for vocabulary sizes of 500 and larger; we found the same pattern on other test sets. Similar to our decoding experiments, adding common words during training did not improve accuracy. Table 2 shows the impact of selection on training speed. Our bi-directional LSTM model (BLSTM) can process the same number of samples in 25% less time on a GPU with a batch size of 32 sentences. We do not observe changes in the number of epochs required to obtain the best validation BLEU.

[Figure 5: BLEU vs. average vocabulary size per sentence for train full / decode full, train full / decode selection, and train selection / decode selection.]

Figure 5: Accuracy on the validation set for different vocabulary sizes when using word alignment-based selection during training and testing, or the full vocabulary.

The speed-ups for training are significantly smaller than for decoding (Table 1). This is because training scores the vocabulary exactly once per target position, while beam search has to score multiple hypotheses at each generation step.

We suspect that training is now dominated by the bi-directional LSTM encoder. To confirm this, we replaced the encoder with a simple average pooling model which encodes source words as the mean of word and position embeddings over a local context (Ranzato et al. 2016). Table 2 shows that in this setting the efficiency gains of vocabulary selection are more substantial (40% less time per epoch). This model is not as accurate and achieves only 18.5 BLEU on newstest2015 compared to 22.5 for the bi-directional LSTM encoder. However, it shows that improving the efficiency of the encoder is a promising future work direction.

Vocab. per batch          100k    6k
Avg. pooling encoder      5h 55   3h 34 (-40%)
BLSTM encoder             9h 34   7h 13 (-25%)

Table 2: Training times per epoch over 3.6m sentences in hours and minutes on English-German for the full (100k) and reduced vocabulary settings (6k). Measurements include forward/backward/update on a GPU for a batch of size 32. The 6k candidate words per batch correspond to an average of 390 words per sentence.

This paper presents a comprehensive analysis of vocabulary selection techniques for neural machine translation. Vocabulary selection constrains the output words to be scored to a small subset relevant to the current source sentence. The idea is to avoid scoring a high number of unlikely candidates with the full model when they can be ruled out by simpler means.

We extend previous work by considering a wide range of simple and complex selection techniques including bilingual word co-occurrence counts, bilingual embeddings built with Hellinger PCA, word alignments, phrase pairs, and discriminative SVM classifiers. We explore the trade-off between speed and accuracy for different vocabulary sizes and validate results on two language pairs and several test sets.

Our experiments show that decoding time can be reduced by up to 90% without compromising accuracy. Word alignments, bilingual phrases and SVMs can achieve high accuracy, even when considering fewer than 1,000 word types per sentence.

At training time, we achieve a speed-up of up to 1.33 with a bi-directional LSTM encoder and 1.66 with a faster alternative. Efficiency increases are less pronounced during training because of two combined factors. First, vocabulary scoring at the final layer of the model is a smaller part of the computation compared to beam search. Second, state-of-the-art bi-directional LSTM encoders (Bahdanau et al. 2015) are relatively costly compared to scoring the vocabulary on GPU hardware. Efficiency gains from vocabulary selection highlight the importance of progress towards efficient, accurate encoder and decoder architectures.
"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. Journal of Machine Learning Research (JMLR), 2003.

Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. Class-based n-gram models of natural language. Computational Linguistics, 1992.

Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 1993.

Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Efficient softmax approximation for GPUs. arXiv preprint arXiv:1609.04309, 2016.

Sébastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. Montreal neural machine translation systems for WMT15. In Workshop on Statistical Machine Translation (WMT), 2015b.

Philipp Koehn. Statistical Machine Translation. Cambridge University Press, 2010.

Philipp Koehn, Franz Josef Och, and Daniel Marcu. Statistical phrase-based translation. In North American Chapter of the Association for Computational Linguistics (NAACL), 2003.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. Moses: Open source toolkit for statistical machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), 2007.

Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. Vocabulary manipulation for neural machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), 2016.

Tomas Mikolov, Anoop Deoras, Daniel Povey, Lukas Burget, and Jan Cernocky. Strategies for training large scale neural network language models. In IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 196-201, 2011.

Andriy Mnih and Geoffrey E. Hinton. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems (NIPS), 2008.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. 2016.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Annual Meeting of the Association for Computational Linguistics (ACL), 2016a.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Edinburgh neural machine translation systems for WMT 16. In Workshop on Machine Translation (WMT), 2016b.

Francis M. Tyers, Felipe Sánchez-Martínez, Mikel L. Forcada, et al. Flexible finite-state lexical selection for rule-based machine translation. 2012.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. TACL, 2016."}]
rkjZ2Pcxe | [{"section_index": "0", "section_name": "ADDING GRADIENT NOISE IMPROVES LEARNING FOR VERY DEEP NETWORKS", "section_text": "Arvind Neelakantan*†, Luke Vilnis*

College of Information and Computer Sciences
University of Massachusetts Amherst

{qvl, lukaszkaiser, kkurach}@google.com

{ilyasu}@openai.com

{jmartens}@cs.toronto.edu

Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we explore the low-overhead and easy-to-implement optimization technique of adding annealed Gaussian noise to the gradient, which we find surprisingly effective when training these very deep architectures. Unlike classical weight noise, gradient noise injection is complementary to advanced stochastic optimization algorithms such as Adam and AdaGrad. The technique not only helps to avoid overfitting, but also can result in lower training loss. We see consistent improvements in performance across an array of complex models, including state-of-the-art deep networks for question answering and algorithm learning. We observe that this optimization strategy allows a fully-connected 20-layer deep network to escape a bad initialization with standard stochastic gradient descent. We encourage further application of this technique to additional modern neural architectures.

"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have shown remarkable success in diverse domains including image recognition (Krizhevsky et al. 2012), speech recognition (Hinton et al. 2012) and language processing applications (Sutskever et al. 2014; Bahdanau et al. 2014). This broad success comes from a confluence of several factors. First, the creation of massive labeled datasets has allowed deep networks to demonstrate their advantages in expressiveness and scalability. The increase in computing power has also enabled training of far larger networks with more forgiving optimization dynamics (Choromanska et al. 2015). Additionally, architectures such as convolutional networks (LeCun et al. 1998) and long short-term memory networks (Hochreiter & Schmidhuber 1997) have proven to be easier to optimize than classical feedforward and recurrent models. Finally, the success of deep networks

*First two authors contributed equally. †Work was done when the author was at Google, Inc.

Recent work has aimed to push neural network learning into more challenging domains, such as question answering or program induction. These more complicated problems demand more complicated architectures (e.g., Graves et al. (2014); Sukhbaatar et al. (2015)), thereby posing new optimization challenges. While there is very active research in improving learning in deep feedforward and recurrent networks, such as layer-wise deep supervision (Lee et al. 2015), novel activation functions (Maas et al. 2013), initialization schemes (He et al. 2015), and cell architectures (Cho et al. 2014a; Yao et al. 2015), these are not always sufficient or applicable in networks with complex structure over the latent variables.
In order to achieve good performance, researchers have reported the necessity of additional techniques such as explicit labeling of latent variables (Weston et al. 2014), relaxing weight-tying constraints (Kaiser & Sutskever 2016), warmstarts (Peng et al. 2015), random restarts, and the removal of certain activation functions in early stages of training (Sukhbaatar et al. 2015).

The recurring theme is that commonly-used optimization techniques are not always sufficient to robustly optimize the models of interest. In this work, we explore a simple technique of adding annealed Gaussian noise to the gradient, which we find to be surprisingly effective in training deep neural networks with stochastic gradient descent. While there is a long tradition of adding random weight noise in neural networks, it has been under-explored in the optimization of modern deep architectures. Furthermore, although weight and gradient noise are equivalent when using standard SGD updates, the use of adaptive and momentum-based stochastic optimizers such as Adam and AdaGrad (Duchi et al. 2011; Kingma & Ba 2014) breaks this equivalence, allowing the noise to effectively adapt to the curvature of the optimization landscape. We find this property to be important when optimizing the most complex models.

While there exist theoretical and empirical results on the regularizing effects of conventional stochastic gradient descent, especially for the minimization of convex losses (Bousquet & Bottou 2008), we find that in practice the added noise can actually help us achieve lower training loss by encouraging active exploration of parameter space. This exploration proves especially necessary and fruitful when optimizing neural network models containing many layers or complex latent structures. For neural network learning, it has long been known that the noise in the stochastic gradient can help to escape saddle points and local optima (Bottou 1992). For this reason, neural network practitioners sometimes avoid overly-large mini-batch sizes to achieve the best results. We find that the Gaussian noise added in our technique is complementary to the noisy stochastic gradient, and a combination of Gaussian noise and tuned mini-batch sizes is necessary for the most complex models.

The main contribution of this work is to demonstrate the broad applicability of this simple method to the training of many complex modern neural architectures. To our knowledge, neither the exponentially decayed noise schedule nor the black box combination of injected gradient noise with adaptive optimizers have been used before in the training of deep networks. We consistently see improvements from Gaussian gradient noise when optimizing a wide variety of models, including very deep fully-connected networks, and special-purpose architectures for question answering and algorithm learning. For example, this method allows us to escape a poor initialization and successfully train a 20-layer rectifier network on MNIST with standard gradient descent. It also enables a 72% relative reduction in error in question answering, and doubles the number of accurate binary multiplication models learned across 7,000 random restarts. Gradient noise also possesses attractive robustness properties. We examine only two distinct settings of the noise variance hyperparameter in total across all experiments.
We additionally observe that in cases where gradient noise fails to improve over other learning techniques, it rarely significantly hurts a model's ability to generalize.

We hope that practitioners will see similar improvements in their own research by adding this simple technique, implementable in a single line of code, to their repertoire.

"}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Weight noise (Steijvers 1996) and adaptive weight noise (Graves 2011; Blundell et al. 2015), which usually maintains a Gaussian variational posterior over network weights, similarly aim to improve learning by added noise during training. In adaptive weight noise, an extra set of parameters for the variance must be maintained. This adaptation is different from our use of an adaptive optimizer, as it aims to capture an accurate estimate of uncertainty in the weights and not guide the exploration of parameter space. They differ from our proposed method in that the noise is not annealed and at convergence will be non-zero.

An annealed Gaussian gradient noise schedule was used to train the highly non-convex Stochastic Neighbor Embedding model in Hinton & Roweis (2002). The gradient noise schedule that we found to be most effective is very similar to the Stochastic Gradient Langevin Dynamics (SGLD) algorithm of Welling & Teh (2011), who use gradients with added noise to accelerate MCMC inference for logistic regression and independent component analysis models. This use of gradient information in MCMC sampling for machine learning to allow faster exploration of state space was previously proposed by Neal (2011). However, standard SGLD analysis does not allow for the use of adaptive optimizers or momentum, limiting the efficiency for very pathological optimization landscapes. Stochastic Gradient Riemannian Langevin Dynamics (Patterson & Teh 2013) adapts the gradient and noise using the Fisher information matrix, effectively following trajectories along the same manifold as the natural gradient (Amari 1998), but is applied only to models for which that matrix is tractable to estimate in closed form.

Various optimization techniques have been proposed to improve the training of neural networks. Most notable is the use of momentum (Polyak 1964; Sutskever et al. 2013; Kingma & Ba 2014) or adaptive learning rates (Duchi et al. 2011; Dean et al. 2012; Zeiler 2012). These methods are normally developed to provide good convergence rates for the convex setting, and then heuristically applied to nonconvex problems. Similarly, batch normalization and related methods (Ioffe & Szegedy 2015; Arpit et al. 2016; Salimans & Kingma 2016), natural gradient descent (Amari 1998; Desjardins et al. 2015), and K-FAC (Martens & Grosse 2015) can all be seen as various preconditioning methods using approximations to the inverse Fisher information of the neural network. While there has been some difficulty in combining batch normalization-type algorithms with recurrent networks (Laurent et al. 2015), recent work has had success in this area (Cooijmans et al. 2016; Ba et al. 2016).

Injecting noise in the gradient can be combined with any of the above methods, and can be seen as a complementary technique especially suitable for nonconvex problems. By adding additional artificial stochasticity to the gradient, this technique allows the model more chances to escape local minima or saddle points (see a similar argument in Bottou (1992)), or to traverse quickly through the "transient" plateau phase of early learning (see a similar analysis for momentum in Sutskever et al. (2013)).
This is borne out empirically in our observation that adding gradient noise can actually result in lower training loss. In this sense, we suspect adding gradient noise is similar to simulated annealing (Kirkpatrick et al. 1983), which exploits random noise to explore complex optimization landscapes. This can be contrasted with well-known benefits of stochastic gradient descent as a learning algorithm (Robbins & Monro 1951; Bousquet & Bottou 2008), where both theory and practice have shown that the noise induced by the stochastic process aids generalization by reducing overfitting.

Adding random noise to the weights, inputs, or hidden units has been a known technique amongst neural network practitioners for many years (e.g., Murray & Edwards; An (1996)). However, the benefits of gradient noise have not been fully explored with modern deep networks nor combined with advanced stochastic optimization techniques, which allow the noise to take into account the geometry of the optimization problem and the statistical manifold.

Recently, there has been a surge in research examining the use of gradient and weight noise when training deep neural networks. Mobahi (2016) presents an optimization technique for recurrent networks that applies an annealed Gaussian kernel smoothing method to the loss function, of which annealed weight noise is a Monte Carlo estimator. Li et al. (2016) present a version of SGLD that incorporates both Gaussian noise and adaptively estimated learning rates (but no momentum term). Though significantly more complex than our proposed method, the most similar work is the Santa algorithm of Chen et al. (2016). Santa combines SGLD with adaptive learning rates and adaptive per-coordinate momentum parameters, and shows that the scheme can approach global optima of the objective function under certain assumptions.

"}, {"section_index": "3", "section_name": "3 METHOD", "section_text": "We consider a simple technique of adding time-dependent Gaussian noise to the gradient g at every training step t:

g_t \leftarrow g_t + N(0, \sigma_t^2)

The gradient g_t is then used to update the weights \theta_t as if it were the original gradient of the loss function, and can be used with any stochastic optimization algorithm. Our experiments indicate that adding annealed Gaussian noise by decaying the variance often works better and more robustly than using fixed Gaussian noise (see Section 4.6). We use a schedule inspired from Welling & Teh (2011) in our experiments and take:

\sigma_t^2 = \frac{\eta}{(1 + t)^\gamma} \qquad (1)

We examine only 2 distinct noise hyperparameter configurations in our experiments, selecting η from {0.01, 1.0} and setting γ = 0.55 in all experiments. We believe this shows that annealed gradient noise is robust to minimal tuning. For example, in the experiments on Neural Programmer and Neural GPUs, we tried only a single configuration of noise parameters, simply setting η = 1.0 and tuning only the model hyperparameters as normal.
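For concreteness, the following is a minimal sketch of the update, assuming numpy parameters and plain SGD for clarity; with an adaptive optimizer such as Adam or AdaGrad, the same noisy gradient would simply be handed to that optimizer's update rule instead. The names are illustrative and this is not the released implementation.

```python
import numpy as np

def noise_sigma(t, eta=0.01, gamma=0.55):
    """sigma_t from Equation (1): sigma_t^2 = eta / (1 + t)^gamma."""
    return np.sqrt(eta / (1.0 + t) ** gamma)

def noisy_sgd_step(params, grads, t, lr=0.1, eta=0.01, gamma=0.55):
    """g_t <- g_t + N(0, sigma_t^2), followed by an ordinary SGD update."""
    sigma = noise_sigma(t, eta, gamma)
    for p, g in zip(params, grads):
        p -= lr * (g + np.random.normal(0.0, sigma, size=g.shape))
    return params
```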
"}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "In the following experiments, we examine the effect of gradient noise on deep networks for MNIST digit classification, and consider a variety of complex neural network architectures: End-To-End Memory Networks (Sukhbaatar et al. 2015) and Neural Programmer (Neelakantan et al. 2016) for question answering, Neural Random Access Machines (Kurach et al. 2016) and Neural GPUs (Kaiser & Sutskever 2016) for algorithm learning. The models and results are described as follows.

For our first set of experiments, we examine the impact of adding gradient noise when training a very deep fully-connected network on the MNIST handwritten digit classification dataset (LeCun et al. 1998). Our network is deep: it has 20 hidden layers, with each layer containing 50 hidden units, posing a significant optimization and generalization problem. We use the ReLU activation function (Nair & Hinton 2010).

In this experiment, we train with SGD without momentum, using the fixed learning rates of 0.1 and 0.01. Unless otherwise specified, the weights of the network are initialized from a Gaussian with mean zero and standard deviation of 0.1, which we call Simple Init.
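A sketch of this architecture and initialization follows; the MNIST input and output sizes are assumed, and the code is illustrative rather than the actual implementation.

```python
import numpy as np

def simple_init(shape, std=0.1):
    """'Simple Init': zero-mean Gaussian weights with standard deviation 0.1."""
    return np.random.normal(0.0, std, size=shape)

def build_network(n_in=784, n_hidden=50, n_layers=20, n_out=10):
    sizes = [n_in] + [n_hidden] * n_layers + [n_out]
    return [(simple_init((m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def forward(layers, x):
    h = x
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:        # ReLU on the 20 hidden layers only
            h = np.maximum(h, 0.0)
    return h                           # class logits
```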
When adding gradient noise, we tried both settings of the variance detailed in Section 3, and found that decaying variance according to the schedule in Equation (1) with η = 0.01 worked best.

The results of our experiment are in Table 1: when trained from Simple Init we can see that adding noise to the gradient helps in achieving higher average and best accuracy over 20 runs using each learning rate, for a total of 40 runs (Table 1, Experiment 1). We note that the average is closer to 50% because the small learning rate of 0.01 usually gives very slow convergence. We also try our approach on a more shallow network of 5 layers, but adding noise does not improve the training in that case.

Next, we experiment with clipping the gradients with two threshold values: 100 and 10 (Table 1, Experiments 2 and 3). Here, we find training with gradient noise is insensitive to the gradient clipping values.

In our fourth and fifth experiments (Table 1, Experiments 4 and 5), we use two analytically-derived ReLU initialization techniques (which we term Good Init 1 and 2) recently proposed by Sussillo (2014) and He et al. (2015), and find that adding gradient noise does not help. Previous work has found that stochastic gradient descent with carefully tuned initialization, momentum, learning rate, and learning rate decay can optimize such extremely deep fully-connected ReLU networks (Srivastava et al. 2015). It would be harder to find such a robust initialization technique for the more complex heterogeneous architectures considered in later sections. Accordingly, we find in later experiments (e.g., Section 4.3) that random restarts and the use of a momentum-based optimizer like Adam are not sufficient to achieve the best results in the absence of added gradient noise.

To test how sensitive the methods are to poor initialization, in addition to the sub-optimal Simple Init, we run an experiment where all the weights in the neural network are initialized at zero. The results (Table 1, Experiment 6) show that if we do not add noise to the gradient, the networks fail to learn. If we add some noise, the networks can learn and reach 94.5% accuracy. While the pessimal performance of the noiseless model is unsurprising (initializing weights at 0 introduces symmetries that make gradient descent impossible), it is interesting to note that gradient noise can overcome what is perhaps the canonical "bad initialization."

Setting                                          Best Test Acc.   Avg. Test Acc.
Experiment 1: Simple Init, No Gradient Clip
No Noise                                         89.9%            43.1%
With Noise                                       96.7%            52.7%
No Noise + Dropout                               11.3%            10.8%
Experiment 2: Simple Init, Gradient Clip = 100
No Noise                                         90.0%            46.3%
With Noise                                       96.7%            52.3%
Experiment 3: Simple Init, Gradient Clip = 10
No Noise                                         95.7%            51.6%
With Noise                                       97.0%            53.6%
Experiment 4: Good Init 1 + Gradient Clip = 10
No Noise                                         97.4%            92.1%
With Noise                                       97.5%            92.2%
Experiment 5: Good Init 2 + Gradient Clip = 10
No Noise                                         97.4%            91.7%
With Noise                                       97.2%            91.7%
Experiment 6: Bad Init (Zero Init) + Gradient Clip = 10
No Noise                                         11.4%            10.1%
With Noise                                       94.5%            49.7%

Table 1: Average and best test accuracy on MNIST over 40 runs. Higher values are better.

In summary, these experiments show that if we are careful with initialization and gradient clipping values, it is possible to train a very deep fully-connected network without adding gradient noise. However, if the initialization is poor, optimization can be difficult, and adding noise to the gradient is a good mechanism to overcome the optimization difficulty. Additionally, the noise need not be heavily tuned and rarely decreases performance.

This set of results suggests that added gradient noise can be an effective mechanism for training complex networks. This is because it is more difficult to initialize the weights properly for these architectures. In the following, we explore the training of more complex models such as End-To-End Memory Networks and Neural Programmer, whose initialization is less well studied.

"}, {"section_index": "5", "section_name": "4.2 END-TO-END MEMORY NETWORKS", "section_text": "We test added gradient noise for training End-To-End Memory Networks (Sukhbaatar et al. 2015), an approach for question answering using deep networks. Memory Networks have been demonstrated to perform well on a relatively challenging toy question answering problem (Weston et al. 2015).

In Memory Networks, the model has access to a context, a question, and is asked to predict an answer. Internally, the model has an attention mechanism which focuses on the right clue to answer the question. In the original formulation (Weston et al. 2015), Memory Networks were provided with additional supervision as to what pieces of context were necessary to answer the question. This was replaced in the End-To-End formulation by a latent attention mechanism implemented by a softmax over contexts. As this greatly complicates the learning problem, the authors implement a two-stage training procedure: first train the networks with a linear attention, then use those weights to warmstart the model with softmax attention.

In our experiments with Memory Networks, we use the same model hyperparameter settings as Sukhbaatar et al. (2015), and we try both settings of the variance detailed in Section 3, finding η = 0.01 worked best for this task. This noise is added to the gradient after clipping.

We set the number of training epochs to 200 because we would like to understand the behaviors of Memory Networks near convergence. We test the effect of gradient noise with the published two-stage training approach, and additionally with a one-stage approach where we train the networks with softmax attention and without warmstarting. Following the experimental protocol of Sukhbaatar et al. (2015), we take the model with lowest training error out of 10 random restarts.
Results are reported in Table 2. We find some fluctuations during each run of the training, but the reported results reflect the typical gains obtained by adding random noise.

We find that warmstarting does indeed help the networks. In all cases, adding random noise to the gradient also helps the network both in terms of training errors and validation errors, and never hurts. Added noise, however, is especially helpful for the training of End-To-End Memory Networks without the warmstarting stage.

One-Stage Training
Setting               No Noise   With Noise
Train error:          10.5%      9.6%
Validation error:     19.5%      16.6%
Two-Stage Training
Train error:          6.2%       5.9%
Validation error:     10.9%      10.8%

Table 2: The effects of adding gradient noise to End-To-End Memory Networks. Lower values are better.

Neural Programmer is a neural network architecture augmented with a small set of built-in arithmetic and logic operations that learns to induce latent programs. It is proposed for the task of question answering from tables (Neelakantan et al. 2016). Examples of operations on a table include the sum of a set of numbers, or the list of numbers greater than a particular value. Key to Neural Programmer is the use of "soft selection" to assign a probability distribution over the list of operations. This probability distribution weighs the result of each operation, and the cost function compares this weighted result to the ground truth. This soft selection, inspired by the soft attention mechanism of Bahdanau et al. (2014), allows for full differentiability of the model. Running the model for several steps of selection allows the model to induce a complex program by chaining the operations one after the other. At convergence, the soft selection tends to become peaky (hard selection). Figure 1 shows the architecture of Neural Programmer at a high level.

[Figure 1: schematic of one timestep t = 1, 2, ..., T, with a controller softly selecting among built-in arithmetic and logic operations and data segments, applied to the input and memory to produce the output.]

Figure 1: Neural Programmer, a neural network with built-in arithmetic and logic operations. At every time step, the controller selects an operation and a data segment. Figure reproduced with permission from Neelakantan et al. (2016).

In a synthetic table comprehension task, Neural Programmer takes a question and a table (or database) as input and the goal is to predict the correct answer. To solve this task, the model has to induce a program and execute it on the table. A major challenge is that the supervision signal is in the form of the correct answer and not the program itself. The model runs for a fixed number of steps, and at each step selects a data segment and an operation to apply to the selected data segment. Soft selection is performed at training time so that the model is differentiable, while at test time hard selection is employed.
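A schematic sketch of soft vs. hard selection as described above (the operations and shapes are invented for illustration; this is not the Neural Programmer implementation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def select(op_logits, operations, data, hard=False):
    """Soft selection (training): a softmax-weighted sum of every operation's
    result keeps the model differentiable. Hard selection (test): run only
    the argmax operation."""
    if hard:
        return operations[int(np.argmax(op_logits))](data)
    probs = softmax(op_logits)
    results = np.array([op(data) for op in operations])
    return probs @ results

# e.g. operations = [np.sum, np.max, np.min] applied to a column of numbers
```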
We examine only the noise configuration with η = 1.0, and add noise to the gradient after clipping, optimizing all other hyperparameters of the model. The model is optimized with Adam (Kingma & Ba 2014), which combines momentum and adaptive learning rates.

For our first experiment, we train Neural Programmer to answer questions involving a single column of numbers. We use 72 different hyper-parameter configurations with and without adding annealed random noise to the gradients. We also run each of these experiments for 3 different random initializations of the model parameters, and we find that only 1/216 runs achieve 100% test accuracy without adding noise while 9/216 runs achieve 100% accuracy when random noise is added. The 9 successful runs consisted of models initialized with all three different random seeds, demonstrating robustness to initialization. We find that when using dropout (Srivastava et al. 2014) none of the 216 runs give 100% accuracy.

We consider a more difficult question answering task where tables have up to five columns containing numbers. We also experiment on a task containing one column of numbers and another column of text entries. Table 3 shows the performance of adding noise vs. no noise on Neural Programmer.

Question Answering Accuracy
Setting        Dropout   No Noise   With Noise
Five columns   No        95.3%      98.7%
Text entries   No        97.6%      98.8%
Five columns   Yes       97.4%      99.2%
Text entries   Yes       99.1%      97.3%

Table 3: The effects of adding random noise to the gradient on Neural Programmer. Higher values are better. Adding random noise to the gradient always helps the model. When the models are applied to these more complicated tasks than the single column experiment, using dropout and noise together seems to be beneficial in one case, while using only one of them achieves the best result in the other case.

Figure 2 shows an example of the effect of adding random noise to the gradients in our experiment with 5 columns. The differences between the two models are much more pronounced than Table 3 indicates because that table reflects the results from the best hyperparameters. Figure 2 indicates a more typical training run.

[Figure 2: two panels over number of epochs, plotting train loss and test accuracy for noise vs. no noise.]

Figure 2: Noise vs. No Noise in our experiment with 5 columns. The models trained with noise generalize almost always better.

In all cases, we see that added gradient noise improves performance of Neural Programmer. Its performance when combined with or used instead of dropout is mixed depending on the problem, but the positive results indicate that it is worth attempting on a case-by-case basis.

"}, {"section_index": "6", "section_name": "4.4 NEURAL RANDOM ACCESS MACHINES", "section_text": "We now conduct experiments with Neural Random-Access Machines (NRAM) (Kurach et al. 2016). NRAM is a model for algorithm learning that can store data, and explicitly manipulate and dereference pointers. NRAM consists of a neural network controller, memory, registers and a set of built-in operations. This is similar to the Neural Programmer in that it uses a controller network to compose built-in operations, but both reads and writes to an external memory. An operation can either read (a subset of) contents from the memory, write content to the memory or perform an arithmetic operation on either input registers or outputs from other operations. The controller runs for a fixed number of time steps. At every step, the model selects a "circuit" to be executed: both the operations and its inputs.

These selections are made using soft attention (Bahdanau et al. 2014), making the model end-to-end differentiable. NRAM uses an LSTM (Hochreiter & Schmidhuber 1997) controller. Figure 3 gives an overview of the model.
[Figure 3: one timestep of NRAM, showing a binarized LSTM controller, registers r1-r4, example operations m1-m3, and a memory tape.]

Figure 3: One timestep of the NRAM architecture with R = 4 registers and a memory tape. m1, m2 and m3 are example operations built-in to the model. The operations can read and write from memory. At every time step, the LSTM controller softly selects the operation and its inputs. Figure reproduced with permission from Kurach et al. (2016).

For our experiment, we consider a problem of finding the k-th element's value in a linked list. The network is given a pointer to the head of the linked list, and has to find the value of the k-th element. Note that this is highly nontrivial because pointers and their values are stored at random locations in memory, so the model must learn to traverse a complex graph for k steps.

Because of this complexity, training the NRAM architecture can be unstable, especially when the number of steps and operations is large. We once again experiment with the decaying noise schedule from Equation (1), setting η = 0.01. We run a large grid search over the model hyperparameters (detailed in Kurach et al. (2016)), and find the top 3 parameter settings separately for both noised and un-noised models. For each model, for each of these 3 settings, we try 100 different random initializations and look at the percentage of runs that give 100% accuracy across each one for training, both with and without noise.

As in our experiments with Neural Programmer, we find that adding the noise after gradient clipping is crucial. This is likely because the effect of random noise is washed away when gradients become too large. For models trained with noise we observed much better reproduce rates, which are presented in Table 4. Although it is possible to train the model to achieve 100% accuracy without noise, it is less robust across multiple random restarts, with over 10x as many initializations leading to a correct answer when using noise.

Table 4: Percentage of successful runs on the k-th element task. All tests were performed with the same set of 100 random initializations (seeds). Higher values are better.
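A small sketch of the ordering that matters here, assuming plain global-norm clipping for illustration (not the actual code): clipping first keeps large gradients from washing out the perturbation.

```python
import numpy as np

def clip_then_noise(grads, max_norm, sigma):
    """Clip the gradients to max_norm first, then add Gaussian noise."""
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale + np.random.normal(0.0, sigma, size=g.shape)
            for g in grads]
```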
"}, {"section_index": "7", "section_name": "4.5 CONVOLUTIONAL GATED RECURRENT NETWORKS (NEURAL GPUS)", "section_text": "Convolutional Gated Recurrent Networks (CGRNs) or Neural GPUs (Kaiser & Sutskever 2016) are a recently proposed model that is capable of learning arbitrary algorithms. CGRNs use a stack of convolution layers, unfolded with tied parameters like a recurrent network. The input data (usually a list of symbols) is first converted to a three-dimensional tensor representation containing a sequence of embedded symbols in the first two dimensions, and zeros padding the next dimension. Then multiple layers of modified convolution kernels are applied at each step. The modified kernel is a combination of convolution and Gated Recurrent Units (GRU) (Cho et al. 2014b). The use of convolution kernels allows computation to be applied in parallel across the input data, while the gating mechanism helps the gradient flow. The additional dimension of the tensor serves as a working memory while the repeated operations are applied at each layer. The output at the final layer is the predicted answer.

The key difference between Neural GPUs and other architectures for algorithmic tasks (e.g., Neural Turing Machines (Graves et al. 2014)) is that instead of using sequential data access, convolution kernels are applied in parallel across the input, enabling the use of very deep and wide models. The model is referred to as Neural GPU because the input data is accessed in parallel. Neural GPUs were shown to outperform previous sequential architectures for algorithm learning on tasks such as binary addition and multiplication, by being able to generalize from much shorter to longer data cases.

In our experiments, we use Neural GPUs for the task of binary multiplication. The input consists of two concatenated sequences of binary digits separated by an operator token, and the goal is to multiply the given numbers. During training, the model is trained on 20-digit binary numbers while at test time, the task is to multiply 200-digit numbers. We add Gaussian noise with decaying variance according to the schedule in Equation (1), with η = 1.0, to the gradient after clipping. The model is optimized using Adam (Kingma & Ba 2014).

Table 5 gives the results of a large-scale experiment using Neural GPUs with a 7290-run grid search. The experiment shows that models trained with added gradient noise are more robust across many random initializations and parameter settings. As you can see, adding gradient noise allows us to achieve the best performance, with the number of models with < 1% error over twice as large as without noise. But it also helps throughout, improving the robustness of training, with more models reaching each error threshold as well. This experiment shows that the simple technique of added gradient noise is effective even in regimes where we can afford a very large number of random restarts.

Setting       Error < 1%   < 2%   < 3%   < 5%
No Noise      28           90     172    387
With Noise    58           159    282    570

Table 5: Number of successful runs on 7290 random trials. Higher values are better. The models are trained on length 20 and tested on length 200.

"}, {"section_index": "8", "section_name": "4.6 DISCUSSION", "section_text": "In this work we propose an annealed Gaussian gradient noise scheme for the optimization of complex neural networks. Our experiments show improvement from gradient noise on a variety of models. We conduct a small set of additional experiments below to examine the factors that make this technique successful, and report a failure mode.

Annealed vs. fixed noise: We use a single fixed decay value γ = 0.55 when applying Equation (1) in our experiments, inspired by Stochastic Gradient Langevin Dynamics, and recommend it as a default. We conduct several experiments to determine the importance of annealed vs. fixed noise added to the gradient. We find that for the End2End model, similar results can be achieved with fixed noise values, however requiring significantly more tuning (compared to trying only two different values of η in our experiments with annealed noise). We achieve nearly identical results on the End2End experiment using a fixed noise value of η = 0.001. We also experiment with fixed noise on the Neural Programmer and NRAM models, and find that they make a larger difference. For both models, we select fixed noise values log-uniformly from between 1e-4 and 0.1 and optimize the other hyperparameters. Using 216 runs per variance setting, the best Neural Programmer models without annealing can achieve equivalent errors to the annealed models. However, only 5/216 achieve the best error compared to 9/216 for the model using annealing. For NRAM, using 180 runs per setting, fixed noise never achieves the perfect error of 0 that is achieved by the annealed model.
While annealing shows the most benefit with the most complex models, we generally recommend it as a robust default that requires less hyperparameter tuning than fixed noise.

Gaussian noise vs. gradient stochasticity: We assert that gradient noise helps the model explore the optimization landscape, escaping saddle points and local minima. Analysis of SGD for neural networks suggests that the stochasticity of the gradient serves much the same purpose (Bottou 1992). This suggests a strategy: add noise to the gradient by simply reducing the minibatch size, increasing the variance of the gradient estimator. While arguments based on SGLD and kernel smoothing provide evidence that the specific form of the Gaussian noise is important, we run a pair of small experiments. For both Neural Programmer and NRAM, we tried batch sizes of 10, 25, and 50 (50 being the value used in the best results). For NRAM, after 100 tasks at each batch size and no gradient noise, 2 tasks at batch size 50 converged to 0 error, 1 task at batch size 10, and none at batch size 25. For Neural Programmer, over 216 experiments at each batch size we see none of the models without gradient noise converge to the best error. These results are far worse than our results using added noise, indicating that merely lowering the batch size does not introduce the same sort of helpful stochasticity.

Gradient noise vs. weight noise: While weight noise is relatively well-known, it is not equivalent to gradient noise in the case of adaptive or momentum-based optimizers, which effectively adapt the noise to the curvature of the optimization landscape. Both Neural Programmer and NRAM are greatly helped in training by the use of the Adam algorithm for optimization. We find here, using the same experimental setup as when examining annealed vs. fixed noise, that the models fail to learn when adding noise directly to the weights. Even when using starting noise rates as low as 1e-6, with the usual annealing schedule, the models fail to train significantly, achieving 57% error for NRAM and 68% for Neural Programmer at the lowest. Importantly, these noise rates are on the same order as the adaptive learning rates. This indicates that the issue is not just the noise scale, but that the very poor conditioning of the loss functions makes it necessary to adapt the noise. Similar concerns motivated the development of very recent algorithms for preconditioned SGLD in the Bayesian setting (Li et al. 2016).

Negative results: While we see improvements on a large number of neural network architectures, we note a case where gradient noise does not improve over standard SGD. We conduct language modeling experiments on the Penn Treebank (Marcus et al. 1993), using the experimental setup and architecture from Zaremba et al. (2014). We report results using a 200-unit LSTM with dropout, but observe a similar lack of improvement from gradient noise when using models without dropout. We try the two proposed noise rates from Section 3 and find the best results using η = 0.01 are slightly worse than the noiseless model, achieving a perplexity of 98 rather than 95. By further lowering the noise parameter to η = 0.001 we are able to achieve the same perplexity as the baseline, but do not see improvement.
"}, {"section_index": "9", "section_name": "5 CONCLUSION", "section_text": "In this paper, we demonstrate the effectiveness of adding noise to the gradient when training deep neural networks. We find that adding noise to the gradient helps optimization and generalization of complicated neural networks and is compatible with and complementary to other stochastic optimization methods. We suspect that the effects are pronounced for complex models because they have many saddle points.

We believe that this surprisingly simple yet effective idea, essentially a single line of code, should be in the toolset of neural network practitioners when facing issues with training neural networks."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 1998.

Devansh Arpit, Yingbo Zhou, Bhargava U Kota, and Venu Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. In ICML, 2016.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2014.

Léon Bottou. Stochastic gradient learning in neural networks. In Neuro-Nîmes, 1992.

Olivier Bousquet and Léon Bottou. The tradeoffs of large scale learning. In NIPS, 2008.

Changyou Chen, David Carlson, Zhe Gan, Chunyuan Li, and Lawrence Carin. Bridging the gap between stochastic gradient MCMC and stochastic optimization, 2016.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.

Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and Koray Kavukcuoglu. Natural neural networks. In NIPS, 2015.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pp. 249-256, 2010.

Alex Graves. Practical variational inference for neural networks. In NIPS, 2011.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015.

Geoffrey Hinton and Sam Roweis. Stochastic neighbor embedding. In NIPS, 2002.

Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Sergey Ioffe and Christian Szegedy. Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Lukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In ICLR, 2016.

Scott Kirkpatrick, Mario P Vecchi, et al. Optimization by simulated annealing. Science, 1983.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. In ICLR, 2016.

César Laurent, Gabriel Pereyra, Philemon Brakel, Ying Zhang, and Yoshua Bengio. Batch normalized recurrent neural networks. arXiv preprint arXiv:1510.01378, 2015.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 1998.

Chunyuan Li, Changyou Chen, David Carlson, and Lawrence Carin. Preconditioned stochastic gradient Langevin dynamics for deep neural networks, 2016.

Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 1993.

Hossein Mobahi. Training recurrent neural networks by diffusion, 2016.

Alan F Murray and Peter J Edwards. Synaptic weight noise during MLP learning enhances fault tolerance, generalization and learning trajectory.

Vinod Nair and Geoffrey Hinton. Rectified linear units improve Restricted Boltzmann Machines. In ICML, 2010.

Radford M Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2011.

Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural Programmer: Inducing latent programs with gradient descent. In ICLR, 2016.

Sam Patterson and Yee Whye Teh. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In NIPS, 2013.

Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. Towards neural network-based reasoning. arXiv preprint arXiv:1508.05508, 2015.

Boris Teodorovich Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 1964.

Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. 2016.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In NIPS, 2015.

Mark Steijvers. A recurrent network that performs a context-sensitive prediction task. In CogSci, 1996.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015.

David Sussillo. Random walks: Training very deep nonlinear feed-forward networks with smart initialization. CoRR, 2014.

Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.

Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. In ICML, 2015.

Kaisheng Yao, Trevor Cohn, Katerina Vylomova, Kevin Duh, and Chris Dyer. Depth-gated recurrent neural networks. arXiv preprint arXiv:1508.03790, 2015."}]
H1VyHY9gg
[{"section_index": "0", "section_name": "DATA NOISING AS SMOOTHING IN NEURAL NETWORK LANGUAGE MODELS", "section_text": "Ziang Xie, Sida I. Wang, Jiwei Li, Daniel Levy, Aiming Nie, Dan Jurafsky, Andrew Y. Ng

{zxie, sidaw, danilevy, anie, ang}@cs.stanford.edu, {jiweil, jurafsky}@stanford.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In other application domains, data augmentation has been key to improving the performance of neural network models in the face of insufficient data. In computer vision, for example, there exist well-established primitives for synthesizing additional image data, such as by rescaling or applying affine distortions to images (LeCun et al., 1998; Krizhevsky et al., 2012). Similarly, in speech recognition adding a background audio track or applying small shifts along the time dimension has been shown to yield significant gains, especially in noisy settings (Deng et al., 2000; Hannun et al., 2014). However, widely-adopted noising primitives have not yet been developed for neural network language models.

Classic n-gram models of language cope with rare and unseen sequences by using smoothing methods, such as interpolation or absolute discounting (Chen & Goodman, 1996). Neural network models, however, have no notion of discrete counts, and instead use distributed representations to combat the curse of dimensionality (Bengio et al., 2003). Despite the effectiveness of distributed representations, overfitting due to data sparsity remains an issue. Existing regularization methods, however, are typically applied to weights or hidden units within the network (Srivastava et al., 2014; Le et al., 2015) instead of directly considering the input data.

In this work, we consider noising primitives as a form of data augmentation for recurrent neural network-based language models. By examining the expected pseudocounts from applying the noising schemes, we draw connections between noising and linear interpolation smoothing. Using this connection, we then derive noising schemes that are analogues of more advanced smoothing methods. We demonstrate the effectiveness of these schemes for regularization through experiments on language modeling and machine translation. Finally, we validate our theoretical claims by examining the empirical effects of noising."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Data noising is an effective technique for regularizing neural network models. While noising is widely adopted in application domains such as vision and speech, commonly used noising primitives have not been developed for discrete sequence-level settings such as language modeling. In this paper, we derive a connection between input noising in neural network language models and smoothing in n-gram models. Using this connection, we draw upon ideas from smoothing to develop effective noising schemes. We demonstrate performance gains when applying the proposed schemes to language modeling and machine translation. Finally, we provide empirical analysis validating the relationship between noising and smoothing.

Language models are a crucial component in many domains, such as autocompletion, machine translation, and speech recognition.
A key challenge when performing estimation in language modeling is the data sparsity problem: due to large vocabulary sizes and the exponential number of possible contexts, the majority of possible sequences are rarely or never observed, even for very short subsequences."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Our work can be viewed as a form of data augmentation, for which to the best of our knowledge there exists no widely adopted scheme in language modeling with neural networks. Classical regularization methods such as L2-regularization are typically applied to the model parameters, while dropout is applied to activations which can be along the forward as well as the recurrent directions (Zaremba et al., 2014; Semeniuta et al., 2016; Gal, 2015). Others have introduced methods for recurrent neural networks encouraging the hidden activations to remain stable in norm, or constraining the recurrent weight matrix to have eigenvalues close to one (Krueger & Memisevic, 2015; Arjovsky et al., 2015; Le et al., 2015). These methods, however, all consider weights and hidden units instead of the input data, and are motivated by the vanishing and exploding gradient problem.

Feature noising has been demonstrated to be effective for structured prediction tasks, and has been interpreted as an explicit regularizer (Wang et al., 2013). Additionally, Wager et al. (2014) show that noising can inject appropriate generative assumptions into discriminative models to reduce their generalization error, but do not consider sequence models (Wager et al., 2016).

The technique of randomly zero-masking input word embeddings for learning sentence representations has been proposed by Iyyer et al. (2015), Kumar et al. (2015), and Dai & Le (2015), and adopted by others such as Bowman et al. (2015). However, to the best of our knowledge, no analysis has been provided besides reasoning that zeroing embeddings may result in a model ensembling effect similar to that in standard dropout. This analysis is applicable to classification tasks involving sum-of-embeddings or bag-of-words models, but does not capture sequence-level effects. Bengio et al. (2015) also make an empirical observation that the method of randomly replacing words with fixed probability with a draw from the uniform distribution improved performance slightly for an image captioning task; however, they do not examine why performance improved."}, {"section_index": "4", "section_name": "3.1 PRELIMINARIES", "section_text": "We consider language models where, given a sequence of indices X = (x1, x2, ..., xT) over the vocabulary V, we model

p(X) = ∏_{t=1}^{T} p(xt|x<t)

In n-gram models, it is not feasible to model the full context x<t for large t due to the exponential number of possible histories. Recurrent neural network (RNN) language models can (in theory) model longer dependencies, since they operate over distributed hidden states instead of modeling an exponential number of discrete counts (Bengio et al., 2003; Mikolov, 2012).

An L-layer recurrent neural network is modeled as ht(l) = fθ(ht−1(l), ht(l−1)), where l denotes the layer index, h(0) contains the one-hot encoding of X, and in its simplest form fθ applies an affine transformation followed by a nonlinearity. In this work, we use RNNs with a more complex form of fθ, namely long short-term memory (LSTM) units (Hochreiter & Schmidhuber, 1997), which have been shown to ease training and allow RNNs to capture longer dependencies. The output distribution over the vocabulary V at time t is pθ(xt|x<t) = softmax(gθ(ht(L))), where g : R^|h| → R^|V| applies an affine transformation. The RNN is then trained by minimizing over its parameters θ the sequence cross-entropy loss ℓ(θ) = −Σt log pθ(xt|x<t), thus maximizing the likelihood pθ(X).
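As a minimal illustration of this objective, the NumPy sketch below evaluates the sequence cross-entropy ℓ(θ) from a matrix of predicted next-token probabilities; the helper name and array layout are our own choices, not details from the paper.

import numpy as np

def sequence_cross_entropy(probs, targets):
    # probs: array of shape (T, |V|), where row t holds p(x_t | x_<t);
    # targets: length-T array of gold token indices.
    return -np.sum(np.log(probs[np.arange(len(targets)), targets]))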
As an extension, we also consider encoder-decoder or sequence-to-sequence (Cho et al., 2014; Sutskever et al., 2014) models where, given an input sequence X and output sequence Y of length TY, we model

p(Y|X) = ∏_{t=1}^{TY} p(yt|X, y<t)

and minimize the loss ℓ(θ) = −Σt log pθ(yt|X, y<t). This setting can also be seen as conditional language modeling, and encompasses tasks such as machine translation, where X is a source language sequence and Y a target language sequence, as well as language modeling, where Y is the given sequence and X is the empty sequence."}, {"section_index": "5", "section_name": "3.2 SMOOTHING AND NOISING", "section_text": "Like n-gram models, RNNs are trained using maximum likelihood, and can easily overfit (Zaremba et al., 2014). While generic regularization methods such as L2-regularization and dropout are effective, they do not take advantage of specific properties of sequence modeling. In order to understand sequence-specific regularization, it is helpful to examine n-gram language models, whose properties are well-understood.

Smoothing for n-gram models When modeling p(xt|x<t), the maximum likelihood estimate c(x<t, xt)/c(x<t) based on empirical counts puts zero probability on unseen sequences, and thus smoothing is crucial for obtaining good estimates. In particular, we consider interpolation, which performs a weighted average between higher and lower order models. The idea is that when there are not enough observations of the full sequence, observations of subsequences can help us obtain better estimates.¹ For example, in a bigram model, p_interp(xt|xt−1) = λ p(xt|xt−1) + (1 − λ) p(xt), where 0 ≤ λ ≤ 1.

Noising for RNN models We would like to apply well-understood smoothing methods such as interpolation to RNNs, which are also trained using maximum likelihood. Unfortunately, RNN models have no notion of counts, and we cannot directly apply one of the usual smoothing methods. In this section, we consider two simple noising schemes which we proceed to show correspond to smoothing methods. Since we can noise the data while training an RNN, we can then incorporate well-understood generative assumptions that are known to be helpful in the domain. First consider the following two noising schemes:

- unigram noising: for each xi in x<t, with probability γ replace xi with a sample from the unigram frequency distribution.
- blank noising: for each xi in x<t, with probability γ replace xi with a placeholder token "_".

While blank noising can be seen as a way to avoid overfitting on specific contexts, we will see that both schemes are related to smoothing, and that unigram noising provides a path to analogues of more advanced smoothing methods.
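As a concrete reference, the following NumPy sketch implements both schemes for a single context sequence. The function name, the blank_id convention, and the array-based unigram distribution are our own illustrative choices rather than details from the paper; in practice the noising is applied online, per batch, and the predicted token xt itself is left unnoised.

import numpy as np

def noise_context(tokens, gamma, unigram_probs, scheme="unigram", blank_id=0, rng=None):
    # tokens: context token ids x_<t; unigram_probs: length-|V| frequency distribution.
    rng = rng or np.random.default_rng(0)
    noised = list(tokens)
    for i in range(len(noised)):
        if rng.random() < gamma:
            if scheme == "unigram":
                noised[i] = int(rng.choice(len(unigram_probs), p=unigram_probs))
            else:  # blank noising: substitute the placeholder token "_"
                noised[i] = blank_id
    return noised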
"}, {"section_index": "6", "section_name": "3.3 NOISING AS SMOOTHING", "section_text": "We now consider the maximum likelihood estimate of n-gram probabilities estimated using the pseudocounts of the noised data. By examining these estimates, we draw a connection between linear interpolation smoothing and noising.

Unigram noising as interpolation To start, we consider the simplest case of bigram probabilities. Let c(x) denote the count of a token x in the original data, and let cγ(x) = E_x̃[c(x̃)] be the expected count of x under the unigram noising scheme. We then have

pγ(xt|xt−1) = cγ(xt−1, xt) / cγ(xt−1)
            = [(1 − γ) c(xt−1, xt) + γ p(xt−1) c(xt)] / c(xt−1)
            = (1 − γ) p(xt|xt−1) + γ p(xt)

where cγ(x) = c(x) since our proposal distribution q(x) is the unigram distribution, and the last line follows since c(xt−1)/p(xt−1) = c(xt)/p(xt) is equal to the total number of tokens in the training set.

¹ For a thorough review of smoothing methods, we defer to Chen & Goodman (1996).

Recall that for a given context length l, an n-gram model of order l + 1 is optimal under the log-likelihood criterion. Hence in the case where an RNN with finite context achieves near the lowest possible cross-entropy loss, it behaves like an n-gram model.

More generally, let x̃<t be noised tokens from x<t. We consider the expected prediction under noise

pγ(xt|x<t) = E_x̃<t[p(xt|x̃<t)] = Σ_J π(|J|) Σ_x̃K p(xt|xJ, x̃K) ∏_{z∈x̃K} p(z)

where the three factors correspond to the probability of |J| swaps, the prediction given the noised context, and the probability of drawing the noised tokens z. The mixture coefficients are π(|J|) = (1 − γ)^|J| γ^(t−1−|J|), with Σ_J π(|J|) = 1. Here J ⊆ {1, 2, ..., t − 1} denotes the set of indices whose corresponding tokens are left unchanged, and K the set of indices that were replaced.

Blank noising as interpolation Next we consider the blank noising scheme and show that it corresponds to interpolation as well. This also serves as an alternative explanation for the gains that other related work have found with the "word-dropout" idea (Kumar et al., 2015; Dai & Le, 2015; Bowman et al., 2015). As before, we do not noise the token being predicted xt. Let x̃<t denote the random variable where each of its tokens is replaced by "_" with probability γ, and let xJ denote the sequence with indices J unchanged, and the rest replaced by "_". To make a prediction, we use the expected probability over different noisings of the context

pγ(xt|x<t) = E_x̃<t[p(xt|x̃<t)] = Σ_J π(|J|) p(xt|xJ)

where J ⊆ {1, 2, ..., t − 1}, which is also a mixture of the unnoised probabilities over subsequences of the current context. For example, in the case of trigrams, we have

pγ(x3|x1, x2) = π(2) p(x3|x1, x2) + π(1) p(x3|x1, "_") + π(1) p(x3|"_", x2) + π(0) p(x3|"_", "_")
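As a quick sanity check of these coefficients, take γ = 0.25 in the trigram case (t − 1 = 2 context tokens): π(2) = (1 − γ)² = 0.5625, π(1) = (1 − γ)γ = 0.1875 for each of the two single-blank terms, and π(0) = γ² = 0.0625, so the four mixture weights sum to 0.5625 + 2 · 0.1875 + 0.0625 = 1, as required.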
With the connection between noising and smoothing in place, we now consider how we can improve the two components of the noising scheme by considering:

1. Adaptively computing the noising probability γ to reflect our confidence about a particular input subsequence.
2. Selecting a proposal distribution q(x) that is less naive than the unigram distribution by leveraging higher order n-gram statistics.

Noising Probability Although it simplifies analysis, there is no reason why we should choose a fixed γ; we now consider defining an adaptive γ(x1:t) which depends on the input sequence. Consider the following bigrams:

"and the"      "Humpty Dumpty"

The first bigram is one of the most common in English corpora; its probability is hence well estimated and should not be interpolated with lower order distributions. In expectation, however, using a fixed γ0 when noising results in the same lower order interpolation weight γ0 for common as well as rare bigrams. Intuitively, we should define γ(x1:t) such that commonly seen bigrams are less likely to be noised.

The second bigram, "Humpty Dumpty", is relatively uncommon, as are its constituent unigrams. However, it forms what Brown et al. (1992) term a "sticky pair": the unigram "Dumpty" almost always follows the unigram "Humpty", and similarly, "Humpty" almost always precedes "Dumpty". For pairs with high mutual information, we wish to avoid backing off from the bigram to the unigram distribution.

We therefore define the adaptive noising probability

γ_AD(x1) = γ0 N1+(x1, ·)/c(x1)

where for 0 ≤ γ0 ≤ 1 we have 0 ≤ γ_AD ≤ 1, though in practice we can also clip larger noising probabilities to 1. Note that this encourages noising of unigrams that precede many possible other tokens while discouraging noising of common unigrams, since summing over the final token x2 gives Σ_{x2} c(x1, x2) = c(x1) ≥ N1+(x1, ·).

Proposal Distribution While choosing the unigram distribution as the proposal distribution q(x) preserves unigram frequencies, by borrowing from the smoothing literature we find another distribution performs better. We again begin with two motivating examples:

"San Francisco"      "New York"

Both bigrams appear frequently in text corpora. As a direct consequence, the unigrams "Francisco" and "York" also appear frequently. However, since "Francisco" and "York" typically follow "San" and "New", respectively, they should not have high probability in the proposal distribution, as they might if we use unigram frequencies (Chen & Goodman, 1996). Instead, it would be better to increase the proposal probability of unigrams with diverse histories, or more precisely unigrams that complete a large number of bigram types. Thus instead of drawing from the unigram distribution, we consider drawing from

q(x) ∝ N1+(·, x)

Note that we now noise the prediction xt in addition to the context x1:t−1. Combining this new proposal distribution with the discounted γ_AD(x1) from the previous section, we obtain the noising analogue of Kneser-Ney smoothing.

Table 1 summarizes the discussed noising schemes.

Noised     γ(x1:2)                  q(x)                 Analogue
x1         γ0                       q("_") = 1           interpolation
x1         γ0                       unigram              interpolation
x1         γ0 N1+(x1, ·)/c(x1)      unigram              absolute discounting
x1, x2     γ0 N1+(x1, ·)/c(x1)      q(x) ∝ N1+(·, x)     Kneser-Ney

Table 1: Noising schemes. Example noising schemes and their bigram smoothing analogues. Here we consider the bigram probability p(x1, x2) = p(x2|x1)p(x1). Notation: γ(x1:t) denotes the noising probability for a given input sequence x1:t, q(x) denotes the proposal distribution, and N1+(x, ·) denotes the number of distinct bigrams in the training set where x is the first unigram. In all but the last case we only noise the context x1 and not the target prediction x2.
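A minimal sketch of how the count statistics behind these schemes can be computed from a tokenized corpus is given below; the names are illustrative, and the returned discount ratios still need to be multiplied by γ0 (and clipped to 1) to obtain γ_AD, as described above.

from collections import Counter

def kn_noising_stats(corpus):
    # corpus: a list of token ids. Computes c(x), N1+(x, .), N1+(., x),
    # the discount ratio N1+(x, .)/c(x) used in gamma_AD, and the proposal q.
    unigram_counts = Counter(corpus)
    bigram_types = set(zip(corpus, corpus[1:]))
    n1p_left = Counter(x1 for x1, _ in bigram_types)   # N1+(x, .)
    n1p_right = Counter(x2 for _, x2 in bigram_types)  # N1+(., x)
    total = sum(n1p_right.values())
    discount = {x: n1p_left[x] / unigram_counts[x] for x in unigram_counts}
    q = {x: n1p_right[x] / total for x in unigram_counts}  # q(x) ∝ N1+(., x)
    return unigram_counts, discount, q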
"}, {"section_index": "7", "section_name": "3.5 TRAINING AND TESTING", "section_text": "During training, noising is performed per batch and is done online such that each epoch of training sees a different noised version of the training data. At test time, to match the training objective, we should sample multiple corrupted versions of the test data, then average the predictions (Srivastava et al., 2014). In practice, however, we find that simply using the maximum likelihood (uncorrupted) input sequence works well; evaluation runtime remains unchanged."}, {"section_index": "8", "section_name": "3.6 EXTENSIONS", "section_text": "The schemes described are for the language model setting. To extend them to the sequence-to-sequence or encoder-decoder setting, we noise both x<t as well as y<t. While in the decoder we have y<t and yt as analogues to language model context and target prediction, it is unclear whether noising x<t should be beneficial. Empirically, however, we find this to be the case (Table 4)."}, {"section_index": "9", "section_name": "4.1 LANGUAGE MODELING", "section_text": "Penn Treebank We train networks for word-level language modeling on the Penn Treebank dataset, using the standard preprocessed splits with a 10K size vocabulary (Mikolov, 2012). The PTB dataset contains 929k training tokens, 73k validation tokens, and 82k test tokens. Following Zaremba et al. (2014), we use minibatches of size 20 and unroll for 35 time steps when performing backpropagation through time. All models have two hidden layers and use LSTM units. Weights are initialized uniformly in the range [−0.1, 0.1]. We consider models with hidden sizes of 512 and 1500.

We train using stochastic gradient descent with an initial learning rate of 1.0, clipping the gradient if its norm exceeds 5.0. When the validation cross entropy does not decrease after a training epoch, we halve the learning rate. We anneal the learning rate 8 times before stopping training, and pick the model with the lowest perplexity on the validation set.

For regularization, we apply feed-forward dropout (Pham et al., 2014) in combination with our noising schemes. We report results in Table 2 for the best setting of the dropout rate (which we find to match the settings reported in Zaremba et al. (2014)) as well as the best setting of noising probability γ0 on the validation set. Figure 1 shows the training and validation perplexity curves for a noised versus an unnoised run.

Noising scheme                                   Validation   Test
Medium models (512 hidden size)
none (dropout only)                                  84.3     80.4
blank                                                82.7     78.8
unigram                                              83.1     80.1
bigram Kneser-Ney                                    79.9     76.9
Large models (1500 hidden size)
none (dropout only)                                  81.6     77.5
blank                                                79.4     75.5
unigram                                              79.4     76.1
bigram Kneser-Ney                                    76.2     73.4
Zaremba et al. (2014)                                82.2     78.4
Gal (2015) variational dropout (tied weights)        77.3     75.0
Gal (2015) (untied weights, Monte Carlo)              —       73.4

Table 2: Single-model perplexity on Penn Treebank with different noising schemes. We also compare to the variational method of Gal (2015), who also train LSTM models with the same hidden dimension. Note that performing Monte Carlo dropout at test time is significantly more expensive than our approach, where test time is unchanged.

Our large models match the state-of-the-art regularization method for single model performance on this task. In particular, we find that picking γ_AD(x1) and q(x) corresponding to Kneser-Ney smoothing yields significant gains in validation perplexity, both for the medium and large size models. Recent work (Merity et al., 2016; Zilly et al., 2016) has also achieved impressive results on this task by proposing different architectures which are orthogonal to our data augmentation schemes.
Text8 In order to determine whether noising remains effective with a larger dataset, we perform experiments on the Text8 corpus.³ The first 90M characters are used for training, the next 5M for validation, and the final 5M for testing, resulting in 15.3M training tokens, 848K validation tokens, and 855K test tokens. We preprocess the data by mapping all words which appear 10 or fewer times to the unknown token, resulting in a 42K size vocabulary. Other parameter settings are the same as described in the Penn Treebank experiments, besides that only models with hidden size 512 are considered, and noising is not combined with feed-forward dropout. Results are given in Table 3.

Noising scheme       Validation   Test
none                     94.3     123.6
blank                    85.0     110.7
unigram                  85.2     111.3
bigram Kneser-Ney        84.5     110.6

Table 3: Perplexity on Text8 with different noising schemes.

[Plot: training and validation perplexity versus training epochs for an unnoised model and a bigram Kneser-Ney noised model. (a) Penn Treebank corpus. (b) Text8 corpus.]

Figure 1: Example training and validation curves for an unnoised model and a model regularized using the bigram Kneser-Ney noising scheme."}, {"section_index": "10", "section_name": "4.2 MACHINE TRANSLATION", "section_text": "For our machine translation experiments we consider the English-German machine translation track of IWSLT 2015.⁴ The IWSLT 2015 corpus consists of sentence-aligned subtitles of TED and TEDx talks. The training set contains roughly 190K sentence pairs with 5.4M tokens. Following Luong & Manning (2015), we use TED tst2012 as a validation set and report BLEU score results (Papineni et al., 2002) on tst2014. We limit the vocabulary to the top 50K most frequent words for each language.

We train a two-layer LSTM encoder-decoder network (Sutskever et al., 2014; Cho et al., 2014) with 512 hidden units in each layer. The decoder uses an attention mechanism (Bahdanau et al., 2014) with the dot alignment function (Luong et al., 2015). The initial learning rate is 1.0 and we start halving the learning rate when the relative difference in perplexity on the validation set between two consecutive epochs is less than 1%. We follow training protocols as described in Sutskever et al. (2014): (a) LSTM parameters and word embeddings are initialized from a uniform distribution between −0.1 and 0.1, (b) inputs are reversed, (c) batch size is set to 128, (d) gradient clipping is performed when the norm exceeds a threshold of 5. We set hidden unit dropout rate to 0.2 across all settings as suggested in Luong et al. (2015). We compare unigram, blank, and bigram Kneser-Ney noising. Noising rate γ is selected on the validation set.

Results are shown in Table 4. We observe performance gains for both blank noising and unigram noising, giving roughly +0.7 BLEU score on the test set. The proposed bigram Kneser-Ney noising scheme gives an additional performance boost of +0.5–0.7 on top of the blank noising and unigram noising models, yielding a total gain of +1.4 BLEU.

Scheme                Perplexity   BLEU
dropout, no noising       8.84     24.6
blank noising             8.28     25.3 (+0.7)
unigram noising           8.15     25.5 (+0.9)
bigram Kneser-Ney         7.92     26.0 (+1.4)
source only               8.74     24.8 (+0.2)
target only               8.14     25.6 (+1.0)

Table 4: Perplexities and BLEU scores for the machine translation task. Results for bigram KN noising on only the source sequence and only the target sequence are given as well.

We compare the performance of models trained with a fixed γ0 versus a γ0 rescaled using discounting. As shown in Figure 2, bigram discounting leads to gains in perplexity for a much broader range of γ0. Thus the discounting ratio seems to effectively capture the "right" tokens to noise.

[Plot: perplexity on Penn Treebank as γ0 (unscaled) varies from 0 to 1, comparing fixed γ0 and the discounted γ_AD.]

Figure 2: Perplexity with noising on Penn Treebank while varying the value of γ0. Using discounting to scale γ0 (yielding γ_AD) maintains gains for a range of values of noising probability, which is not true for the unscaled case.
We now examine whether discounting has the desired effect of noising subsequences according to their uncertainty. If we consider the discounting ratio

γ_AD(x1) = γ0 N1+(x1, ·)/c(x1)

we observe that the denominator c(x1) can dominate the numerator N1+(x1, ·). Common tokens are often noised infrequently when discounting is used to rescale the noising probability, while rare tokens are noised comparatively much more frequently, where in the extreme case when a token appears exactly once, we have γ_AD = γ0. Due to word frequencies following a Zipfian power law distribution, however, common tokens constitute the majority of most texts, and thus discounting leads to significantly less noising.

Noising               Bigrams   Trigrams
none (dropout only)     2881       381
blank noising           2760       372
unigram noising         2612       365

Table 5: Perplexity of last unigram for unseen bigrams and trigrams in the Penn Treebank validation set. We compare noised and unnoised models with noising probabilities chosen such that models have near-identical perplexity on the full validation set."}, {"section_index": "11", "section_name": "5.2 NOISED VERSUS UNNOISED MODELS", "section_text": "Smoothed distributions In order to validate that data noising for RNN models has a similar effect to that of smoothing counts in n-gram models, we consider three models trained with unigram noising as described in Section 4.1 on the Penn Treebank corpus with γ = 0 (no noising), γ = 0.1, and γ = 0.25. Using the trained models, we measure the Kullback-Leibler divergence D_KL(p‖q) = Σi pi log(pi/qi) over the validation set between the predicted softmax distributions, p, and the uniform distribution as well as the unigram frequency distribution. We then take the mean KL divergence over all tokens in the validation set.

[Bar plot: mean D_KL(p‖q) against the uniform and unigram distributions, for models with no noise, γ = 0.1, and γ = 0.25.]

Figure 3: Mean KL-divergence over the validation set between softmax distributions of noised and unnoised models and lower order distributions. Noised model distributions are closer to the uniform and unigram frequency distributions.

Unseen n-grams Smoothing is most beneficial for increasing the probability of unobserved sequences. To measure whether noising has a similar effect, we consider bigrams and trigrams in the validation set that do not appear in the training set. For these unseen bigrams (15062 occurrences) and trigrams (43051 occurrences), we measure the perplexity for noised and unnoised models with near-identical perplexity on the full set. As expected, noising yields lower perplexity for these unseen instances.

Recall that in interpolation smoothing, a weighted combination of higher and lower order n-gram models is used.
As seen in Figure 3, the softmax distributions of noised models are significantly closer to the lower order frequency distributions than unnoised models, in particular in the case of the unigram distribution, thus validating our analysis in Section 3.3.

In this work, we show that data noising is effective for regularizing neural network-based sequence models. By deriving a correspondence between noising and smoothing, we are able to adapt advanced smoothing methods for n-gram models to the neural network setting, thereby incorporating well-understood generative assumptions of language. Possible applications include exploring noising for improving performance in low resource settings, or examining how these techniques generalize to sequence modeling in other domains."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Will Monroe for feedback on a draft of this paper, Anand Avati for help running experiments, and Jimmy Wu for computing support. We also thank the developers of Theano (Theano Development Team, 2016) and Tensorflow (Abadi et al., 2016). Some GPUs used in this work were donated by NVIDIA Corporation. ZX, SW, and JL were supported by an NDSEG Fellowship, NSERC PGS-D Fellowship, and Facebook Fellowship, respectively. This project was funded in part by DARPA MUSE award FA8750-15-C-0242 AFRL/RIKF."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. In Journal of Machine Learning Research, 2003.

Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.

Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. Class-based n-gram models of natural language. Computational Linguistics, 1992.

Stanley F Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. In Association for Computational Linguistics (ACL), 1996.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3061-3069, 2015.

Li Deng, Alex Acero, Mike Plumpe, and Xuedong Huang. Large-vocabulary speech recognition under adverse acoustic environments. In ICSLP, 2000.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

David Krueger and Roland Memisevic. Regularizing RNNs by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015.

Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.

Minh-Thang Luong and Christopher D Manning. Stanford neural machine translation systems for spoken language domains. In Proceedings of the International Workshop on Spoken Language Translation, 2015.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Tomas Mikolov. Statistical language models based on neural networks. PhD thesis, Brno University of Technology, 2012.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 2014.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber.
Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pp. 311-318. Association for Computational Linguistics, 2002."}, {"section_index": "14", "section_name": "SKETCH OF NOISING ALGORITHM", "section_text": "We provide pseudocode of the noising algorithm corresponding to bigram Kneser-Ney smoothing for n-grams (in the case of sequence-to-sequence tasks, we estimate the count-based parameters separately for source and target). To simplify, we assume a batch size of one. The noising algorithm is applied to each data batch during training. No noising is applied at test time.

Algorithm 1 Bigram KN noising (language modeling setting)

Inputs: X, Y: batch of unnoised data indices; scaling factor γ0
q(x) ∝ N1+(·, x)                          > proposal distribution
procedure NOISEBGKN(X, Y)                 > X = (x1, ..., xt), Y = (x2, ..., xt+1)
    X̃, Ỹ ← X, Y
    for j = 1, ..., t do
        γ ← γ0 N1+(xj, ·)/c(xj)
        if Bernoulli(γ) then
            x̃j ∼ Categorical(q)           > updates X̃
            ỹj ∼ Categorical(q)           > updates Ỹ
        end if
    end for
    return X̃, Ỹ                           > run training iteration with noised batch
end procedure"}]
HkJq1Ocxl
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "A central goal of Artificial Intelligence is the creation of machines that learn as effectively from human instruction as they do from data. A recent and important step towards this goal is the invention of neural architectures that can learn to perform algorithms akin to traditional computers, using primitives such as memory access and stack manipulation (Graves et al., 2014; Joulin & Mikolov, 2015; Grefenstette et al., 2015; Kaiser & Sutskever, 2015; Kurach et al., 2015; Graves et al., 2016). These architectures can be trained through standard gradient descent methods, and enable machines to learn complex behavior from input-output pairs or program traces. In this context the role of the human programmer is often limited to providing training data. However, for many tasks training data is scarce. In these cases the programmer may have partial procedural background knowledge: one may know the rough structure of the program, or how to implement several sub-routines that are likely necessary to solve the task. For example, in visual programming, a user often knows a rough sketch of what they want to do, but needs to fill in the specific components. In programming by demonstration (Lau et al., 2001) and programming with query languages (Neelakantan et al., 2015a) a user conforms to a larger set of conditions on the data, and needs to settle details. In all these scenarios, the question then becomes how to exploit this type of prior knowledge when learning algorithms.

To address the above question we present an approach that enables programmers to inject their procedural background knowledge into a neural network. In this approach the programmer specifies a program sketch (Solar-Lezama et al., 2005) in a traditional programming language. This sketch defines one part of the neural network behaviour. The other part is learned using training data. The core insight that enables this approach is the fact that most programming languages can be formulated"}, {"section_index": "1", "section_name": "PROGRAMMING WITH A DIFFERENTIABLE FORTH INTERPRETER", "section_text": "{m.bosnjak, t.rocktaschel, j.narad, s.riedel}@cs.ucl.ac.uk"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "There are families of neural networks that can learn to compute any function, provided sufficient training data. However, given that in practice training data is scarce for all but a small set of problems, a core question is how to incorporate prior knowledge into a model. Here we consider the case of prior procedural knowledge, such as knowing the overall recursive structure of a sequence transduction program or the fact that a program will likely use arithmetic operations on real numbers to solve a task. To this end we present a differentiable interpreter for the programming language Forth. Through a neural implementation of the dual stack machine that underlies Forth, programmers can write program sketches with slots that can be filled with behaviour trained from program input-output data.
As the program interpreter is end-to-end differentiable, we can optimize this behaviour directly through gradient descent techniques on user-specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex transduction tasks such as sequence sorting or addition with substantially less data and better generalisation over problem sizes. In addition, we introduce neural program optimisations based on symbolic computation and parallel branching that lead to significant speed improvements.

in terms of an abstract machine that executes the commands of the language. We implement these machines as neural networks, constraining parts of the networks to follow the sketched behaviour. The resulting neural programs are consistent with our prior knowledge and optimised with respect to the training data.

In this paper we focus on the programming language Forth (Brodie, 1980), a simple yet powerful stack-based language that is relatively close to machine code but enables modular programs and facilitates abstraction. Underlying Forth's semantics is a simple abstract machine. We introduce the Forth Neural Abstract Machine (∂4), an implementation of this machine that is differentiable with respect to the transition it executes at each time step, as well as distributed input representations in the machine buffers. As sketches that users define are also differentiable, any underspecified program content contained within the sketch can be trained through backpropagation.

For two neural programming tasks introduced in previous work (Reed & de Freitas, 2015) we present Forth sketches that capture different degrees of prior knowledge. For example, we define only the general recursive structure of a sorting problem. We show that given only input-output pairs, ∂4 can learn to fill the sketch and generalise well to problems of unseen size. We also use ∂4 to investigate the type and degree of structure necessary when solving tasks, and show how symbolic execution can significantly improve execution time when applicable.

The contribution of our work is fourfold: i) we present a neural implementation of a dual stack machine underlying Forth, ii) we introduce Forth sketches for programming with partial procedural background knowledge, iii) we apply Forth sketches as a procedural prior on learning algorithms from data, and iv) we introduce program code optimisations based on symbolic execution that can speed up neural execution.

Forth is a simple Turing-complete stack-based programming language (ANSI, 1994; Brodie, 1980). Its underlying abstract machine is represented by a state S = (D, R, H, c), which contains two stacks: a data evaluation pushdown stack (data stack) D holds values for manipulation, and a return address pushdown stack (return stack) R assists with return pointers and subroutine calls. These are accompanied by a heap or random memory access buffer H, and a program counter c.

An example of a Forth program that implements the Bubble sort algorithm is shown in Listing 1, and a detailed description of how this program is executed by the Forth abstract machine is provided in Appendix B.

0 : BUBBLE ( a1 ... an n-1 -- one pass )
1   DUP IF >R
2     OVER OVER < IF SWAP THEN
3     R> SWAP >R 1- BUBBLE R>
4   ELSE
5     DROP
6   THEN
7 ;
8 : SORT ( a1 .. an n -- sorted )
9   1- DUP 0 DO >R R@ BUBBLE R> LOOP DROP
10 ;
11 2 4 2 7 4 SORT \ Example call

Listing 1: BubbleSort in Forth

Notice that while Forth provides common control structures such as looping and branching, these can always be reduced to low-level code that uses jumps and conditional jumps (using the words BRANCH and BRANCH0, respectively). Likewise, we can think of sub-routine definitions as code blocks tagged with a label, and their invocation amounts to jumping to the tagged label.
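To make these word-level state transitions concrete, the short hand-worked trace below shows how a few of the words used above act on a small data stack D and return stack R (top of stack on the right); it relies only on the standard Forth semantics of these words.

D: [2 4]     R: []     DUP   (duplicate TOS)           →  D: [2 4 4]   R: []
D: [2 4 4]   R: []     >R    (move TOS to R)           →  D: [2 4]     R: [4]
D: [2 4]     R: [4]    SWAP  (swap top two elements)   →  D: [4 2]     R: [4]
D: [4 2]     R: [4]    R>    (move R's TOS back to D)  →  D: [4 2 4]   R: []
D: [4 2 4]   R: []     DROP  (discard TOS)             →  D: [4 2]     R: []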
When a programmer writes a Forth program, they define a sequence of Forth words, i.e., a sequence of known state transition functions. In other words, the programmer knows exactly how computation should proceed. To accommodate for cases when the developer's procedural background knowledge is incomplete, we extend Forth to support the definition of a program sketch. As is the case with Forth programs, sketches are sequences of transition functions. However, a sketch may contain transition functions whose behavior is learned from data.

In this work, we restrict ourselves to a subset of all Forth words, detailed in Appendix A.

A Forth program P is a flat sequence of Forth words (i.e. commands) P = w1 ... wn. The role of a word varies, encompassing language keywords, primitives, and user-defined subroutines (e.g. DROP, to discard the top element of the stack, or DUP, to duplicate the top element of the stack). Each word wi defines a transition function between machine states, wi : S → S. Therefore, a program P itself defines a transition function by simply applying the word at the current program counter to the current state. Although usually considered as a part of the heap H, we consider Forth programs P separately to ease the analysis.

[Figure: a Forth sketch is compiled to low-level code, with the slot substituted by a parametrised neural network, and executed as an RNN over machine states mapping input x to output y.]

Figure 1: Neural Forth Abstract Machine. Forth sketch Pθ is translated to low-level code, with the slot {...} substituted by a parametrised neural network. The slot is learnt from input-output examples (x, y) through the differentiable machine whose state Si comprises the low-level code, program counter c, data stack D (with pointer d), return stack R (with pointer r), and the heap H.

To learn the behaviour of transition functions within a program we would like the machine output to be differentiable with respect to these functions (and possibly representations of inputs to the program). This enables us to choose parameterized transition functions such as neural networks, and efficiently train their parameters through backpropagation and gradient methods. To this end we first provide a continuous representation of the state of a Forth abstract machine. We then present a recurrent neural network (RNN) that models program execution on this machine, parametrised by the transition functions at each time step. Lastly, we discuss optimizations based on symbolic execution and the interpolation of conditional branches."}, {"section_index": "3", "section_name": "3.1 MACHINE STATE ENCODING", "section_text": "We map the symbolic machine state S = (D, R, H, c) to a continuous representation S = (D, R, H, c) — into two differentiable stacks (with pointers), the data stack D = (D, d) and the return stack R = (R, r), a heap H, and an attention vector c indicating which word of the sketch Pθ is being executed at the current time step. All three memory structures, the data stack, the return stack and the heap, are based on differentiable flat memory buffers M ∈ {D, R, H}, where D, R, H ∈ R^{l×v}, for a stack size l and a value size v. Each has a well-defined, differentiable read operation

read_M(a) = aᵀ M

akin to the Neural Turing Machine (NTM) memory (Graves et al., 2014), as well as a write operation

write_M(x, a) : M ← M − (a ⊗ 1) ⊙ M + x ⊗ a

where ⊗ is the outer product, ⊙ is the Hadamard product, and a is the address pointer.² In addition to the memory buffers D and R, the data stack and the return stack contain pointers to the current top-of-the-stack (TOS) element d, r ∈ Rˡ.
This allows us to implement pushing as writing a value x into M and incrementing the TOS pointer as follows:

push_M(x) : write_M(x, inc(p))    side-effect: p ← inc(p)

Likewise, popping is realized by multiplying the TOS pointer and the memory buffer, and decreasing the TOS pointer:

pop_M() = read_M(p)    side-effect: p ← dec(p)

Finally, the program counter c ∈ Rᵖ is a vector that, when one-hot, points to a single word in a program of length p, and is equivalent to the c vector of the symbolic state machine.³ We will use 𝒮 to denote the space of all continuous representations S.

² The equal widths of H and D allow us to directly move vector representations of values between the heap and the stack.
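The following NumPy sketch mirrors these definitions directly; it assumes, as one simple choice, that inc and dec are realized as circular one-position shifts of the pointer distribution, and all function names are ours.

import numpy as np

def read_mem(M, a):                 # read_M(a) = aᵀ M
    return a @ M

def write_mem(M, x, a):             # M ← M − (a ⊗ 1) ⊙ M + x ⊗ a
    return M - np.outer(a, np.ones(M.shape[1])) * M + np.outer(a, x)

def inc(p):                         # shift the pointer distribution by +1
    return np.roll(p, 1)

def dec(p):                         # shift the pointer distribution by -1
    return np.roll(p, -1)

def push(M, p, x):                  # write x at inc(p); side-effect: p ← inc(p)
    p = inc(p)
    return write_mem(M, x, p), p

def pop(M, p):                      # read at p; side-effect: p ← dec(p)
    return read_mem(M, p), dec(p)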
"}, {"section_index": "4", "section_name": "3.2 FORTH SKETCHES", "section_text": "We define a Forth sketch Pθ as a sequence of continuous transition functions Pθ = w1 ... wn. Here wi : 𝒮 → 𝒮 either corresponds to a neural Forth word, or is a trainable transition function. We will call these trainable functions slots, as they correspond to underspecified "slots" in the program code that need to be filled by learned behaviour.

We allow users to define a slot w by specifying a pair of a state encoder w_enc, that produces a latent representation h of the current machine state using a multi-layer perceptron, and a decoder w_dec, that consumes this representation to produce the next machine state. We hence have w = w_dec ∘ w_enc. To use slots within Forth program code we introduce a notation that reflects this decomposition. In particular, slots are defined using the syntax { encoder -> decoder } where encoder and decoder are specifications of the corresponding slot parts as described below.

Encoders We provide the following options for encoders:

- static: produces a static representation, independent of the actual machine state.
- observe e1 ... em: concatenates the elements e1 ... em of the machine state. An element can be a stack item Di at relative index i, a return stack item Ri, etc.

Decoders Users can specify the following decoders:

- choose w1 ... wm: chooses from the Forth words w1 ... wm. Takes an input vector h of length m to produce a weighted combination of machine states Σᵢ₌₁ᵐ hᵢ wᵢ(S).
- manipulate e1 ... em: directly manipulates the machine state elements e1 ... em by writing the appropriately reshaped output of the encoder over the machine state elements with write_M.
- permute e1 ... em: permutes the machine state elements e1 ... em via a linear combination of m! state vectors."}, {"section_index": "5", "section_name": "3.3 THE EXECUTION RNN", "section_text": "We model execution using an RNN which produces a state S_{i+1} conditioned on a previous state S_i. It does so by first passing the current state to each function wi in the program, and then weighing each of the produced next states by the component of the program counter vector ci that corresponds to program index i, effectively using c as an attention vector over code. Formally we have:

S_{i+1} = RNN(S_i, Pθ) = Σᵢ cᵢ wᵢ(S_i)

Clearly this recursion, and its final state, are differentiable with respect to the program code Pθ, and its inputs. Furthermore, for differentiable Forth programs it is easy to show that the final state of this RNN will correspond to the final state of a symbolic execution.
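A minimal sketch of one such step is shown below, with the machine state represented as a dictionary of arrays and each word given as a callable state-to-state function; this representation is our own simplification.

def execution_rnn_step(state, words, c):
    # S_{i+1} = Σ_i c_i · w_i(S_i): apply every word transition to the current
    # state, then mix the candidate next states by the program counter weights c.
    candidates = [w(state) for w in words]
    return {key: sum(ci * s[key] for ci, s in zip(c, candidates)) for key in state}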
0 : BUBBLE ( a1 ... an n-1 -- one pass )
1   DUP IF >R
2     { observe D0 D-1 -> permute D-1 D0 R0 }
3     1- BUBBLE R>
4     \ ** Alternative sketch **
5     \ { observe D0 D-1 -> choose NOP SWAP }
6     \ R> SWAP >R 1- BUBBLE R>
7   ELSE
8     DROP
9   THEN
10 ;

Listing 2: BUBBLE sketch with trainable permutation (trainable comparison in comments).

Listing 2 defines the BUBBLE word as a sketch capturing several types of prior knowledge. In this section we describe the PERMUTE sketch. In it, we assume BUBBLE involves a recursive call, that it terminates at length 1, and that the next BUBBLE call takes as input some function of the current length and the top two stack elements.

The input to this sketch are the sequence to be sorted and its length decremented by one, n − 1 (line 0). These inputs are expected on the data stack. After the length (n − 1) is duplicated for further use with DUP, the machine tests whether it is non-zero (using IF, which consumes the TOS during the check). If n − 1 > 0, it is stored on the R stack for future use (line 1).

At this point (line 2) the programmer only knows that a decision must be made based on the top two data stack elements D0 and D-1 (comparison elements), and the top return stack element, R0 (length decremented by 1). Here the precise nature of this decision is unknown, but is limited to variants of permutation of these elements, the output of which produces the input state to the decrement 1- and the recursive BUBBLE call (line 3). At the culmination of the call, R0, the output of the learned slot behavior, is moved onto the data stack using R>, and execution proceeds to the next step.

Figure 2 illustrates how portions of this sketch are executed on the ∂4 RNN. The program counter initially resides at >R (line 3 in P), as indicated by the vector c, next to program P. Both data and return stacks are partially filled (R has 1 element, D has 4), and we show the content both through horizontal one-hot vectors and their corresponding integer values (color coded). The vectors d and r point to the top of both stacks, and are in a one-hot state as well. In this execution trace the slot at line 4 is already showing optimal behaviour: it remembers that the element on the return stack (4) is larger, and executes BUBBLE on the remaining sequence with the counter n decremented by one.

[Figure: five consecutive machine states over the low-level code of BUBBLE (DUP, BRANCH0 8, >R, {...}, 1-, BUBBLE, R>, DROP), showing the one-hot rows of R and D, their pointers r and d, and the program counter attention.]

Figure 2: ∂4 segment of the RNN execution of a Forth sketch in Listing 2. The pointers (d, r) and values (rows of R and D) are all in one-hot state (colors simply denote values observed, defined by the top scale), while the program counter maintains the uncertainty. Subsequent states are discretised for clarity. Here the slot {...} has learned its optimal behaviour."}, {"section_index": "6", "section_name": "3.4 PROGRAM CODE OPTIMIZATIONS", "section_text": "The ∂4 RNN requires one time step per transition. After each time step the program counter is either incremented or decremented by one, or explicitly set or popped from the stack to jump. In turn a new machine state is calculated by executing all words in the program, and then weighting the result states by the activation of the program counter at the given word. This parallel execution of all words is expensive, and it is therefore advisable to avoid full RNN steps wherever possible. We use two strategies to significantly speed up ∂4.

Symbolic Execution Whenever we have a sequence of Forth words that contains no branch entry or exit points, we collapse this sequence to a single transition. We do this using symbolic execution (King, 1976): we first fill the stacks and heap of a standard Forth abstract machine with symbols representing arbitrary values (e.g. D = d1 ... dl and R = r1 ... rl), and execute the sequence of Forth words on the machine. This results in a new symbolic state. We use this state, and its difference to the original state, to derive the transition function of the complete sequence. For example, the sequence R> SWAP >R that swaps the top of the data stack with the top of the return stack yields the symbolic state D = r1 d2 ... dl and R = d1 r2 ... rl. Compared to the initial state we have only changed the top elements on both stacks, and hence the neural transition will only need to swap the top elements of D and R.
However, for branches arising from if-clauses that involve no function calls or loop structures, we can still avoid giving control back to the program counter and evaluating all words. We simply execute both branches in parallel, and then let the resulting state be the sum of the output states of both branches, weighted by the score given to the symbol TRUE expected on top of the data stack.
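In code, this interpolation is a convex combination of the two branch outputs; a minimal sketch (the function and argument names are hypothetical):

def interpolate_if(p_true, s_true, s_false):
    # p_true: score assigned to the symbol TRUE on top of the data stack
    # s_true, s_false: machine states produced by the two branches
    return p_true * s_true + (1.0 - p_true) * s_false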
In doing so, it defines sufficient structure so that the behavior of the network is invariant to the input sequence length.
It is shown in the comments of Listing 2 (lines 5 and 6).
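Both sketches rely on the choose decoder introduced earlier; a minimal sketch of it (assuming PyTorch, and assuming the encoder output h is already normalized, e.g. by a softmax):

import torch

def choose_decoder(h, words, state):
    # words: the candidate Forth words w_1 ... w_m, as callables that map
    # a machine-state tensor to the next machine state
    # h: encoder output of length m
    next_states = torch.stack([w(state) for w in words])   # (m, state_dim)
    return (h.unsqueeze(-1) * next_states).sum(dim=0)      # sum_i h_i w_i(S)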
9 ADD-DIGITS                 9 THEN
10 R>                        10 ;
11 THEN
12 ;
3.3.1) to include the sequence length on the input, and to store it during computation for use in the algorithm. This is a sensible approach when working purely in the traditional programming paradigm, but in the context of learning it introduces information which influences the model's representations, prevents it from generalizing, and possibly leads to large memory requirements (when encoded with a one-hot vector). This motivates investigation into which traditional language properties are most suitable to this new hybrid paradigm, and which representations can be used to circumvent the problem.
Our work differs in that our differentiable abstract machine allows us to seamlessly integrate code and neural networks, and to train the neural networks specified by slots via backpropagation through code interpretation.
Additionally, connecting d4 with other differentiable models upstream and/or downstream is another direction we would like to tackle."}, {"section_index": "13", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Guillaume Bouchard, Dirk Weissenborn, Danny Tarlow, and the anonymous reviewers for. fruitful discussions and helpful comments on previous drafts of this paper. This work was supportec by a Microsoft Research PhD Scholarship, an Allen Distinguished Investigator Award, and a Marie. Curie Career Integration Award."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Aws Albarghouthi, Sumit Gulwani, and Zachary Kincaid. Recursive program synthesis. In Com puter Aided Verification, pp. 934950. Springer, 2013.\nANSI. Programming Languages - Forth, 1994. American National Standard for Information Sys tems, ANSI X3.215-1994\nLeo Brodie. Starting Forth. 1980\nRudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip HS Torr, and M Pawan Kumar. Adaptive neural compilation. arXiv preprint arXiv:1605.07969. 2016\nFrederic Gruau, Jean-Yves Ratajszczak, and Gilles Wiber. A neural compiler. Theoretical Compute Science, 141(1):1-52, 1995.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.\nJohn R Koza. Genetic programming: on the programming of computers by means of natural selec tion, volume 1. MIT press, 1992\nTessa Lau, Steven A. Wolfman, Pedro Domingos, and Daniel S. Weld. Your wish is my com mand. chapter Learning Repetitive Text-editing Procedures with SMARTedit, pp. 209-226. Mor gan Kaufmann Publishers Inc., 2001. ISBN 1-55860-688-2. URL http://dl.acm.org/ citation.cfm?id=369505.369519\nJacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In The\nNoah Goodman, Vikash Mansinghka, Daniel M Roy, Keith Bonawitz, and Joshua B Tenenbaum Church: a language for generative models. Proceedings of UAI, pp. 220-229, 2008.\nArmand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems. pp. 190-198. 2015.\nArvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015a\nPeter Nordin. Evolutionary program induction of binary machine code and its applications. Kreh Munster, 1997.\nHava T Siegelmann. Neural programming lang lage. In AAAI, pp. 877-882, 1994\nArmando Solar-Lezama, Rodric Rabbah, Rastislav Bodik, and Kemal Ebcioglu. Programming by Sketching for Bit-streaming Programs. In Proc. PLDI, pp. 281-294, 2005.\nArmando Solar-Lezama, Liviu Tancau, Rastislav Bodik, Sanjit Seshia, and Vijay Saraswat. Combi natorial sketching for finite programs. In ACM Sigplan Notices, volume 41, pp. 404-415. ACM,. 2006.\nZohar Manna and Richard J Waldinger. Toward automatic program synthesis. Communications oJ the ACM. 14(3):151-165. 1971.\nRupesh K Srivastava, Klaus Greff, and Juergen Schmidhuber. Training very deep networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28, pp. 2377-2385. Curran Associates, Inc., 2015. URL http : /papers.nips.cc/paper/5850-training-very-deep-networks.pdf..\nlya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks In Proceedings of the 27th International Conference on Neural Information Processing Systems NIPS'14, pp. 3104-3112, Cambridge, MA, USA, 2014. MIT Press. 
URL http://dl.acm.org/citation.cfm?id=2969033.2969173.
>R puts 3 on R.\nCalling OvER twice duplicates the top two elements of the stack, to test them with <, which tests whether 2 < 7. IF tests if the result is TRUE (0). which it is. so it executes SwAP\n7 2 7] [ASORT 3 ABUBBLE 3] 2 OVER OVI 7 1] [AsORT 3 ABUBBLE 3] 2 7] [ASORT 3 ABUBBLE 3] 2 IF 2] [ASORT 3 ABUBBLE 3] 2 SWAP\nTo prepare for the next call to BuBBLE we move 3 back from the return stack R to the data stack. D via R>, SwAP it with the next element, put it back to R with >R, decrease the TOS with 1- anc. invoke BUBBLE again. Notice that R will accumulate the analysed part of the sequence, which wil be recursively taken back\n[AsORT 3 ABUBBLE] 3 R > [AsORT 3 ABUBBLE] 3 SWAP [AsORT 3 ABUBBLE 2] 3 >R [AsORT 3 ABUBBLE 2] 3 1- [AsORT 3 ABUBBLE 2] 0 ..BUBBLE\nNote that Forth uses Reverse Polish Notation and that the top of the data stack is 4 in this example\nWhen we reach the loop limit we drop the length of the sequence and exit sORT using the ; word which takes the return address from R. At the final point, the stack should contain the ordered sequence [7 4 2 2].\nTable 4: Forth words and their descriptions. TOS denotes top-of-stack, NOS denotes next-on-stack. DSTACK denotes data stack, and RSTACK denotes return stack..\nSymbol Explanation M Stack,M E{D.R} M Memory buffer, M E {D, R, H} p Pointer, p E {d, r, c}. M* Increment and decrement matrices (circular shift) S1 i1=j(modn)) For E{+, -}, M 0 otherwise Pointer manipulation Expression Increment a (or value x) inc(a) = aTM+ Decrement a (or value x) dec(a) = aTM- Conditional jump a jump(c,a) : p =popD() = TRUEc pc + (1-p)a p = pop c a-1 Next on stack, a <- aTM- Buffer manipulation READ from M readm(a) = aTM WRITE to M writem(x,a) : MM-a81 M+xO a PUSH x onto M pushm(x) : write(x, a)[side-effect: d inc(d)] POP an element from M popm() = readm(a)[side-effect: d dec(d)] Forth Word. Literal x pushD(x) 1+ writep(inc(readp(d)), d) 1- writep(dec(readp(d)), d) DUP pushp(readp(d)) SWAP x = readp(d), y = readp(d-1) :writep(d, y) , writep(d-1, x) OVER pushp(readp(d)) DROP popD() @ readH(d) ! writeh(d, d-1) < SWAP > e1= i +d, e=i *d1 > : 8-0 p = $pwi(e1 - e2) *** (define piecewise linear f) p1 + (p-1)0 p = $pwl(d,d-1) p1 + (p-1)0 >R pushr(d) R> popR( @R writep(d, readr(r)) IF..1ELSE..2THEN p = popD( = 0 p*.1+(1-p)*..2 BEGIN..1WHILE..2REPEAT ..1 jump(c, ..2). DO..LOOP : inc(p) p = p-1 jump(c, .) jump(c, beginning)"}] |
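For concreteness, these steps produce the following stack states (the program counter column is omitted here, since the exact line numbering inside BUBBLE is not shown above):

[2 4 2 7 3 3] [ASORT 3 ABUBBLE]   DUP
[2 4 2 7 3]   [ASORT 3 ABUBBLE]   IF
[2 4 2 7]     [ASORT 3 ABUBBLE 3] >R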
We describe these ingredients in the context of a specific instantiation of the NPE applied to two-dimensional worlds of balls and obstacles.\nmbchang, tomeru, torralba, jbt}@mit.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Endowing an agent with a program for physical reasoning constrains the agent's representation. of the environment by establishing a prior on the environment's physics. The agent can leverage these constraints to rapidly learn new tasks, to flexibly adapt to changes in inputs and goals, and to. naturally generalize reasoning to novel scenes (Lake et al.]2016).\nThe NPE takes a step toward bridging the gap between expressivity and adaptability by combin. ing the strengths of both approaches. The NPE framework is realized as a differentiable physic. simulator that combines rough symbolic structure with gradient-based learning. It exhibits severa strong inductive biases that are explicitly present in symbolic physics engines, such as a notion ol. objects-specific properties and object interactions. Implemented as a neural network, the NPE car. also flexibly tailor itself to specific object properties and dynamics of a given world through training. By design, it can extrapolate to a variable number of objects and different scene configurations witl. only spatially and temporally local computation.\nOur framework proposes four key ingredients useful for generalization across variable object coun. and different scene configurations without additional retraining. The first ingredient is the viev of objects as primitives of physical reasoning. The second is a mechanism for selecting contex. objects given a particular object. Together, these ingredients reflect two natural assumptions about a. physical environment: There exist objects and these objects interact in a factorized manner..\nThe third and fourth ingredients are factorization and compositionality, which are both applied on two levels: the scene and the network architecture. On the level of the physical scene, the NPE factorizes the scene into object-based representations, and composes smaller building blocks to form larger objects. This method of representation adapts to scene configurations of variable complexity and shape. On the level of the network architecture, the NPE explicitly reflects a causal structure in object interactions by factorizing object dynamics into pairwise interactions. The NPE models the future state of a single object as a function composition of the pairwise interactions between itself and other context objects in the scene. This structure serves to guide learning towards object based reasoning and is designed for physical knowledge to transfer across variable number objects anywhere in the scene.\nWhile previous bottom-up approaches (Sec. 4) have coupled learning vision and learning physica. dynamics, we take a different approach for two reasons. First, we see that disentangling the visua. properties of an object from its physical dynamics is a step toward achieving the generality of . physics engine. Both vision and dynamics are necessary, but we believe that keeping these function. alities separate is important for common-sense generalization that is robust to cases where the visua appearance changes but the dynamics remain the same. Second, we are optimistic that those twc. components indeed can be decoupled, that a vision model can map visual input to an intermediate. state space, and a dynamics model can evolve objects in that state space through time. 
For example, there is work in object detection and localization (e.g. Eslami et al. 2016) for extracting position and velocity, as well as work for extracting latent object properties (Wu et al. 2015; 2016). Therefore, this paper focuses on learning dynamics in that state space, taking a small step toward emulating a general-purpose physics engine, with the eventual goal of building a system that exhibits the compo-
Because physics does not change across inertial frames, it suffices to separately predict the future state of each object conditioned on the past states of itself and the other objects in its neighborhood, similar to Fragkiadaki et al. (2015b). Sec. 3.5 shows that when large structures are represented as a composition of smaller objects, a spatially local attention window helps achieve invariance to scene configuration. The second observation regards temporally local computation. Because physics is Markovian, this prediction need only be for the immediate next timestep, which we show in Sec. 3 is enough to predict physics effectively over long timescales. Given these two observations, it is natural to choose an object-based state representation. A state vector comprises extrinsic properties (position, velocity, orientation, angular velocity), intrinsic properties (mass, object type, object size), and global properties (gravitational, frictional, and pairwise forces) at a given time instance.
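As an illustration, one plausible layout of this state vector is sketched below (the grouping follows the text; the exact field encodings, e.g. how object type is represented, are our assumptions):

from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectState:
    # extrinsic properties
    position: Tuple[float, float]     # (x, y)
    velocity: Tuple[float, float]     # (vx, vy)
    orientation: float
    angular_velocity: float
    # intrinsic properties
    mass: float
    object_type: int                  # e.g. ball vs. obstacle block
    size: float
    # global properties
    gravity: float
    friction: float
    pairwise_forces: float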
The mask only constrains the search space of context objects, and the network figures out how to detect and resolve collisions. This mask is a specific case of a more general attention mechanism to select contextual elements of a scene.
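Putting the neighborhood mask together with the pairwise factorization above, a minimal sketch of one NPE step follows (assuming PyTorch; the class name, layer sizes, and hidden width are illustrative choices, not the paper's reported hyperparameters):

import torch
import torch.nn as nn

class NPEStep(nn.Module):
    def __init__(self, state_dim, hidden_dim=64):
        super().__init__()
        # f_enc: summarizes one (focus, context) pair into an additive effect
        self.encoder = nn.Sequential(
            nn.Linear(2 * state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))
        # f_dec: maps summed effects plus the focus state to a velocity change
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim + state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2))                      # (dvx, dvy)

    def forward(self, focus, contexts, p_focus, p_contexts, threshold):
        # broad phase: 1[||p_c - p_f|| < N(o_f)]
        mask = (torch.norm(p_contexts - p_focus, dim=-1) < threshold).float()
        pairs = torch.cat([focus.expand(contexts.size(0), -1), contexts], dim=-1)
        effects = self.encoder(pairs) * mask.unsqueeze(-1) # zero out non-neighbors
        dv = self.decoder(torch.cat([effects.sum(dim=0), focus], dim=-1))
        return dv                                          # v[t+1] = v[t] + dv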
However, in the current work these parameters are vestigial: they are set to zero and do not appear in the evaluation.
In the inference task (Sec. 3.3), we test if the NPE can be inverted to infer mass in both the prediction and generalization settings. In these experiments, we compare against the NPE-NN, a modified NPE without the neighborhood mask, to analyze the context selection
The unseen worlds (6, 7, 8 balls) in the test data are combinatorially more complex and varied than the observed worlds (3, 4, 5 balls) in the training data. All objects have equal mass. During simulation, the NPE's predictions are more consistent, whereas the NP's and LSTM's predictions begin to diverge wildly towards the end of 50 timesteps of simulation (Fig. 3b, middle row). The NPE consistently outperforms the baselines by 0.5 to 1 order of magnitude in velocity prediction (Fig. 3b, bottom row).
[Figure 3 plots: cosine similarity (top row) and relative error in magnitude (middle row) of the predicted velocity over 50 timesteps, and velocity MSE over training iterations (bottom row), for the LSTM, NP, NPE-NN, and NPE; panel (c) bar chart of mass-inference accuracy per model; panel (d) prediction error for different neighborhood sizes in ball radii.]
Figure 3: Quantitative evaluation (balls): [a,b]: Prediction and generalization tasks. Top two rows: the cosine similarity and the relative error in magnitude. Bottom row: the MSE of velocity on the test set over the course of training. Because these worlds are chaotic systems, it is not surprising that all predictions diverge from the ground truth with time, but the NPE consistently outperforms the other two baselines on all fronts, especially when testing on 6, 7, and 8 objects in the generalization task. The NPE's performance continues to improve with training while the NPE-NN (an NPE without a neighborhood mask, see Sec. 3.4), NP and LSTM quickly plateau. We hypothesize that the NPE's structured factorization of the state space keeps it from wasting time exploring suboptimal programs. [c]: The NPE's accuracy is significantly greater than the baseline models' in mass inference. Notably, the NPE achieves similar inference performance whether in the prediction or generalization settings, further showcasing its strong generalization capabilities. The LSTM performs poorest, reaching just above random guessing (33% accuracy). [d]: We analyze the effectiveness of different neighborhood thresholds for the NPE on the constant-mass prediction task. The neighborhood threshold is quite robust from 3 to 5 ball radii.
The NPE predicts outputs given inputs and infers inputs given outputs. Though we adopted a particular parametrization of an object, the NPE is not limited to the semantic meaning of the elements of its input, so we expect other latent object properties can be inferred this way. Because the NPE is differentiable, we expect that it can also infer object properties by backpropagating prediction error to a randomly sampled input. This would be useful for inferring non-categorical values, such as positions of "invisible" objects, whose effects are felt but whose positions are unknown."}, {"section_index": "7", "section_name": "3.4 NEIGHBORHOOD MASK", "section_text": "In Fig. 3d we vary the NPE's neighborhood threshold N(o_f) and evaluate performance on the constant-mass prediction task. N(o_f) is in units of ball radii, so N(o_f) = 2 means that a context object is only detected if it is exactly touching the focus object. Because ball radii are 60 pixels and the maximum velocity is 60 pixels per timestep, the maximum distance two balls can initially be before touching at the next timestep is 4 ball radii. Given that velocities were sampled uniformly, it makes sense that the NPE performs well in and is robust to the range N(o_f) ∈ [3, 5], but performance drops off with smaller and larger N(o_f). It is important to note that different N(o_f) may work better for different domains and object geometries.
2 The results reported in this paper were with N(o_f) = 3.5 ball radii, which we found initially with a coarser search than the results in Fig. 3, although any threshold in the range N(o_f) ∈ [3, 5] performs similarly.
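A minimal sketch of the thresholding just described, assuming circular objects and a threshold expressed in focus-ball radii (the function name and array layout are our assumptions):
```python
import numpy as np

def neighborhood_mask(positions, radii, focus_idx, threshold=3.5):
    """Binary mask over context objects.

    positions: (num_objects, 2) array of object centers;
    radii: (num_objects,) array of object radii.
    Returns a boolean array that is True for context objects whose
    centers lie within threshold * radii[focus_idx] of the focus
    object; the focus object itself is excluded.
    """
    dist = np.linalg.norm(positions - positions[focus_idx], axis=-1)
    mask = dist <= threshold * radii[focus_idx]
    mask[focus_idx] = False  # the focus object is not its own context
    return mask
```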
Figure 4: Visualizations: The NPE scales to complex dynamics and world configurations while the NP and LSTM cannot. The masses are visualized as: cyan = 25, red = 5, yellow-green = 1. [a] Consider the collision in the 7 balls world (circled). In the ground truth, the collision happens between balls 1 and 2, and the NPE correctly predicts this. The NP predicts a slower movement for ball 1, so ball 2 overlaps with ball 3. The LSTM predicts a slower movement and an incorrect angle off the world boundary, so ball 2 overlaps with ball 3. [b] At first glance, all models seem to handle collisions well in the "O" world (diamond), but when there are internal obstacles (cloud), only the NPE can successfully resolve collisions. This suggests that the NPE's pairwise factorization handles object interactions well, letting it generalize to different world configurations, whereas the NP and LSTM have only memorized the geometry of the "O" world.
We include analysis in the prediction and generalization tasks on an NPE without the neighborhood mask, the NPE-NN (NN = No Neighborhood). The neighborhood mask gives the NPE about an order of magnitude improvement in velocity prediction loss (Fig. 3a,b: bottom row and Fig. 6). While the NPE loss continues to improve through training, the NPE-NN loss quickly plateaus. It is interesting that the NPE-NN performs no better than both the NP and LSTM in predictive error, but outperforms the LSTM in mass inference. These two observations suggest that computing the interactions the focus object shares with each context object is more effective for inferring a property of the focus object than disregarding these factorized effects. They also suggest that the additional spatial structure from constraining the context space with the neighborhood mask prevents the NPE from naively finding associations with objects that cannot influence the focus object.
In our experiments, the neighborhood mask has the additional practical benefit of reducing computational complexity from O(k) to O(1), where k is the number of objects in the scene, because the number of context-focus object pairs the NPE considers is bounded above by the neighborhood mask at a constant number. Though beyond the scope of this work, to extend the functionality of such a context selection mechanism to include worlds that contain forces that act from a distance, future instantiations of the NPE may investigate a more general context selection mechanism that can be learned jointly with the other model parameters."}, {"section_index": "8", "section_name": "3.5 DIFFERENT SCENE CONFIGURATIONS", "section_text": "We demonstrate representing large structures as a composition of smaller objects as building blocks. This is important for testing the NPE's invariance to scene configuration; the scene configuration should not matter if the underlying physical laws remain the same. These worlds contain 2 balls bouncing around in variations of 4 different wall geometries. "O" and "L" geometries have no internal obstacles and are in the shape of a rectangle and an "L" respectively. "U" and "I" have internal obstacles.
Obstacles in "U" are linearly attached to the wall like a protrusion, while obstacles in "I" have no constraint on their position. We randomly vary the position and orientation of the "L" concavity and the "U" protrusion. We randomly sample the positions of the "I" internal obstacles.
[Figure 5 plots: cosine similarity (top row) and relative error in magnitude (bottom row) of the predicted velocity over 50 timesteps for the LSTM, NP, and NPE, trained on "O" and "L" worlds and tested on "O", "L", "U", and "I" worlds.]
Figure 5: Quantitative evaluation (walls and obstacles): The compositional state representation simplifies the physical prediction problem to only be over local arrangements of context balls and obstacles, even when the wall geometries are more complex and varied on a macroscopic scale. Therefore, it is not surprising that the models perform consistently across wall geometries. Note that the NPE consistently outperforms the other models, and this gap in performance increases with more varied internal obstacles for the cosine similarity of the velocity angle. This gap is more prominent in "L" and "U" geometries for relative error in magnitude.
We train on conceptually simpler "O" and "L" worlds and test on more complex "U" and "I" worlds. Variation in wall geometries adds to the difficulty of this extrapolation task. At most 12 context objects are present in the focus object's neighborhood at a time. The "U" geometries have 33 objects in the scene, the most out of all the wall geometries. As shown in Fig. 4b and Fig. 5, the NPE is robust to scenes with internal obstacles, even when it has not observed such scenes during training."}, {"section_index": "9", "section_name": "3.6 ANALYSIS", "section_text": "We explain the NPE's superior performance in generalization from the perspective of context selection, factorization, and compositionality. By design, all three ingredients transform the testing data distribution to be similar to the training data distribution, such that generalization across variable object count and different scene configurations happens naturally.
Consider generalizing across variable object count. The neighborhood mask selects context objects such that the NPE need only focus on a bounded subset of the objects regardless of the total number of objects. Factorizing the scene into pairwise interactions induces a causal structure between each context object and the focus object, such that no matter the object count, this causal structure remains consistent because the input is merely a set of object pairs. Composing these pairwise interactions together with a summation encourages the encoder output to be additive, such that the decoder receives the appropriate net effect from the context objects, regardless of how many there are.
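The following sketch makes this composition concrete; encoder, decoder, and effect_dim are hypothetical stand-ins for the learned networks and their output size, not the released implementation:
```python
import numpy as np

def npe_step(encoder, decoder, states, mask, focus_idx, effect_dim=50):
    """One NPE prediction for the focus object, composing the three
    ingredients above: the encoder is applied to each (focus, context)
    pair (factorization), non-neighbors are dropped via the mask
    (context selection), and the pairwise effects are summed before
    decoding (compositionality).

    encoder(focus_state, context_state) -> effect vector of length
    effect_dim; decoder(focus_state, summed_effect) -> next velocity.
    """
    focus = states[focus_idx]
    effects = [encoder(focus, states[j])
               for j in range(len(states))
               if j != focus_idx and mask[j]]
    net_effect = np.sum(effects, axis=0) if effects else np.zeros(effect_dim)
    return decoder(focus, net_effect)
```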
Consider generalizing across different scene configurations. Our state representation composes larger structures from smaller objects, just as many real-world objects are composed of smaller components. Therefore, even when wall geometries are complex and varied on a macroscopic scale, the input distribution to the NPE remains roughly the same, because the prediction problem still remains only over objects in a local glimpse of the entire scene."}, {"section_index": "10", "section_name": "4 RELATED WORK", "section_text": "Top-down and bottom-up approaches. A recent set of top-down approaches investigate probabilistic game physics engines as computational models for physical simulation in humans (Bates et al., 2015; Battaglia et al., 2013; Hamrick et al., 2011; Ullman et al., 2014). However, these models require a full specification of the physical laws and object geometries. Given such a specification, inferring how physical laws compose and apply to a given scenario is their strength, but automatically inferring from visual data what physical laws and object properties are present requires more work in inverse graphics (Chen et al., 2016; Kulkarni et al., 2014; 2015a;b; Whitney et al., 2016) and physics-based visual understanding (Brand, 1997; Wu et al., 2015; 2016). The NPE builds on top of the key structural assumptions of these top-down approaches, but its differentiable architecture opens a possible path for joint training with a vision model that can automatically adapt to the specific physical properties of the scene.
Bottom-up approaches attempt to bypass the intermediate step of finding physics representations and directly map visual observations to physical judgments (Lerer et al., 2016; Li et al., 2016; Mottaghi et al., 2015; 2016) or to passive (Lerer et al., 2016; Srivastava et al., 2015; Sutskever et al., 2009) and action-conditioned (Agrawal et al., 2016; Finn et al., 2016; Fragkiadaki et al., 2015b) motion prediction. Because these works historically have not been compositional in nature, they have had limited flexibility to transfer knowledge to conceptually similar worlds where the physics remains the same, but the number of objects or the complexity of object configurations varies. Moreover, the approaches above do not infer latent properties as the NPE does.
Other works have taken hybrid approaches similar to the NPE's, such as the NeuroAnimator (Grzeszczuk et al., 1998), one of the first works to train a neural network to emulate a physics simulator, and the interaction network (Battaglia et al., 2016), which learns to simulate physics over a graph of objects and their relations.
Sketching. The NPE combines a symbolic structure that assumes generic objects and interactions with a differentiability that allows the specific nature of these interactions to be learned from training. This approach of starting with a general sketch of a program and filling in the specifics is inspired by ideas from the program synthesis community (Ellis et al., 2015; Gaunt et al., 2016; Solar-Lezama, 2008). Examples of other work that combine symbolic with neural approaches via sketching include graph-based neural networks (Jain et al., 2016; Li et al., 2015; Scarselli et al., 2009) and transforming autoencoders (Hinton et al., 2011).
Composing functions for reuse. Just as the NPE repeatedly applies the same encoder to each object pair, iteratively applies itself to each object in the scene as a focus object, and recursively predicts future timesteps using predictions from previous timesteps, employing function reuse to achieve generalization is also featured in work such as Abelson et al. (1996); Andreas et al. (2016); Lake et al. (2015); Reed and de Freitas (2015); Socher et al. (2011). These works all assemble small subprograms to form larger programs. The NPE also dynamically composes its internal modules (encoder and decoder) based on the number of objects and the arrangement of context objects.
Object-based approaches. Fragkiadaki et al. (2015b) and Battaglia et al. (2016) are two notably similar works in the sense that our work and theirs all take an object-based approach to modeling the bouncing balls environment. Our work was inspired by Fragkiadaki et al. (2015b)'s iterative approach to predicting the motion of each object in turn, conditioned on a context.
The key contrast is that their model assumes no relational structure between objects beyond a visual attention window centered around the focus object, whereas ours explicitly processes the interaction between the focus and each context object.
If we compare their simulation videos (Fragkiadaki et al., 2015a) to ours, we see some specific and significant improvements evident in our approach. For example, in their work, the balls appear attracted to each other and to the walls; the balls appear to bounce along the walls even when no attractive force should be present. The balls rarely touch during collisions, but magnetically repel each other when at a short distance. The NPE does not exhibit these behaviors and tends to preserve the intuitive physical dynamics of colliding balls. In addition to these differences, we show strong predictive performance on generalizing to eight balls, five more than the number of balls in their videos. We also crucially show this performance under stronger generalization conditions, variable mass, and more complex scene configurations.
Recently, Battaglia et al. (2016) independently and in parallel developed an architecture that they call the interaction network for learning to model physical systems. They show how such an architecture can apply to several different kinds of physical systems, including n-body gravitational interactions and a string falling under gravity. Like their work, our model can simulate over many timesteps very effectively when only trained for next-timestep prediction, and can generalize to different world configurations and different numbers of objects.
Compared to the interaction network, a main difference in our architecture is that ours does not take object relations as explicit input, but instead learns the nature of these relations by constraining attention to a neighborhood set of objects. Another difference is in function reuse: we demonstrated that a trained NPE can automatically infer properties of its input such as mass without further retraining. In contrast, they train an additional classifier on top of their model to do inference. Their work also exhibits the four ingredients in our framework, and we view the similarities between their work and ours as converging evidence for the utility of object-based representations and compositional model architectures in learning to emulate general-purpose physics engines."}, {"section_index": "11", "section_name": "5 DISCUSSION", "section_text": "While this paper is not the first to explore learning a physics simulator, here we take the opportunity to highlight the value of this paper's contributions.
We hope these contributions can seed further research that builds on the NPE framework this paper proposes.
We showed that object-based representations, a context selection mechanism, factorization, and compositionality are useful ingredients for learning a physics simulator that generalizes across variable object count and different scene configurations with only spatially and temporally local computation. This generalization is possible because these ingredients transform the testing data distribution to be similar to the training data distribution.
The NPE makes few but strong assumptions about the nature of objects in a physical environment. These assumptions are inductive biases that not only give the NPE enough structure to help constrain it to model physical phenomena in terms of objects, but also are general enough for the NPE to learn physical dynamics almost exclusively from observation.
We applied the NPE to simple two-dimensional worlds of bouncing balls ranging in complexity. We showed that the NPE achieves low prediction error, extrapolates learned physical knowledge to previously unseen numbers of objects and world configurations, and can infer latent properties such as mass. We compared against several baselines designed to test the ingredients of the NPE framework and found superior performance when all these ingredients are combined in the NPE. Though we demonstrated the NPE in the balls environment with nonlinear dynamics and complex scene configurations, the state representation and NPE architecture we propose are quite general-purpose because they assume little about the specific dynamics of a scene.
This paper works toward emulating a general-purpose physics engine under a framework where visual and physical aspects of a scene are disentangled. Next steps include linking the NPE with perceptual models that extract properties such as position and mass from visual input. Learning to simulate is unsupervised learning of the structure of the environment. When a simulator like the NPE is incorporated into an agent in the context of model-based planning and model-based reinforcement learning, it becomes a prior on the environment that guides learning and reasoning. By combining the expressiveness of physics engines and the adaptability of neural networks in a compositional architecture that supports generalization in fundamental aspects of physical reasoning, the Neural Physics Engine is an important step towards lifting an agent's ability to think at a level of abstraction where the concept of physics is primitive."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Tejas Kulkarni for insightful discussions and guidance. We thank Ilker Yildirim, Erin Reynolds, Feras Saad, Andreas Stuhlmuller, Adam Lerer, Chelsea Finn, Jiajun Wu, and the anonymous reviewers for valuable feedback. We thank Liam Brummit, Kevin Kwok, and Guillermo Webster for help with matter-js. This work was supported by MIT's SuperUROP and UROP programs, and by the Center for Brains, Minds and Machines under NSF STC award CCF-1231216 and an ONR grant N00014-16-1-2007."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "J. R. Anderson. Cognitive psychology and its implications. WH Freeman/Times Books/Henry Holt & Co, 1990.
J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Learning to compose neural networks for question answering. In Proceedings of NAACL-HLT, pages 1545-1554, 2016.
C. J. Bates, I. Yildirim, J. B. Tenenbaum, and P. W.
Battaglia. Humans predict liquid dynamics using probabilistic simulation. 2015.
P. Battaglia, R. Pascanu, M. Lai, D. Jimenez Rezende, and K. Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, 2016.
P. W. Battaglia, J. B. Hamrick, and J. B. Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45):18327-18332, 2013.
M. Brand. Physics-based visual understanding. Computer Vision and Image Understanding, 65(2), 1997.
X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172-2180, 2016.
R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
K. Ellis, A. Solar-Lezama, and J. Tenenbaum. Unsupervised learning by program synthesis. In Advances in Neural Information Processing Systems, pages 973-981, 2015.
S. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016.
C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. arXiv preprint arXiv:1605.07157, 2016.
K. Fragkiadaki, P. Agrawal, S. Levine, and J. Malik. Intuitive physics. https://sites.google.com/site/intuitivephysicsnips15/, 2015a. (Accessed on 03/03/2017).
K. Fragkiadaki, P. Agrawal, S. Levine, and J. Malik. Learning visual predictive models of physics for playing billiards. arXiv preprint arXiv:1511.07404, 2015b.
A. L. Gaunt, M. Brockschmidt, R. Singh, N. Kushman, P. Kohli, J. Taylor, and D. Tarlow. Terpret: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428, 2016.
T. Gerstenberg, N. Goodman, D. A. Lagnado, and J. B. Tenenbaum. Noisy newtons: Unifying process and dependency accounts of causal attribution. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, 2012.
N. D. Goodman and J. B. Tenenbaum. Probabilistic models of cognition, 2016. URL http://probmods.org.
R. Grzeszczuk, D. Terzopoulos, and G. Hinton. Neuroanimator: Fast neural network emulation and control of physics-based models. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 9-20. ACM, 1998.
J. Hamrick, P. Battaglia, and J. B. Tenenbaum. Internal physics models guide probabilistic judgments about object dynamics. 2011.
G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In Artificial Neural Networks and Machine Learning - ICANN 2011, pages 44-51. Springer, 2011.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
A. Jain, A. R. Zamir, S. Savarese, and A. Saxena. Structural-rnn: Deep learning on spatio-temporal graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5308-5317, 2016.
T. D. Kulkarni, V. K. Mansinghka, P. Kohli, and J. B. Tenenbaum. Inverse graphics with probabilistic cad models. arXiv preprint arXiv:1407.1339, 2014.
T. D. Kulkarni, P. Kohli, J. B. Tenenbaum, and V. Mansinghka. Picture: A probabilistic programming language for scene perception. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4390-4399, 2015a.
T. D.
Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages 2530-2538, 2015b.
B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.
N. Leonard, S. Waghmare, and Y. Wang. rnn: Recurrent library for torch. arXiv preprint arXiv:1511.07889, 2015.
A. Lerer, S. Gross, R. Fergus, and J. Malik. Learning physical intuition of block towers by example. arXiv preprint arXiv:1603.01312, 2016.
W. Li, S. Azimi, A. Leonardis, and M. Fritz. To fall or not to fall: A visual approach to physical stability prediction. arXiv preprint arXiv:1604.00066, 2016.
Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
R. Mottaghi, H. Bagherinezhad, M. Rastegari, and A. Farhadi. Newtonian image understanding: Unfolding the dynamics of objects in static images. arXiv preprint arXiv:1511.04048, 2015.
S. Reed and N. de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009.
K. A. Smith and E. Vul. Sources of uncertainty in intuitive physics. Topics in cognitive science, 5(1):185-199, 2013.
R. Socher, E. H. Huang, J. Pennington, A. Y. Ng, and C. D. Manning. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. 2011.
A. Solar-Lezama. Program synthesis by sketching. ProQuest, 2008.
E. S. Spelke. Principles of object perception. Cognitive science, 14(1):29-56, 1990.
N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using lstms. 2015.
I. Sutskever, G. E. Hinton, and G. W. Taylor. The recurrent temporal restricted boltzmann machine. In Advances in Neural Information Processing Systems, pages 1601-1608, 2009.
T. Tieleman and G. Hinton. Lecture 6.5-RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
T. Ullman, A. Stuhlmuller, and N. Goodman. Learning physics from dynamical scenes. 2014.
W. F. Whitney, M. Chang, T. Kulkarni, and J. B. Tenenbaum. Understanding visual concepts with continuation learning. arXiv preprint arXiv:1602.06822, 2016.
J. Wu, I. Yildirim, J. J. Lim, B. Freeman, and J. Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In Advances in Neural Information Processing Systems, pages 127-135, 2015.
J. Wu, J. J. Lim, H. Zhang, J. B. Tenenbaum, and W. T. Freeman. Physics 101: Learning physical
object properties from unlabeled videos. In British Machine Vision Conference, 2016."}, {"section_index": "14", "section_name": "A IMPLEMENTATION", "section_text": "We trained all models using the rmsprop (Tieleman and Hinton, 2012) backpropagation algorithm with a Euclidean loss for 1,200,000 iterations, with a learning rate of 0.0003 and a learning rate decay of 0.99 every 2,500 training iterations, beginning at iteration 50,000. We used minibatches of size 50 and a 70-15-15 split for training, validation, and test data.
All models are implemented using the neural network libraries built by Collobert et al. (2011) and Leonard et al. (2015). The NPE encoder consists of a pairwise layer of 25 hidden units and a 5-layer feedforward network of 50 hidden units per layer, each with rectified linear activations. Because we use a binary mask to zero out non-neighboring objects, we implement the encoder layers without bias such that non-neighboring objects do not contribute to the encoder activations. The encoding parameters are shared across all object pairs. The decoder is a five-layer network with 50 hidden units per layer and rectified linear activations after all but the last layer. The NP encoder architecture is the same as the NPE encoder, but without the pairwise layer. The NP decoder architecture is the same as the NPE decoder. The LSTM has three layers of 100 hidden units and a linear layer after the last layer. It has rectified linear activations after each layer.
We informally explored several hyperparameters, varying the number of layers from 2 to 5, the hidden dimension from 50 to 100, and learning rates in {10^-5, 3·10^-5, 10^-4, 3·10^-4, 10^-3, 3·10^-3}. Though this is far from an exhaustive search, we found that the above hyperparameter settings work well.
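A small sketch of the learning rate schedule described above; the original implementation is in Torch7, so this Python version is only illustrative, and the exact iteration at which the first decay applies is our reading of the text:
```python
BASE_LR = 3e-4        # initial learning rate of 0.0003
DECAY = 0.99          # multiplicative decay factor
DECAY_EVERY = 2_500   # iterations between decays
DECAY_START = 50_000  # first iteration at which decay begins

def learning_rate(iteration):
    """Learning rate for rmsprop at a given training iteration."""
    if iteration < DECAY_START:
        return BASE_LR
    num_decays = (iteration - DECAY_START) // DECAY_EVERY + 1
    return BASE_LR * DECAY ** num_decays
```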
Experiment | Train - Test | LSTM (vel / pos) | NP (vel / pos) | NPE-NN (vel / pos) | NPE (vel / pos)
Prediction Task | 4 - 4 | 2.177e-03 / 2.276e-02 | 1.822e-03 / 1.923e-02 | 2.684e-03 / 2.283e-02 | 2.469e-04 / 4.362e-03
Prediction Task, Variable Mass | 4 - 4 | 3.521e-03 / 2.725e-02 | 2.534e-03 / 1.829e-02 | 4.278e-03 / 2.562e-02 | 5.312e-04 / 6.379e-03
Generalization Task | 345 - 3 | 1.783e-03 / 1.872e-02 | 5.844e-04 / 8.118e-03 | 1.667e-03 / 1.700e-02 | 1.651e-04 / 3.523e-03
Generalization Task | 345 - 4 | 2.237e-03 / 2.336e-02 | 1.172e-03 / 1.329e-02 | 2.554e-03 / 2.222e-02 | 2.372e-04 / 4.508e-03
Generalization Task | 345 - 5 | 2.839e-03 / 2.909e-02 | 1.944e-03 / 1.959e-02 | 3.543e-03 / 2.810e-02 | 3.069e-04 / 5.514e-03
Generalization Task | 345 - 6 | 3.757e-03 / 3.636e-02 | 2.897e-03 / 2.665e-02 | 4.542e-03 / 3.381e-02 | 4.066e-04 / 6.676e-03
Generalization Task | 345 - 7 | 5.085e-03 / 4.546e-02 | 3.894e-03 / 3.395e-02 | 5.654e-03 / 3.944e-02 | 4.951e-04 / 7.858e-03
Generalization Task | 345 - 8 | 6.943e-03 / 5.595e-02 | 5.091e-03 / 4.182e-02 | 6.913e-03 / 4.604e-02 | 5.992e-04 / 9.174e-03
Generalization Task, Variable Mass | 345 - 3 | 2.663e-03 / 2.218e-02 | 2.228e-03 / 1.638e-02 | 2.785e-03 / 1.913e-02 | 3.546e-04 / 4.790e-03
Generalization Task, Variable Mass | 345 - 4 | 3.588e-03 / 2.784e-02 | 3.486e-03 / 2.375e-02 | 4.291e-03 / 2.563e-02 | 5.393e-04 / 6.215e-03
Generalization Task, Variable Mass | 345 - 5 | 4.719e-03 / 3.472e-02 | 4.918e-03 / 3.164e-02 | 5.848e-03 / 3.273e-02 | 6.983e-04 / 7.719e-03
Generalization Task, Variable Mass | 345 - 6 | 6.389e-03 / 4.302e-02 | 6.733e-03 / 3.982e-02 | 7.927e-03 / 4.092e-02 | 9.414e-04 / 9.398e-03
Generalization Task, Variable Mass | 345 - 7 | 8.581e-03 / 5.276e-02 | 8.746e-03 / 4.853e-02 | 1.012e-02 / 4.998e-02 | 1.196e-03 / 1.130e-02
Generalization Task, Variable Mass | 345 - 8 | 1.153e-02 / 6.469e-02 | 1.086e-02 / 5.724e-02 | 1.244e-02 / 5.967e-02 | 1.592e-03 / 1.367e-02
Different Scene Configurations | OL - O | 5.967e-03 / 5.546e-02 | 1.010e-03 / 1.358e-02 | N/A | 3.338e-04 / 5.921e-03
Different Scene Configurations | OL - L | 8.658e-03 / 6.995e-02 | 2.680e-03 / 2.663e-02 | N/A | 7.117e-04 / 1.019e-02
Different Scene Configurations | OL - U | 1.083e-02 / 7.765e-02 | 4.152e-03 / 3.201e-02 | N/A | 8.193e-04 / 1.141e-02
Different Scene Configurations | OL - I | 1.201e-02 / 7.947e-02 | 6.206e-03 / 3.565e-02 | N/A | 1.605e-03 / 1.482e-02
Figure 6: Error analysis on velocity and position: We summarize the error in velocity and position for each train-test variant of each experiment. Normalized velocity MSE is the first value in each model's cell (multiplying these values by the maximum velocity of 60 would give the actual velocity in pixels/timestep, where each timestep is about 0.1 seconds). The second value in each cell shows the error in Euclidean distance between the predicted position and the ground truth position of the ball. These have been normalized by the radius of the ball (60 pixels), so multiplying these values by 60 would give the actual Euclidean distance in pixels. The NPE consistently outperforms all baselines by 0.5 to 1 order of magnitude, and this is also reflected in the bottom row of Fig. 3a,b. Notice that experiments with variable mass exhibit only slightly higher error than their constant-mass variants, even when the variable-mass experiments contain masses that differ by a factor of 25. For the experiments with different scene configurations, we do not report error for the NPE-NN; the unnecessary computational complexity of operating on over 30 objects, and the degradation in performance without this mask evident from the other experiments, make the need for the neighborhood mask clear."}]
Skn9Shcxe
[{"section_index": "0", "section_name": "HIGHWAY AND RESIDUAL NETWORKS LEARN UNROLLED ITERATIVE ESTIMATION", "section_text": "Klaus Greff
The Swiss AI Lab IDSIA (USI-SUPSI)
Rupesh K. Srivastava & Jurgen Schmidhuber
The Swiss AI Lab IDSIA (USI-SUPSI) & NNAISENSE, Lugano, Switzerland
{klaus,rupesh,juergen}@idsia.ch"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "The past year saw the introduction of new architectures such as Highway networks (Srivastava et al., 2015a) and Residual networks (He et al., 2015) which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer.
In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation: a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of Highway and Residual networks. Finally, we provide preliminary experiments to discuss the similarities and differences between the two architectures."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep learning can be thought of as learning many levels of representation of the input which form a hierarchy of concepts (Deng & Yu, 2014; Goodfellow et al., 2016; LeCun et al., 2015) (but note that this is not the only view: cf. Schmidhuber (2015)). With a fixed computational budget, deeper architectures are believed to possess greater representational power and, consequently, higher performance than shallower models. Intuitively, each layer of a deep neural network computes a new level of representation. For convolutional networks, Zeiler & Fergus (2014) visualized the features computed by each layer, and demonstrated that they in fact become increasingly abstract with depth. We refer to this way of thinking about neural networks as the representation view, which probably dates back to Hubel & Wiesel (1962). The representation view links the layers in a network to the abstraction levels of their representations, and as such represents a pervasive assumption in many recent publications, including He et al. (2015), who describe the success of their Residual networks like this: "Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset."
[Figure 1 diagram: layers, gates, and transform blocks grouped into stages by dimensionality changes, for a residual network and a highway network.]
Figure 1: Illustrating our usage of blocks and stages in Highway and Residual networks.
Recently, training feedforward networks with hundreds of layers has become feasible through the invention of Highway networks (Srivastava et al., 2015a) and Residual networks (ResNets; He et al., 2015). The latter have been widely successful in computer vision, advancing the state of the art on many benchmarks and winning several pattern recognition competitions (He et al., 2015), while Highway networks have been used to improve language modeling (Kim et al., 2015; Jozefowicz et al., 2016; Zilly et al., 2016) and translation (Lee et al., 2016). Both architectures have been introduced
with the explicit goal of training deeper models.
There are, however, some surprising findings that seem to contradict the applicability of the representation view to these very deep networks. For example, it has been reported that removing almost any layer from a trained Highway or Residual network has only minimal effect on its overall performance (Srivastava et al., 2015b; Veit et al., 2016). This idea has been extended to a layerwise dropout as a regularizer for ResNets (Huang et al., 2016b). But if each layer supposedly builds a new level of representation from the previous one, then removing any layer should critically disrupt the input for the following layer. So how is it possible that doing so seems to have only a negligible effect on the network output? Veit et al. (2016) even demonstrated that shuffling some of the layers in a trained ResNet barely affects performance.
In this paper, we propose a new interpretation that reconciles the representation view with the operation of Highway and Residual networks: functional blocks in these networks do not compute entirely new representations; instead, they engage in an unrolled iterative estimation of representations that refine/improve upon their input representation, thus preserving feature identity. The transition to a new level of representation occurs when a dimensionality change (through projection) separates two groups of blocks, which we refer to as a stage (Figure 1). Taking this perspective, we are able to explain previously elusive findings such as the effects of lesioning and shuffling. Furthermore, we formalize this notion and use it to directly derive Residual and Highway networks. Finally, we present some preliminary experiments to compare these two architectures and investigate some of their relative advantages and disadvantages.
This section provides a brief survey of some of the findings and points of contention that seem to contradict a representation view of Highway and Residual networks.
Staying Close to the Inputs. The success of ResNets has been partly attributed to the fact that they obviate the need to learn the identity mapping, which is difficult. However, learning the negative identity (so that a feature can be replaced by a higher-level one) should be at least as difficult. The fact that the residual form is useful indicates that Residual blocks typically stay close to the input representation, rather than replacing it.
Surprisingly, increasing the depth of a network beyond a certain point often leads to a decline in performance even on the training set (Srivastava et al., 2015a). Since adding more layers cannot decrease representational power, this phenomenon is usually attributed to the vanishing gradient problem (Hochreiter, 1991). Therefore, even though deeper models are more powerful in principle, they often fall short in practice.
It has been argued that ResNets are better understood as ensembles of shallow networks (Huang et al., 2016b; Veit et al., 2016; Abdi & Nahavandi, 2016). According to this interpretation, ResNets implicitly average exponentially many subnetworks, each of which only uses a subset of the layers. But the question remains open as to how a layer in such a subnetwork can successfully operate with changing input representations. This, along with other findings, begs the question as to whether the representation view is appropriate for understanding these new architectures.
The analysis by Srivastava et al.
(2015a) shows that in trained Highway networks, the activity of the transform gates is often sparse for each individual sample, while their average activity over all training samples is non-sparse. Most units learn to copy their inputs and only replace features selectively. Again, this means that most of the features are propagated unchanged rather than being combined and changed between layers, an observation that contradicts the idea of building a new level of abstraction at each layer.
1 We refer to the building blocks of a ResNet--a few layers with an identity skip connection--as a Residual block (He et al., 2015). Analogously, in a Highway network, we refer to a collection of layers with a gated skip connection as a Highway block. See Figure 1 for an illustration.
Lesioning. If it were true that each layer computes a completely new set of features, then removing a layer from a trained network would completely change the input distribution for the next layer. We would then expect to see the overall performance drop to almost chance level. This is in fact what Veit et al. (2016) find for the 15-layer VGG network on CIFAR-10: removing any layer from the trained network sets the classification error to around 90%. But the lesioning studies conducted on Highway networks (Srivastava et al., 2015a) and ResNets (Veit et al., 2016) paint an entirely different picture: only a minor drop in performance is observed for any removed layer. This drop is more pronounced for the early layers and the layers that change dimensionality (i.e. number of filter maps and map sizes), but performance is always still far superior to random guessing.
Huang et al. (2016b) take lesioning one step further and drop out entire ResNet layers as a regularizer during training. They describe their method as "[...] a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time". The regularization effect of this procedure is explained as inducing an implicit ensemble of many shallow networks, akin to normal dropout. Note that this explanation requires a departure from the representation view in that each layer has to cope with the possibility of having its entire input layer removed. Otherwise, most shallow networks in the ensemble would perform no better than chance level, just like the lesioned VGG net.
Reshuffling. The link between layers and representation levels may be most clearly challenged by an experiment in Veit et al. (2016) where the layers of a trained 110-layer ResNet are reshuffled. Remarkably, error increases smoothly with the amount of reshuffling, and many re-orderings result only in a small increase in error. Note, however, that only layers within a stage are reshuffled, since the dimensionality of the swapped layers must match. Veit et al. (2016) take these results as evidence that ResNets behave as ensembles of exponentially many shallow networks.
The representation view has guided neural networks research by providing intuitions about the "meaning" of their computations. In this section we will augment the representation view to deal with the incongruities and hopefully enable future research on these very deep architectures to reap the same benefits. The target of our modification is the mapping of layers/blocks of the network to levels of abstraction.
At this point it is interesting to note that the one-to-one mapping of neural network layers to levels of abstraction is an implicit assumption rather than a stated part of the representation view. A recent deep learning textbook (Goodfellow et al., 2016) explicitly states: "[...] the depth of the flowchart of the computations needed to compute the representation of each concept may be much deeper than the graph of the concepts themselves." So in a strict sense the evidence from Section 2 does not in fact contradict a representation view of Residual and Highway networks. It only conflicts with the idea that each layer forms a new level of representation. We can therefore reconcile very deep networks with the representation view by explicitly giving up this assumption.
Figure 2: (a) A single neural network layer that directly computes the desired representation. (b) The unrolled iterative estimation stage (e.g. from a Residual network) stretches the computation over three layers by first providing a noisy estimate of that representation, but then iteratively refining it over the next two layers. (c) A classic group of three layers can also distribute the computation, but they would produce a new representation at each layer. The iterative estimation stage in (b) can be seen as a middle ground between a single classic neural network layer, (a), and multiple classic layers, (c).
Unrolled Iterative Estimation. We propose to think of blocks in Highway and Residual networks as performing unrolled iterative estimation of representations. By that we mean that the blocks in a stage work together to estimate and iteratively refine a single level of representation. The first layer in that stage already provides a (rough) estimate for the final representation. Subsequent layers in the stage then refine that estimate without changing the level of representation. So if the first layer in a stage detects simple shapes, then the rest of the layers in that stage will work at that level too.
Feature Identity. A stage that performs iterative estimation is different from one that computes a new level of representation at each block because it preserves the feature identity. They operate differently even if their structure and their final representations are equivalent, because of the way they treat intermediate representations. This is illustrated in Figure 2, where the iterative estimation stage, (b), is contrasted with a single classic block, (a), and multiple classic blocks, (c).
A good initial estimate for a representation should on average be correct even though it might have high variance. We can thus formalize the notion of "preserving feature identity" as being an unbiased estimator for the target representation. This means the units a_i^k in different layers k ∈ {1 ... L} are all estimators for the same latent feature A_i, where A_i refers to the (unknown) value towards which the i-th feature is converging. The unbiased estimator condition can then be written as the expected difference between the estimator and the final feature:
E_{x ∈ X}[a_i^k - A_i] = 0.    (1)
Note that both the a_i^k and A_i depend on the samples x of the data-generating distribution X and are thus random variables. The fact that they both depend on the same x is also the reason we need to keep them within the same expectation and cannot simply write E[a_i^k] = A_i."}, {"section_index": "3", "section_name": "3.1 HIGHWAY AND RESIDUAL NETWORKS", "section_text": "Both Highway and Residual networks address the problem of training very deep architectures by improving the error flow via identity skip connections that allow units to copy their inputs on to the next layer unchanged. This design principle was originally introduced in Long Short-Term Memory (LSTM) recurrent networks (Hochreiter & Schmidhuber, 1997), and mathematically these architectures correspond to a simplified LSTM network "unrolled" over time.
In Highway networks, for each unit there are two additional gating units, which control how much (typically non-linear) transformation is applied (transform gate T) and how much to just copy the activation from the corresponding unit in the previous layer (carry gate C). Let H(x) be a nonlinear parametric function of the inputs x (typically an affine projection followed by a pointwise non-linearity). Then a traditional feed-forward network layer can be written as:
y(x) = H(x)    (2)
The output of a Highway layer instead mixes the transformed and the carried input:
y(x) = H(x) · T(x) + x · C(x)    (3)
and in the common coupled variant the carry gate is tied to the transform gate, C(x) = 1 - T(x), giving:
y(x) = H(x) · T(x) + x · (1 - T(x))    (4)
ResNets simplify the Highway networks approach by reformulating the desired transformation as the input plus a residual F(x):
y(x) = F(x) + x    (5)
The rationale behind this is that it is easier to optimize the residual form than the original function. For the extreme case where the desired function is the identity, this amounts to the trivial task of pushing the residual to zero.
As with Highway networks, Residual networks can be viewed as unfolded recurrent neural networks of the particular mathematical form (one with an identity self-connection) of an LSTM cell. This has been explicitly pointed out by Liao & Poggio (2016), who also argue that this could allow Residual networks to emulate recurrent processing in the visual cortex and thus adds to their biological plausibility. Setting F(x) = T(x)[H(x) - x] converts Equation 5 to Equation 4, showing that both formulations differ only in the precise functional form for F. Alternatively, Residual networks can be seen as a particular case of Highway networks where C(x) = T(x) = 1 and the gates are not learned.
Equation 1 can be used to directly derive the ResNet equation (Equation 5). First, it follows that the expected difference between the outputs of two consecutive blocks in a stage is zero:
E[a_i^k - A_i] - E[a_i^{k-1} - A_i] = 0, and hence E[a_i^k - a_i^{k-1}] = 0.
Writing block k in residual form, a_i^k = a_i^{k-1} + F_i, this condition becomes E[F_i] = 0. Therefore, if the residual block F has a zero mean over the training set, then Equation 1 holds and it can be said to maintain feature identity. Note that this is a reasonable assumption, especially when using batch normalization."}, {"section_index": "4", "section_name": "3.3 DERIVING HIGHWAY NETWORKS", "section_text": "The coupled Highway formula (Equation 4) can be directly derived as an alternative way of ensuring Equation 1 if we assume H_i to be a new estimate of A_i. Highway layers then result from the optimal way to linearly combine the former estimate a_i^{k-1} with H_i such that the resulting a_i^k is a minimum variance estimate of A_i, i.e. requiring E[a_i^k - A_i] = 0 and that Var[a_i^k - A_i] is minimal.
Let α_1 = Var[a_i^{k-1} - A_i] - Cov[a_i^{k-1} - A_i, H_i - A_i] and α_2 = Var[H_i - A_i] - Cov[a_i^{k-1} - A_i, H_i - A_i]. The optimal linear combination is then given by the following estimator (see Section A.1 for the derivation):
a_i^k = α_2 / (α_1 + α_2) · a_i^{k-1} + α_1 / (α_1 + α_2) · H_i,
which, with T_i = α_1 / (α_1 + α_2), has exactly the coupled Highway form:
a_i^k = H_i · T_i + a_i^{k-1} · (1 - T_i).
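To make the functional forms above concrete, here is a minimal numerical sketch of a Residual block (Equation 5) and a coupled Highway block (Equation 4); the toy dimensions, the tanh transformation, and the shared weight matrix are illustrative assumptions, not the networks used later in the paper:
```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def residual_block(x, F):
    """Equation 5: the output is the input plus a learned residual."""
    return x + F(x)

def coupled_highway_block(x, H, T_logits):
    """Equation 4: per-unit convex combination of the carried input x
    and the new estimate H(x), mixed by the transform gate T."""
    T = sigmoid(T_logits(x))
    return H(x) * T + x * (1.0 - T)

# Toy usage with random affine maps; the -1.0 term mimics a negative
# transform gate bias, so blocks initially stay close to their input.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))
x = rng.normal(size=8)
y_res = residual_block(x, F=lambda v: np.tanh(W @ v))
y_hwy = coupled_highway_block(x, H=lambda v: np.tanh(W @ v),
                              T_logits=lambda v: W @ v - 1.0)
```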
"}, {"section_index": "5", "section_name": "4.1 IMPLICATIONS FOR HIGHWAY NETWORKS", "section_text": "In Highway networks with coupled gates the mixing coefficients always sum to one. This ensures that the expectation of the new estimate will always be correct (cf. Equation 14). The precise value of the mixing will only determine the variance of the new estimate. We can bound this variance to be less than or equal to the variance of the previous layer by restricting both mixing coefficients to be positive. In Highway networks this is done by using the logistic sigmoid activation function for the transform gate T_i. This restriction is equivalent to the assumption of α_1 and α_2 having the same sign. This assumption holds, for example, if the error of the new estimate, H_i - A_i, is independent of the old one, a_i^{k-1} - A_i, because in that case their covariance is zero and thus both alphas are positive.
Using the logistic sigmoid as the activation function for the transform gate further means that the pre-activation of T_i implicitly estimates log(α_1 / α_2). This is easy to see because the logistic sigmoid of that quantity recovers the optimal mixing coefficient:
σ(log(α_1 / α_2)) = 1 / (1 + α_2 / α_1) = α_1 / (α_1 + α_2).
For the simple case of independent estimates (Cov[a_i^{k-1} - A_i, H_i - A_i] = 0), this gives us another way of understanding the transform gate bias: it controls our initial belief in the variance of the layer's estimate as compared to the previous one. A low bias means that the layers on average produce a high-variance estimate, and should thus only contribute little, which seems a reasonable assumption for initialization.
[Figure 3 plots: average estimation error (mean and standard deviation) for each block of stages 1-4.]
Figure 3: Experimental corroboration of Equation 1. The average estimation error, an empirical estimate of the LHS in Equation 1, for each block of each stage (x-axis). It stays close to zero in all stages of a 50-layer ResNet trained on the ILSVRC-2015 dataset. The standard deviation of the estimation error decreases as depth increases in each stage (left to right), indicating iterative refinement of the representations."}, {"section_index": "6", "section_name": "4.2 EXPERIMENTAL CORROBORATION OF ITERATIVE ESTIMATION VIEW", "section_text": "The primary prediction of the iterative estimation view is that the estimation error for Highway or Residual blocks within the same stage should be zero in expectation. To empirically test this claim, we extract the intermediate layer outputs for 5000 validation set images using the 50-layer ResNet trained on the ILSVRC-2015 dataset from He et al. (2015). These are then used to compute the empirical mean and standard deviation of the estimation error over the validation subset, for all blocks in the four Residual stages of the network. Finally, the mean of the empirical mean and standard deviation is computed over the three spatial dimensions.
Figure 3 shows that for the first three stages, the mean estimation error is indeed close to zero. This indicates that it is valid to interpret the role of Residual blocks in this network as that of iteratively refining a representation. Moreover, in each stage the standard deviation of the estimation error decreases over successive blocks, indicating the convergence of the refinement procedure. We note that stage four (with three blocks) appears to be underestimating the representation values, indicating a probable weak link in the architecture.
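The measurement in Sec. 4.2 can be sketched as follows; approximating the unknown target A_i by the stage's final output is our assumption, consistent with, but not stated by, the procedure above:
```python
import numpy as np

def estimation_error_stats(block_outputs):
    """Per-block mean and std of the estimation error within a stage.

    block_outputs: array of shape (num_blocks, num_samples, num_units)
    holding each block's activations for the same validation inputs.
    The unknown target A is approximated by the stage's final output;
    under iterative estimation, E[a_k - A] should be near zero for
    every block k.
    """
    target = block_outputs[-1]
    err = block_outputs - target[None]
    return err.mean(axis=(1, 2)), err.std(axis=(1, 2))
```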
ResNets (He et al., 2015) and many other derived architectures share some common characteristics: they are divided into stages of Residual blocks that share the same dimensionality. In between these stages the input dimensionality changes, typically by down-sampling and an increase in the number of channels. These stages typically also increase in length: the early stages consist of fewer layers compared to later ones.
We can now interpret these design choices from an iterative estimation point of view. From this perspective the level of representation stays the same within each stage, through the use of identity shortcut connections. Between stages, the level of representation is changed by the use of a projection to change dimensionality. This means that we expect the type of features that are detected to be very similar within a stage and to jump in abstraction between stages. This view also suggests that the first few stages can be shorter, since low-level representations tend to be relatively simple and need little iterative refinement. The features of later stages, on the other hand, are likely complex with numerous inter-dependencies and therefore benefit more from iterative refinement.
Figure 4: Feature visualization from Chu et al. (2017), reproduced with kind permission of the authors. It shows how the response of a single filter (unit) evolves over the three blocks (shown from left to right) of stage 1 in a 50-layer ResNet trained on ImageNet. On the left of each visualization are the top 9 patches from the ImageNet validation set that maximally activated that filter. To the right, the corresponding guided backpropagation (Springenberg et al., 2014) visualizations are shown.
Many visualization studies (such as those by Zeiler & Fergus (2014)) have examined the activities in trained convolutional networks and found evidence supporting the representation view. However, these studies were conducted on networks not designed for iterative estimation. The interpretation above paints a different picture for networks which learn unrolled iterative estimation. In these networks, we should observe stages and not layers corresponding to levels of representation.
Indeed, visualization of Residual network features supports the iterative estimation view. In Figure 4 we reproduce visualizations from a study by Chu et al. (2017), who observe: "[...] residual layers of the same dimensionality learn features that get refined and sharpened". These visualizations show how the response of a single filter changes over three Residual blocks within the same stage of a 50-layer Residual network trained for image classification. Note that the filter appears to refine its response by including surrounding context, rather than changing it across blocks in the same stage. In the first block, the top nine activating patches for the filter include three light sources and six specular highlights. In later blocks, through the incorporation of spatial context, eight out of nine maximally activating patches are specular highlights. Similar refinement behavior is observed throughout the different stages of the network.
Another finding in line with this implication of the iterative estimation view is that in some cases sharing the weights of the Residual blocks within a stage does not deteriorate performance much (Liao & Poggio, 2016). Similarly, Lu & Renals (2015) shared the weights of the transform and carry gates of a thin and deep Highway network, while still achieving better performance than both normal deep neural networks and Residual networks.
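If blocks within a stage merely refine the same estimate, tying their parameters is a natural simplification; the following sketch is our minimal rendering of such a weight-shared stage, not the exact variants studied by Liao & Poggio (2016) or Lu & Renals (2015):
```python
def shared_stage(x, block, num_blocks=3):
    """Apply the *same* block repeatedly within a stage.

    Under the iterative estimation view each application refines the
    current estimate of the stage's representation, so sharing the
    block's parameters across iterations is a natural choice.
    """
    for _ in range(num_blocks):
        x = block(x)  # e.g. a residual block with fixed parameters
    return x
```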
"}, {"section_index": "7", "section_name": "4.4 REVISITING EVIDENCE AGAINST THE REPRESENTATION VIEW", "section_text": "Staying Close to the Inputs. When iteratively re-estimating a variable, staying close to the old value should be a more common operation than changing it significantly. This is the reason why the ResNet formulation makes sense: learning the identity is hard and it is needed frequently. It also explains sparse transform gate activity in trained Highway networks: these networks learn to dynamically and selectively update individual features, while keeping most of the representation intact.
Lesioning. Another implication of the iteration view is that processing in layers is incremental and somewhat interchangeable. Each layer (apart from the first) refines an already reasonable estimate of the representation. It follows that removing layers, as in the lesioning experiments, should have only a mild effect on the final result, because doing so does not change the overall representation the next layer receives, only its quality. The following layer can still perform mostly the same operation, even with a somewhat noisy input. Layer dropout (Huang et al., 2016b) amplifies this effect by explicitly training the network to work with a variable number of iterations. By dropping random layers it further penalizes iterations relying on each other, which could be another explanation for the regularization effect of the technique.
Shuffling. The layers within a stage should also be interchangeable to a certain degree, because they all work with the same input and output representations. Of course, this interchangeability is not without limitations. The network could learn to depend on a specific order of refinements, which would be disturbed by shuffling and lesioning. But we can expect these effects to be moderate in many cases, which is indeed what has been reported in the literature.
The preceding sections show that we can construct both Highway and Residual architectures mathematically grounded in learning unrolled iterative estimation. The common feature between these architectures is that they preserve feature identities, and the primary difference is that they have different biases towards switching feature identities. Unfortunately, since our current understanding of the computations required to solve complex problems is limited, it is extremely hard to say a priori which architecture may be more suitable for which type of problems. Therefore, in this section we perform two case studies comparing and contrasting their behavior experimentally. The studies are each based on applications for which Residual and Highway layers, respectively, have been effective."}, {"section_index": "8", "section_name": "5.1 IMAGE CLASSIFICATION", "section_text": "Deep Residual networks outperformed all other entries at the 2016 ImageNet classification challenge. In this study we compare the performance of 50-layer convolutional Highway and Residual networks for ImageNet classification. Our aim is not to examine the importance of depth for this task (shallower networks have already outperformed deep Residual networks on all original Residual network benchmarks (Huang et al., 2016a; Szegedy et al., 2016)). Instead, our goal is to fairly compare the two architectures, and to test the following claims regarding deep convolutional Highway networks (He et al., 2015; 2016; Veit et al., 2016):
1. They are harder to train, leading to stalled training or poor results.
2. They require extensive tuning of the initial bias, and even then produce much worse results compared to Residual networks.
3. They are wasteful in terms of parameters since they utilize extra learned gates, doubling the total parameters for the same number of units compared to a Residual layer.
We train a 50-layer convolutional Highway network based on the 50-layer Residual network from He et al. (2015).
The designs of the two networks are identical (including use of batch normalization (BN) after every convolution operation), except that unlike Residual blocks, the Highway blocks use two sets of layers to learn H and T and then combine them using the coupled Highway formulation. We train two slight variations of the Highway network: Highway, in which H has the same design as in a Residual block before addition, i.e. Conv-BN-ReLU-Conv-BN-ReLU-Conv-BN, and Highway-Full, in which an additional third ReLU operation is added. The design of T is Conv-BN-ReLU-Conv-BN-ReLU-Conv-BN-Sigmoid. As proposed initially for Highway layers, both H and T are learned using the same receptive fields and number of parameters. The transform gate biases are set to -1 at the start of training. For fair comparison, the number of feature maps throughout the Highway network is reduced such that the total number of parameters is close to the Residual network. The training algorithm and learning rate schedule are kept the same as those used for the Residual network.

(b) ILSVRC-2012 top-5 classification error, mean and std over 3 runs:

Variant             Top-5 Error
Highway             10.03 ± 0.17
Highway-Full        10.21 ± 0.03
Resnet               9.40 ± 0.18
Highway + BN         7.53 ± 0.05
Highway-Full + BN    7.29 ± 0.11
Resnet + BN          7.17 ± 0.14

The plots in Figure 5a show that the Residual network fits the data better: its final training loss is lower than the Highway network's. The final performance of both networks on the validation set (see Table 1b) is very similar, with the Residual network producing a slightly better top-5 classification error of 7.17% vs. 7.53% for the Highway network. The Highway-Full network produces even closer results with a mean error of 7.29%. These results contradict claims 1 and 2 above, since the Highway networks are easy to train without requiring any bias tuning. However, there is some support for claim 3, since the Highway network appears to slightly underfit compared to the Residual network, suggesting lower capacity for the same number of parameters.
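For concreteness, here is a minimal sketch of the coupled Highway block described above (assuming PyTorch; the channel widths are placeholders rather than the exact ImageNet configuration):

```python
import torch
import torch.nn as nn

def conv_bn(c_in, c_out, k):
    # Convolution followed by batch normalization, as used after every conv above.
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2),
                         nn.BatchNorm2d(c_out))

class CoupledHighwayBlock(nn.Module):
    """Coupled Highway block: y = H(x) * T(x) + x * (1 - T(x)).

    H follows the pre-addition Residual design (Conv-BN-ReLU-Conv-BN-ReLU-Conv-BN);
    T uses the same receptive fields and ends in a sigmoid.
    """
    def __init__(self, channels, bottleneck):
        super().__init__()
        self.h = nn.Sequential(
            conv_bn(channels, bottleneck, 1), nn.ReLU(),
            conv_bn(bottleneck, bottleneck, 3), nn.ReLU(),
            conv_bn(bottleneck, channels, 1),
        )
        self.t = nn.Sequential(
            conv_bn(channels, bottleneck, 1), nn.ReLU(),
            conv_bn(bottleneck, bottleneck, 3), nn.ReLU(),
            conv_bn(bottleneck, channels, 1), nn.Sigmoid(),
        )
        # Gate bias of -1 at the start of training; since BN follows the last
        # convolution, the bias is realized here via the BN shift parameter.
        nn.init.constant_(self.t[-2][1].bias, -1.0)

    def forward(self, x):
        t = self.t(x)
        return self.h(x) * t + x * (1.0 - t)
```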
Importance of Expressive Gating. The mismatch between the results above and claims 1 and 2 made by He et al. (2016) can be explained based on the importance of having sufficiently expressive transform gates. For experiments with Highway networks (which they refer to as Residual networks with exclusive gating), He et al. (2016) used 1×1 convolutions for the transform gate, instead of having the same receptive fields for the gates as the primary transformation (H), as done by Srivastava et al. (2015a). This change in design appears to be the primary cause of instabilities in learning, since the gates can no longer function effectively. Therefore, it is important to use equally expressive transformations for H and T in Highway networks.

Role of Batch Normalization. Since both architectures have built-in ease of optimization compared to plain networks, it is interesting to investigate the necessity of batch normalization for training these networks. Our derivation in Section 3.2 suggests that BN in Residual networks could take the role of an inductive bias towards iterative estimation by keeping the expected mean of the residual zero (cf. Equation 9). To investigate its role we train the networks above without any batch normalization. The resulting training curves are shown in Figure 5b of the supplementary.

We find that without BN both networks reach an even lower training error than before while performing worse on the validation set, indicating increased overfitting for both. This shows that BN is not necessary for training these networks and does not speed up learning. Interestingly, the effect is more pronounced for the Highway network, which now fits the data better than the ResNet. This contradicts claim 3, since a Highway network with the same number of parameters as a Residual network demonstrates slightly higher capacity. On the other hand both networks produce a higher validation error (10.03% and 9.40% for the Highway and Residual network respectively), indicating a clear case of overfitting. This means that batch normalization provides regularization benefits that can't easily be explained either by improved optimization or by the inductive bias for Residual networks.

"}, {"section_index": "9", "section_name": "5.2 LANGUAGE MODELING", "section_text": "Next we compare different functional forms (or variants) of the Highway network formulation for the case of character-aware language modeling. Kim et al. (2015) have shown that utilizing a few Highway fully connected layers instead of conventional plain layers improves model performance for a variety of languages. The architecture consists of a stack of convolutional layers followed by Highway layers and then an LSTM layer which predicts the next word based on the history. Similar architectures have since been utilized for obtaining substantial improvements for large-scale language modeling (Jozefowicz et al., 2016) and character level machine translation (Lee et al., 2016). Highway layers with coupled gates have been used in all these studies.

Only two to four Highway layers were necessary to obtain significant modeling improvements in the studies above. Thus, it is reasonable to assume that the central advantage of using Highway layers for this task is not easing of credit assignment over depth, but an improved modeling bias. To test how well Residual and other variants of Highway networks perform, we compare several language models trained on the Penn Treebank dataset using the same setup and code provided by Kim et al. (2015). We use the LSTM-Char-Large model, only changing the two Highway layers to different variants. The following variants are tested:

Coupled: the most commonly used Highway variant, derived in Section 3.3.

C-Only: a Highway variant with a carry gate but no transform gate (always set to one).

T-Only: a Highway variant with a transform gate but no carry gate (always set to one).

The test set perplexity of each model is shown in Table 1a; a short sketch of these functional forms follows below.
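The functional forms compared in Table 1a can be written down directly. In the following sketch (NumPy; h_fn, t_fn and c_fn are hypothetical stand-ins for the learned sub-networks H, T and C, where t_fn and c_fn are assumed to end in a sigmoid so their outputs lie in (0, 1)):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each function maps an input x to the layer output, per Table 1a.
def plain(x, h_fn):            return h_fn(x)
def residual(x, h_fn):         return h_fn(x) + x
def t_only(x, h_fn, t_fn):     return h_fn(x) * t_fn(x) + x
def c_only(x, h_fn, c_fn):     return h_fn(x) + x * c_fn(x)
def coupled(x, h_fn, t_fn):
    t = t_fn(x)                # a single gate both transforms and carries
    return h_fn(x) * t + x * (1.0 - t)
def full(x, h_fn, t_fn, c_fn): return h_fn(x) * t_fn(x) + x * c_fn(x)
```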
We find that the Full, Coupled and C-Only variants have similar performance, better than the T-Only variant and substantially better than the Residual variant. The Residual variant results in performance close to that obtained by using a single plain layer, even though four Residual layers are used. Learned gating of the identity connection is crucial for improving performance for this task.

Interestingly, Table 1a shows a significant advantage for all variants with a multiplicative gate on the inputs. These results suggest that in this setting it is crucial to dynamically replace parts of the input representation. Some features need to be changed drastically conditioned on other detected features such as word type, while other features need to be retained. As a result, even though Residual networks are compatible with iterative estimation, they may not be the best choice for tasks where mixing adaptive feature transform/replacement and reuse is required.

Recall that the Highway layers transform character-aware representations before feeding them into an LSTM layer. Thus the non-contextual word-level representations resulting from the convolutional layers are transformed into representations better suited for contextual language modeling. Since it is unlikely that the entire representation needs to change completely, this setting fits well with the iterative estimation perspective.

"}, {"section_index": "10", "section_name": "6 CONCLUSION", "section_text": "This paper offers a new perspective on Highway and Residual networks as performing unrolled iterative estimation. As an extension of the popular representation view, it stands in contrast to the optimization perspective from which these architectures have originally been introduced. According to the new view, successive layers (within a stage) cooperate to compute a single level of representation. Therefore, the first layer already computes a rough estimate of that representation, which is then iteratively refined by the successive layers. Unlike layers in a conventional neural network, which each compute a new representation, these layers therefore preserve feature identity.

We have further shown that both Residual and Highway networks can be directly derived from this new perspective. This offers a unified theory from which these architectures can be understood as two approaches to the same problem. This view further provides a framework from which to understand several surprising recent findings like resilience to lesioning, benefits of layer dropout, and the mild negative effects of layer reshuffling. Together with the derivations these results serve as compelling evidence for the validity of our new perspective.

Motivated by their conceptual similarities we set out to compare Highway and Residual networks. In preliminary experiments we found that they give very similar results for networks of equal size, thus refuting some claims that Highway networks would need more parameters, or that any form of gating impairs the performance of Residual networks. In another example, we found non-gated identity skip-connections to perform significantly worse, and offered a possible explanation: if the task requires dynamically replacing individual features, then the use of gating is beneficial.

The preliminary evidence presented in this report is meant as a starting point for further investigation. We hope that the unrolled iterative estimation perspective will provide valuable intuitions to help guide research into understanding, improving and possibly combining these exciting techniques.

"}, {"section_index": "11", "section_name": "ACKNOWLEDGEMENTS", "section_text": "The authors wish to thank Faustino Gomez, Bas Steunebrink, Jonathan Masci, Sjoerd van Steenkiste and Christian Osendorfer for their feedback and support. We are grateful to NVIDIA Corporation for providing us a DGX-1 as part of the Pioneers of AI Research award. This research was supported by the EU project "INPUT" (H2020-ICT-2015 grant no. 687795).

"}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Deng, Li and Yu, Dong. Deep Learning Methods and Applications. Foundations and Trends in Signal Processing, pp. 199-200, 2014.

Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep Learning. Book in preparation for MIT Press, 2016.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs], December 2015.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Identity Mappings in Deep Residual Networks. In Computer Vision - ECCV 2016, 2016.

Hochreiter, Sepp. Untersuchungen zu dynamischen neuronalen Netzen. Diploma, Technische Universität München, pp. 91, 1991.

Hochreiter, Sepp and Schmidhuber, Jürgen. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep Networks with Stochastic Depth. arXiv:1603.09382 [cs], March 2016b.

Hubel, David H. and Wiesel, Torsten N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1):106-154, 1962.

Jozefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Kim, Yoon, Jernite, Yacine, Sontag, David, and Rush, Alexander M. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.

LeCun, Yann, Bengio, Yoshua, and Hinton, Geoffrey. Deep learning. Nature, 521(7553):436-444, May 2015. ISSN 0028-0836. doi: 10.1038/nature14539.

Lee, Jason, Cho, Kyunghyun, and Hofmann, Thomas. Fully Character-Level Neural Machine Translation without Explicit Segmentation. arXiv preprint arXiv:1610.03017, 2016.

Springenberg, Jost Tobias, Dosovitskiy, Alexey, Brox, Thomas, and Riedmiller, Martin. Striving for Simplicity: The All Convolutional Net. arXiv:1412.6806 [cs], December 2014.

Srivastava, Rupesh K, Greff, Klaus, and Schmidhuber, Juergen. Training Very Deep Networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28, pp. 2377-2385. Curran Associates, Inc., 2015a.

Srivastava, Rupesh Kumar, Greff, Klaus, and Schmidhuber, Jürgen. Highway Networks. arXiv:1505.00387 [cs], May 2015b.

Veit, Andreas, Wilber, Michael, and Belongie, Serge. Residual Networks are Exponential Ensembles of Relatively Shallow Networks. arXiv:1605.06431 [cs], May 2016.

Zilly, Julian Georg, Srivastava, Rupesh Kumar, Koutník, Jan, and Schmidhuber, Jürgen. Recurrent Highway Networks. arXiv:1607.03474 [cs], July 2016.
[Figure 5 panels, (a) with batch normalization and (b) without batch normalization: training loss and top-5 validation error over epochs for Resnet-50 and HighwayNet-50.]

Figure 5: Comparing 50-layer Highway vs. Residual networks on ILSVRC-2012 classification.

"}, {"section_index": "13", "section_name": "A.1 OPTIMAL LINEAR ESTIMATOR", "section_text": "Assume two random variables A and B that are both noisy measurements of a third (latent) random variable C.

We are looking for the linear estimator q(A, B) = q_0 + q_1 A + q_2 B of C with E[q − C] = 0 (unbiased) that has minimum variance.

E[q(A, B) − C] = 0
E[q_0 + q_1 A + q_2 B − C] = 0
E[q_0 + q_1 A − q_1 C + q_2 B − q_2 C + (q_1 + q_2 − 1)C] = 0
E[q_0 + q_1 (A − C) + q_2 (B − C) + (q_1 + q_2 − 1)C] = 0
q_0 + (q_1 + q_2 − 1) E[C] = 0
E[C](1 − q_1 − q_2) = q_0

For this to hold for arbitrary E[C] we need:

q_0 = 0 and q_1 + q_2 = 1.

We can solve this using Lagrangian multipliers. For that we need to take the derivative of the following term w.r.t. q_1, q_2 and λ and set them to zero:

Var[q_1 A + q_2 B − C] − λ(q_1 + q_2 − 1)

The first equation is therefore:

d/dq_1 (Var[q_1 A + q_2 B − C] − λ(q_1 + q_2 − 1)) = 0
d/dq_1 Var[q_1 A + q_2 B − C] − λ = 0
d/dq_1 Var[q_1 (A − C) + q_2 (B − C)] − λ = 0
d/dq_1 (q_1^2 Var[A − C] + 2 q_1 q_2 Cov[A − C, B − C]) − λ = 0
2 q_1 σ_A^2 + 2 q_2 σ_AB − λ = 0

Analogously we get:

2 q_2 σ_B^2 + 2 q_1 σ_AB − λ = 0

and:

q_1 + q_2 = 1

Solving these equations gives us:

q_1 = (σ_B^2 − σ_AB) / (σ_A^2 − 2σ_AB + σ_B^2)    (15)
q_2 = (σ_A^2 − σ_AB) / (σ_A^2 − 2σ_AB + σ_B^2)    (16)

We can write our estimator in terms of α_1 = σ_B^2 − σ_AB and α_2 = σ_A^2 − σ_AB:

q = α_1/(α_1 + α_2) · A + α_2/(α_1 + α_2) · B    (17)"}]
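As a quick numerical sanity check of this derivation, the following sketch (NumPy; the noise covariance values are arbitrary choices) draws two correlated noisy measurements of a latent C and confirms that the analytic weights give the smallest variance among unbiased linear combinations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
c = rng.normal(size=n)                                  # latent variable C
e = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 2.0]], size=n)
a, b = c + e[:, 0], c + e[:, 1]                         # noisy measurements A and B

# sigma_a2 = Var[A - C], sigma_b2 = Var[B - C], sigma_ab = Cov[A - C, B - C]
sigma_a2, sigma_b2, sigma_ab = 1.0, 2.0, 0.3
q1 = (sigma_b2 - sigma_ab) / (sigma_a2 - 2 * sigma_ab + sigma_b2)   # Eq. (15)
q2 = (sigma_a2 - sigma_ab) / (sigma_a2 - 2 * sigma_ab + sigma_b2)   # Eq. (16)

print("analytic weights:", q1, q2)
print("var with analytic weights:", np.var(q1 * a + q2 * b - c))
for w in (0.0, 0.5, 1.0):                               # other unbiased weightings
    print(f"var with q1={w:.1f}:", np.var(w * a + (1 - w) * b - c))
```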
r1X3g2_xl | [{"section_index": "0", "section_name": "ADVERSARIAL TRAINING METHODS FOR SEMI-SUPERVISED TEXT CLASSIFICATION", "section_text": "Takeru Miyato 1,2,*, Andrew M Dai 2, Ian Goodfellow 3
takeru.miyato@gmail.com, adai@google.com, ian@openai.com. 1 Preferred Networks, Inc., ATR Cognitive Mechanisms Laboratories, Kyoto University. 2 Google Brain. 3 OpenAI"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Adversarial examples are examples that are created by making small perturbations to the input designed to significantly increase the loss incurred by a machine learning model (Szegedy et al., 2014; Goodfellow et al., 2015). Several models, including state of the art convolutional neural networks, lack the ability to classify adversarial examples correctly, sometimes even when the adversarial perturbation is constrained to be so small that a human observer cannot perceive it. Adversarial training is the process of training a model to correctly classify both unmodified examples and adversarial examples. It improves not only robustness to adversarial examples, but also generalization performance for original examples. Adversarial training requires the use of labels when training models that use a supervised cost, because the label appears in the cost function that the adversarial perturbation is designed to maximize. Virtual adversarial training (Miyato et al., 2016) extends the idea of adversarial training to the semi-supervised regime and unlabeled examples. This is done by regularizing the model so that given an example, the model will produce the same output distribution as it produces on an adversarial perturbation of that example. Virtual adversarial training achieves good generalization performance for both supervised and semi-supervised learning tasks.

Previous work has primarily applied adversarial and virtual adversarial training to image classification tasks. In this work, we extend these techniques to text classification tasks and sequence models. Adversarial perturbations typically consist of making small modifications to very many real-valued inputs. For text classification, the input is discrete, and usually represented as a series of high dimensional one-hot vectors. Because the set of high-dimensional one-hot vectors does not admit infinitesimal perturbation, we define the perturbation on continuous word embeddings instead of discrete word inputs. Traditional adversarial and virtual adversarial training can be interpreted both as a regularization strategy (Szegedy et al., 2014; Goodfellow et al., 2015; Miyato et al., 2016) and as defense against an adversary who can supply malicious inputs (Szegedy et al., 2014; Goodfellow et al., 2015). Since the perturbed embedding does not map to any word and the adversary presumably does not have access to the word embedding layer, our proposed training strategy is no longer intended as a defense against an adversary. We thus propose this approach exclusively as a means of regularizing a text classifier by stabilizing the classification function.

* This work was done when the author was at Google Brain.

Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We
extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself. The proposed method achieves state of the art results on multiple benchmark semi-supervised and purely supervised tasks. We provide visualizations and analysis showing that the learned word embeddings have improved in quality and that while training, the model is less prone to overfitting.

We show that our approach with neural language model unsupervised pretraining as proposed by Dai & Le (2015) achieves state of the art performance for multiple semi-supervised text classification tasks, including sentiment classification and topic classification. We emphasize that optimization of only one additional hyperparameter ε, the norm constraint limiting the size of the adversarial perturbations, achieved such state of the art performance. These results strongly encourage the use of our proposed method for other text classification tasks. We believe that text classification is an ideal setting for semi-supervised learning because there are abundant unlabeled corpora for semi-supervised learning algorithms to leverage. This work is the first work we know of to use adversarial and virtual adversarial training to improve a text or RNN model.

We also analyzed the trained models to qualitatively characterize the effect of adversarial and virtual adversarial training. We found that adversarial and virtual adversarial training improved word embeddings over the baseline methods.

"}, {"section_index": "2", "section_name": "2 MODEL", "section_text": "We denote a sequence of T words as {w(t) | t = 1, ..., T}, and a corresponding target as y. To transform a discrete word input to a continuous vector, we define the word embedding matrix V ∈ R^{(K+1)×D} where K is the number of words in the vocabulary and each row v_k corresponds to the word embedding of the i-th word. Note that the (K+1)-th word embedding is used as an embedding of an 'end of sequence (eos)' token, v_eos. As a text classification model, we used a simple LSTM-based neural network model, shown in Figure 1a. At time step t, the input is the discrete word w(t), and the corresponding word embedding is v(t). We additionally tried the bidirectional LSTM architecture (Graves & Schmidhuber, 2005) since this is used by the current state of the art method (Johnson & Zhang, 2016b). For constructing the bidirectional LSTM model for text classification, we add an additional LSTM on the reversed sequence to the unidirectional LSTM model described in Figure 1. The model then predicts the label on the concatenated LSTM outputs of both ends of the sequence.

Figure 1: (a) LSTM-based text classification model. (b) The model with perturbed embeddings.

In adversarial and virtual adversarial training, we train the classifier to be robust to perturbations of the embeddings, shown in Figure 1b. These perturbations are described in detail in Section 3. At present, it is sufficient to understand that the perturbations are of bounded norm. The model could trivially learn to make the perturbations insignificant by learning embeddings with very large norm. To prevent this pathological solution, when we apply adversarial and virtual adversarial training to the model we defined above, we replace the embeddings v_k with normalized embeddings u_k, defined as:

u_k = (v_k − E(v)) / sqrt(Var(v)),  where  E(v) = Σ_{j=1}^K f_j v_j,  Var(v) = Σ_{j=1}^K f_j (v_j − E(v))^2    (1)

where f_i is the frequency of the i-th word, calculated within all training examples.
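A minimal sketch of the normalization in Eq. (1) (NumPy; the embedding matrix V and the frequencies f below are synthetic placeholders):

```python
import numpy as np

def normalize_embeddings(V, f):
    """Frequency-weighted standardization of word embeddings, as in Eq. (1).

    V: (K, D) embedding matrix, one row per word.
    f: (K,) word frequencies over the training set, summing to 1.
    """
    mean = f @ V                        # E(v)   = sum_j f_j v_j, per dimension
    var = f @ (V - mean) ** 2           # Var(v) = sum_j f_j (v_j - E(v))^2
    return (V - mean) / np.sqrt(var)

V = np.random.randn(1000, 256)
f = np.ones(1000) / 1000                # uniform frequencies, for illustration
U = normalize_embeddings(V, f)
```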
Adversarial training (Goodfellow et al., 2015) is a novel regularization method for classifiers to improve robustness to small, approximately worst case perturbations. Let us denote x as the input and θ as the parameters of a classifier. When applied to a classifier, adversarial training adds the following term to the cost function:

−log p(y | x + r_adv; θ)  where  r_adv = argmin_{r, ‖r‖≤ε} log p(y | x + r; θ̂)    (2)

where r is a perturbation on the input and θ̂ is a constant set to the current parameters of a classifier. The use of the constant copy θ̂ rather than θ indicates that the backpropagation algorithm should not be used to propagate gradients through the adversarial example construction process. At each step of training, we identify the worst case perturbations r_adv against the current model p(y|x; θ̂) in Eq. (2), and train the model to be robust to such perturbations through minimizing Eq. (2) with respect to θ. However, we cannot calculate this value exactly in general, because exact minimization with respect to r is intractable for many interesting models such as neural networks. Goodfellow et al. (2015) proposed to approximate this value by linearizing log p(y | x; θ̂) around x. With a linear approximation and an L2 norm constraint in Eq. (2), the resulting adversarial perturbation is

r_adv = −εg/‖g‖_2  where  g = ∇_x log p(y | x; θ̂).

This perturbation can be easily computed using backpropagation in neural networks.

Virtual adversarial training (1) instead adds the following term to the cost function:

KL[p(· | x; θ̂) ‖ p(· | x + r_v-adv; θ)]    (3)

where  r_v-adv = argmax_{r, ‖r‖≤ε} KL[p(· | x; θ̂) ‖ p(· | x + r; θ̂)]    (4)

where KL[p‖q] denotes the KL divergence between distributions p and q. By minimizing Eq. (3), a classifier is trained to be smooth. This can be considered as making the classifier resistant to perturbations in directions to which it is most sensitive on the current model p(y|x; θ̂). Virtual adversarial loss Eq. (3) requires only the input x and does not require the actual label y, while adversarial loss defined in Eq. (2) requires the label y. This makes it possible to apply virtual adversarial training to semi-supervised learning. Although we also in general cannot analytically calculate the virtual adversarial loss, Miyato et al. (2016) proposed to calculate the approximated Eq. (3) efficiently with backpropagation.

(1) See Warde-Farley & Goodfellow (2016) for a recent review of adversarial training methods.
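To illustrate the construction of r_adv, here is a toy sketch (NumPy; a logistic-regression classifier stands in for the neural network so that the gradient g has a closed form, whereas in the models above it is computed with backpropagation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_perturbation(x, y, w, eps):
    """r_adv = -eps * g / ||g||_2 with g = grad_x log p(y | x).

    Closed-form gradient for a logistic model p(y=1|x) = sigmoid(w.x), y in {0, 1}.
    """
    g = (y - sigmoid(w @ x)) * w
    return -eps * g / np.linalg.norm(g)

rng = np.random.default_rng(0)
w, x = rng.normal(size=20), rng.normal(size=20)
r = adversarial_perturbation(x, 1, w, eps=1.0)
# The perturbation lowers the log-likelihood of the true label:
assert np.log(sigmoid(w @ (x + r))) < np.log(sigmoid(w @ x))
```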
As described in Sec. 2, in our work we apply the adversarial perturbation to word embeddings, rather than directly to the input. To define adversarial perturbation on the word embeddings, let us denote a concatenation of a sequence of (normalized) word embedding vectors [u(1), u(2), ..., u(T)] as s, and the model conditional probability of y given s as p(y | s; θ) where θ are model parameters. Then we define the adversarial perturbation r_adv on s as:

r_adv = −εg/‖g‖_2  where  g = ∇_s log p(y | s; θ̂)    (5)

The corresponding adversarial loss over the N labeled examples is then defined as:

L_adv(θ) = −(1/N) Σ_{n=1}^N log p(y_n | s_n + r_adv,n; θ)    (6)

In virtual adversarial training on our text classification model, at each training step, we calculate the below approximated virtual adversarial perturbation:

r_v-adv = εg/‖g‖_2  where  g = ∇_{s+d} KL[p(· | s; θ̂) ‖ p(· | s + d; θ̂)]    (7)

where d is a TD-dimensional small random vector. This approximation corresponds to a 2nd-order Taylor expansion and a single iteration of the power method on Eq. (3) as in previous work (Miyato et al., 2016). Then the virtual adversarial loss is defined as:

L_v-adv(θ) = (1/N′) Σ_{n′=1}^{N′} KL[p(· | s_{n′}; θ̂) ‖ p(· | s_{n′} + r_v-adv,n′; θ)]    (8)

where N′ is the number of both labeled and unlabeled examples.
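A toy sketch of the single power-method step in Eq. (7) (NumPy; a linear-softmax model stands in for the classifier so that the gradient of the KL divergence has a closed form; in the paper it is computed with backpropagation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def virtual_adversarial_perturbation(s, W, eps, xi=1e-2, seed=0):
    """One power-method step of Eq. (7) for a toy model p(.|s) = softmax(W s).

    For this model, grad_{s+d} KL[p(.|s) || p(.|s+d)] = W.T @ (p(s+d) - p(s)).
    """
    d = xi * np.random.default_rng(seed).normal(size=s.shape)  # small random d
    g = W.T @ (softmax(W @ (s + d)) - softmax(W @ s))
    return eps * g / np.linalg.norm(g)

W = np.random.default_rng(1).normal(size=(5, 32))   # 5 classes, 32-dim "embedding"
s = np.random.default_rng(2).normal(size=32)
r = virtual_adversarial_perturbation(s, W, eps=5.0)
```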
T reduce runtime on GPU, we used truncated backpropagation up to 400 words from each end of th. sequence. For regularization of the recurrent language model, we applied dropout (Srivastava et al 2014) on the word embedding layer with 0.5 dropout rate..\nFor the bidirectional LSTM model, we used 512 hidden units LSTM for both the standard order and reversed order sequences, and we used 256 dimensional word embeddings which are shared with both of the LSTMs. The other hyperparameters are the same as for the unidirectional LSTM. We tested the bidirectional LSTM model on IMDB, Elec and RCV because there are relatively long sentences in the datasets.\nPretraining with a recurrent language model was very effective on classification performance on al the datasets we tested on and so our results in Section[5lare with this pretraining.\nAfter pre-training, we trained the text classification model shown in Figure[1a|with adversarial and. virtual adversarial training as described in Section[3] Between the softmax layer for the target y and. the final output of the LSTM, we added a hidden layer, which has dimension 3O on IMDB, Elec and. Rotten Tomatoes, and 128 on DBpedia and RCV1. The activation function on the hidden layer was ReLU(Jarrett et al.]2009, Nair & Hinton]2010] Glorot et al.]2011). For optimization, we again used the Adam optimizer, with 0.0005 initial learning rate 0.9998 exponential decay. Batch sizes are 64 on. IMDB, Elec, RCV1, and 128 on DBpedia. For the Rotten Tomatoes dataset, for each step, we take a batch of size 64 for calculating the loss of the negative log-likelihood and adversarial training, and. 512 for calculating the loss of virtual adversarial training. Also for Rotten Tomatoes, we used texts. with lengths T less than 25 in the unlabeled dataset. We iterated 10,o00 training steps on all datasets. except IMDB and DBpedia, for which we used 15,000 and 20,000 training steps respectively. We. again applied gradient clipping with the norm as 1.0 on all the parameters except the word embedding. We also used truncated backpropagation up to 400 words, and also generated the adversarial and. virtual adversarial perturbation up to 400 words from each end of the sequence..\nFor each dataset, we divided the original training set into training set and validation set, and we roughl. optimized some hyperparameters shared with all of the methods; (model architecture, batchsize training steps) with the validation performance of the base model with embedding dropout. For eacl method, we optimized two scalar hyperparameters with the validation set. These were the dropou rate on the embeddings and the norm constraint e of adversarial and virtual adversarial training. Note that for adversarial and virtual adversarial training, we generate the perturbation after applying embedding dropout, which we found performed the best. We did not do early stopping with these. methods. The method with only pretraining and embedding dropout is used as the baseline (referrec. to as Baseline in each table).\nFigure2|shows the learning curves on the IMDB test set with the baseline method (only embedding. dropout and pretraining), adversarial training, and virtual adversarial training. We can see in Figure2a that adversarial and virtual adversarial training achieved lower negative log likelihood than the. baseline. Furthermore, virtual adversarial training, which can utilize unlabeled data, maintained this low negative log-likelihood while the other methods began to overfit later in training. 
Regarding adversarial and virtual adversarial loss in Figure 2b and 2c, we can see the same tendency as for negative log likelihood; virtual adversarial training was able to keep these values lower than other methods. Because adversarial training operates only on the labeled subset of the training data, it eventually overfits even the task of resisting adversarial perturbations.

[Figure 2 panels, plotted against training steps for the Baseline, Adversarial and Virtual adversarial methods: (a) negative log likelihood, (b) L_adv(θ), (c) L_v-adv(θ).]

Figure 2: Learning curves of (a) negative log likelihood, (b) adversarial loss (defined in Eq. (6)) and (c) virtual adversarial loss (defined in Eq. (8)) on IMDB. All values were evaluated on the test set. Adversarial and virtual adversarial loss were evaluated with ε = 5.0. The optimal value of ε differs between adversarial training and virtual adversarial training, but the value of 5.0 performs very well for both and provides a consistent point of comparison.

Table 2 shows the test performance on IMDB with each training method. 'Adversarial + Virtual Adversarial' means the method with both adversarial and virtual adversarial loss with the shared norm constraint ε. With only embedding dropout, our model achieved a 7.39% error rate. Adversarial and virtual adversarial training improved the performance relative to our baseline, and virtual adversarial training achieved performance on par with the state of the art, 5.91% error rate. This is despite the fact that the state of the art model requires training a bidirectional LSTM whereas our model only uses a unidirectional LSTM. We also show results with a bidirectional LSTM. Our bidirectional LSTM model has the same performance as a unidirectional LSTM with virtual adversarial training.

A common misconception is that adversarial training is equivalent to training on noisy examples. Noise is actually a far weaker regularizer than adversarial perturbations because, in high dimensional input spaces, an average noise vector is approximately orthogonal to the cost gradient. Adversarial perturbations are explicitly chosen to consistently increase the cost. To demonstrate the superiority of adversarial training over the addition of noise, we include control experiments which replaced adversarial perturbations with random perturbations from a multivariate Gaussian with scaled norm on each embedding in the sequence. In Table 2, 'Random perturbation with labeled examples' is the method in which we replace r_adv with random perturbations, and 'Random perturbation with labeled and unlabeled examples' is the method in which we replace r_v-adv with random perturbations. Every adversarial training method outperformed every random perturbation method.

Table 2: Test performance on the IMDB sentiment classification task. * indicates using pretrained embeddings of CNN and bidirectional LSTM.

Method                                                       Test error rate
Baseline (without embedding normalization)                   7.33%
Baseline                                                     7.39%
Random perturbation with labeled examples                    7.20%
Random perturbation with labeled and unlabeled examples      6.78%
Adversarial                                                  6.21%
Virtual Adversarial                                          5.91%
Adversarial + Virtual Adversarial                            6.09%
Virtual Adversarial (on bidirectional LSTM)                  5.91%
Adversarial + Virtual Adversarial (on bidirectional LSTM)    6.02%
Full+Unlabeled+BoW (Maas et al., 2011)                       11.11%
Transductive SVM (Johnson & Zhang, 2015b)                    9.99%
NBSVM-bigrams (Wang & Manning, 2012)                         8.78%
Paragraph Vectors (Le & Mikolov, 2014)                       7.42%
SA-LSTM (Dai & Le, 2015)                                     7.24%
One-hot bi-LSTM* (Johnson & Zhang, 2016b)                    5.94%

To visualize the effect of adversarial and virtual adversarial training on embeddings, we examined embeddings trained using each method. Table 3 shows the 10 top nearest neighbors to 'good' and 'bad' with trained embeddings. The baseline and random methods are both strongly influenced by the grammatical structure of language, due to the language model pretraining step, but are not strongly influenced by the semantics of the text classification task. For example, 'bad' appears in the list of nearest neighbors to 'good' on the baseline and the random perturbation method. Both 'bad' and 'good' are adjectives that can modify the same set of nouns, so it is reasonable for a language model to assign them similar embeddings, but this clearly does not convey much information about the actual meaning of the words. Adversarial training ensures that the meaning of a sentence cannot be inverted via a small change, so these words with similar grammatical role but different meaning become separated. When using adversarial and virtual adversarial training, 'bad' no longer appears in the 10 top nearest neighbors to 'good'. 'bad' falls to the 19th nearest neighbor for adversarial training and 21st nearest neighbor for virtual adversarial training, with cosine distances of 0.463 and 0.464 respectively. For the baseline and random perturbation method, the cosine distances were 0.361 and 0.377, respectively. In the other direction, the nearest neighbors to 'bad' included 'good' as the 4th nearest neighbor for the baseline method and random perturbation method. For both adversarial methods, 'good' drops to the 36th nearest neighbor of 'bad'.

Table 3: 10 top nearest neighbors to 'good' and 'bad' with the word embeddings trained on each method. We used cosine distance for the metric. 'Baseline' means training with embedding dropout and 'Random' means training with random perturbation with labeled examples. 'Adversarial' and 'Virtual Adversarial' mean adversarial training and virtual adversarial training.

Nearest neighbors to 'good':
     Baseline       Random         Adversarial    Virtual Adversarial
1    great          great          decent         decent
2    decent         decent         great          great
3    bad            excellent      nice           nice
4    excellent      nice           fine           fine
5    Good           Good           entertaining   entertaining
6    fine           bad            interesting    interesting
7    nice           fine           Good           Good
8    interesting    interesting    excellent      cool
9    solid          entertaining   solid          enjoyable
10   entertaining   solid          cool           excellent

Nearest neighbors to 'bad':
     Baseline       Random         Adversarial    Virtual Adversarial
1    terrible       terrible       terrible       terrible
2    awful          awful          awful          awful
3    horrible       horrible       horrible       horrible
4    good           good           poor           poor
5    Bad            poor           BAD            BAD
6    BAD            BAD            stupid         stupid
7    poor           Bad            Bad            Bad
8    stupid         stupid         laughable      laughable
9    Horrible       Horrible       lame           lame
10   horrendous     horrendous     Horrible       Horrible

We also investigated the 15 nearest neighbors to 'great' and its cosine distances with the trained embeddings. We saw that the cosine distances on adversarial and virtual adversarial training (0.159-0.331) were much smaller than the ones on the baseline and random perturbation method (0.244-0.399). The much weaker positive word 'good' also moved from the 3rd nearest neighbor to the 15th after virtual adversarial training.

"}, {"section_index": "4", "section_name": "5.2 TEST PERFORMANCE ON ELEC, RCV1 AND ROTTEN TOMATOES DATASET", "section_text": "Table 4 shows the test performance on the Elec and RCV1 datasets. We can see our proposed method improved test performance on the baseline method and achieved state of the art performance on both datasets, even though the state of the art method uses a combination of CNN and bidirectional LSTM models. Our unidirectional LSTM model improves on the state of the art method and our method with a bidirectional LSTM further improves results on RCV1.
The reason why the bidirectional models have better performance on the RCV1 dataset would be that, on the RCV1 dataset, there are some very long sentences compared with the other datasets, and the bidirectional model could better handle such long sentences with the shorter dependencies from the reverse order sentences.

Table 4: Test performance on the Elec and RCV1 classification tasks. * indicates using pretrained embeddings of CNN, and † indicates using pretrained embeddings of CNN and bidirectional LSTM.

Method                                                       Elec     RCV1
Baseline                                                     6.24%    7.40%
Adversarial                                                  5.61%    7.12%
Virtual Adversarial                                          5.54%    7.05%
Adversarial + Virtual Adversarial                            5.40%    6.97%
Virtual Adversarial (on bidirectional LSTM)                  5.55%    6.71%
Adversarial + Virtual Adversarial (on bidirectional LSTM)    5.45%    6.68%
Transductive SVM (Johnson & Zhang, 2015b)                    16.41%   10.77%
NBLM (Naive Bayes logistic regression model) (Johnson & Zhang, 2015a)   8.11%   13.97%
One-hot CNN* (Johnson & Zhang, 2015b)                        6.27%    7.71%
One-hot CNN† (Johnson & Zhang, 2016b)                        5.87%    7.15%
One-hot bi-LSTM† (Johnson & Zhang, 2016b)                    5.55%    8.52%

Table 5: Test performance on the Rotten Tomatoes sentiment classification task. * indicates using pretrained embeddings from word2vec Google News, and † indicates using unlabeled data from Amazon reviews.

Method                                       Test error rate
Baseline                                     17.9%
Adversarial                                  16.8%
Virtual Adversarial                          19.1%
Adversarial + Virtual Adversarial            16.6%
NBSVM-bigrams (Wang & Manning, 2012)         20.6%
CNN* (Kim, 2014)                             18.5%
AdaSent* (Zhao et al., 2015)                 16.9%
SA-LSTM† (Dai & Le, 2015)                    16.7%

Table 5 shows test performance on the Rotten Tomatoes dataset. Adversarial training was able to improve over the baseline method, and with both adversarial and virtual adversarial cost, achieved almost the same performance as the current state of the art method. However the test performance of only virtual adversarial training was worse than the baseline. We speculate that this is because the Rotten Tomatoes dataset has very few labeled sentences and the labeled sentences are very short. In this case, the virtual adversarial loss on unlabeled examples overwhelmed the supervised loss, so the model prioritized being robust to perturbation rather than obtaining the correct answer.

Table 6 shows the test performance of each method on DBpedia. The 'Random perturbation' is the same method as the 'Random perturbation with labeled examples' explained in Section 5.1. Note that DBpedia has only labeled examples, as we explained in Section 4, so this task is purely supervised learning. We can see that the baseline method has already achieved nearly the current state of the art performance, and our proposed method improves from the baseline method.

Table 6: Test performance on the DBpedia topic classification task.

Method                                            Test error rate
Baseline (without embedding normalization)        0.87%
Baseline                                          0.90%
Random perturbation                               0.85%
Adversarial                                       0.79%
Virtual Adversarial                               0.76%
Bag-of-words (Zhang et al., 2015)                 3.57%
Large-CNN (character-level) (Zhang et al., 2015)  1.73%
SA-LSTM (word-level) (Dai & Le, 2015)             1.41%
N-grams TFIDF (Zhang et al., 2015)                1.31%
SA-LSTM (character-level) (Dai & Le, 2015)        1.19%
Word CNN (Johnson & Zhang, 2016a)                 0.84%

Dropout (Srivastava et al., 2014) is a regularization method widely used for many domains including text. There are some previous works adding random noise to the input and hidden layer during training, to prevent overfitting (e.g. Sietsma & Dow, 1991; Poole et al., 2013). However, in our experiments and in previous works (Miyato et al., 2016), training with adversarial and virtual adversarial perturbations outperformed the method with random perturbations.

For semi-supervised learning with neural networks, a common approach, especially in the image domain, is to train a generative model whose latent features may be used as features for classification (e.g. Hinton et al., 2006; Maaløe et al., 2016). These models now achieve state of the art performance on the image domain. However, these methods require numerous additional hyperparameters with generative models, and the conditions under which the generative model will provide good supervised learning performance are poorly understood. By comparison, adversarial and virtual adversarial training requires only one hyperparameter, and has a straightforward interpretation as robust optimization.

Adversarial and virtual adversarial training resemble some semi-supervised or transductive SVM approaches (Joachims, 1999; Chapelle & Zien, 2005; Collobert et al., 2006; Belkin et al., 2006) in that both families of methods push the decision boundary far from training examples (or in the case of transductive SVMs, test examples). However, adversarial training methods insist on margins on the input space, while SVMs insist on margins on the feature space defined by the kernel function. This property allows adversarial training methods to achieve models with a more flexible function on the space where the margins are imposed. In our experiments (Tables 2 and 4) and in Miyato et al. (2016), adversarial and virtual adversarial training achieve better performance than SVM based methods.

There have also been semi-supervised approaches applied to text classification with both CNNs and RNNs. These approaches utilize 'view-embeddings' (Johnson & Zhang, 2015b; 2016b) which use the window around a word to generate its embedding. When these are used as a pretrained model for the classification model, they are found to improve generalization performance. These methods and our method are complementary as we showed that our method improved from a recurrent pretrained language model.

"}, {"section_index": "5", "section_name": "7 CONCLUSION", "section_text": "In our experiments, we found that adversarial and virtual adversarial training have good regularization performance in sequence models on text classification tasks. On all datasets, our proposed method exceeded or was on par with the state of the art performance. We also found that adversarial and virtual adversarial training improved not only classification performance but also the quality of word embeddings.
These results suggest that our proposed method is promising for other text domain tasks, such as machine translation (Sutskever et al., 2014), learning distributed representations of words or paragraphs (Mikolov et al., 2013; Le & Mikolov, 2014) and question answering tasks. Our approach could also be used for other general sequential tasks, such as for video or speech.

"}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank the developers of Tensorflow. We thank the members of Google Brain team for their warm support and valuable comments. This work is partly supported by NEDO.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. Neural probabilistic language models. In Innovations in Machine Learning, pp. 137-186. Springer, 2006.

Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In AISTATS, 2005.

Ronan Collobert, Fabian Sinz, Jason Weston, and Léon Bottou. Large scale transductive svms. Journal of Machine Learning Research, 7(Aug):1687-1712, 2006.

Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In NIPS, 2015.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, 2011.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.

Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602-610, 2005.

Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554, 2006.

Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In ICCV, 2009.

Thorsten Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.

Rie Johnson and Tong Zhang. Effective use of word order for text categorization with convolutional neural networks. NAACL HLT, 2015a.

Rie Johnson and Tong Zhang. Semi-supervised convolutional neural networks for text categorization via region embedding. In NIPS, 2015b.

Rie Johnson and Tong Zhang. Convolutional neural networks for text categorization: Shallow word-level vs. deep character-level. arXiv preprint arXiv:1609.00718, 2016a.

Rie Johnson and Tong Zhang. Supervised and semi-supervised text categorization using LSTM for region embeddings. In ICML, 2016b.

Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, 2014.

Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, et al. Dbpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167-195, 2015.

David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. Rcv1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361-397, 2004.

Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. In ICML, 2016.

Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In ACL: Human Language Technologies - Volume 1, 2011.

Julian McAuley and Jure Leskovec. Hidden factors and hidden topics: understanding rating dimensions with review text. In ACM Conference on Recommender Systems, 2013.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. In ICLR, 2016.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010.

Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, 2005.

J. Sietsma and R. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1), 1991.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.

Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL: Short Papers, 2012.

Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.

Han Zhao, Zhengdong Lu, and Pascal Poupart. Self-adaptive hierarchical sentence model. In IJCAI, 2015."}]
SkBsEQYll | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Many dimensionality reduction or manifold learning algorithms optimize for retaining the pairwise similarities, distances, or local neighborhoods of data points. Classical scaling (Cox & Cox, 2000), kernel PCA (Schölkopf et al., 1998), isomap (Tenenbaum et al., 2000), and LLE (Roweis & Saul, 2000) achieve this by performing an eigendecomposition of some similarity matrix to obtain a low dimensional representation of the original data. However, this is computationally expensive if a lot of training examples are available. Additionally, out-of-sample representations can only be created when the similarities to the original training examples can be computed (Bengio et al., 2004).

For some methods such as t-SNE (van der Maaten & Hinton, 2008), great effort was put into extending the algorithm to work with large datasets (van der Maaten, 2013) or to provide an explicit mapping function which can be applied to new data points (van der Maaten, 2009). Current attempts at finding a more general solution to these issues are complex and require the development of specific cost functions and constraints when used in place of existing algorithms (Bunte et al., 2012), which limits their applicability to new objectives.

In this paper we introduce a new neural network architecture, that we will denote as similarity encoder (SimEc), which is able to learn representations that can retain arbitrary pairwise relations present in the input space, even those obtained from unknown similarity functions such as human ratings. A SimEc can learn a linear or non-linear mapping function to project new data points into a lower dimensional embedding space. Furthermore, it can take advantage of large datasets since the objective function is optimized iteratively using stochastic mini-batch gradient descent. We show on both image and text datasets that SimEcs can, on the one hand, recreate solutions found by traditional methods such as kPCA or isomap, and, on the other hand, obtain meaningful embeddings from similarities based on human labels."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Additionally, we propose the new context encoder (ConEc) model, a variation of similarity encoders for learning word embeddings, which extends word2vec (Mikolov et al., 2013b) by using the local context of words as input to the neural network to create representations for out-of-vocabulary words and to distinguish between multiple meanings of words. This is shown to be advantageous, for example, if the word embeddings are used as features in a named entity recognition task, as demonstrated on the CoNLL 2003 challenge."}, {"section_index": "2", "section_name": "2 SIMILARITY ENCODERS", "section_text": "We propose a novel dimensionality reduction framework termed similarity encoder (SimEc), which can be used to learn a linear or non-linear mapping function for computing low dimensional representations of data points such that the original pairwise similarities between the data points in the input space are preserved in the embedding space. For this, we borrow the "bottleneck" neural network (NN) architecture idea from autoencoders (Tishby et al., 2000; Hinton & Salakhutdinov, 2006). Autoencoders aim to transform the high dimensional data points into low dimensional embeddings such that most of the data's variance is retained.
Their network architecture has two parts: The first part of the network maps the data points from the original feature space to the low dimensional embedding (at the bottleneck). The second part of the NN mirrors the first part and projects the embedding back to a high dimensional output. This output is then compared to the original input to compute the reconstruction error of the training samples, which is used in the backpropagation procedure to tune the network's parameters. After the training is complete, i.e. the low dimensional embeddings encode enough information about the original input samples to allow for their reconstruction, the second part of the network is discarded and only the first part is used to project data points into the low dimensional embedding space. Similarity encoders have a similar twofold architecture, where in the first part of the network, the data is mapped to a low dimensional embedding, and then in the second part (which is again only used during training), the embedding is transformed such that the error of the representation can be computed. However, since here the objective is to retain the (non-linear) pairwise similarities instead of the data's variance, the second part of the NN does not mirror the first like it does in the autoencoder architecture.

Figure 1: Similarity encoder (SimEc) architecture. A feed-forward NN maps the input x_i ∈ R^D to the embedding (bottleneck) y_i ∈ R^d, which a matrix in R^{d×N} projects to the output s'_i ∈ R^N, compared against the target s_i ∈ R^N.

For example, with two non-linear hidden layers in the first part of the network, the embedding of an input x_i ∈ R^D would be computed as

y_i = σ_1(σ_0(x_i W_0) W_1) W_2

where σ_0 and σ_1 denote your choice of non-linear activation functions (e.g. tanh, sigmoid, or relu), but there is no non-linearity applied after multiplying with W_2. The second part of the network then computes the output, the approximated similarities s'_i ∈ R^N, as

s'_i = σ_{-1}(y_i W_{-1})

These approximated similarities are then compared to the target similarities (for one data point this is the corresponding row s_i ∈ R^N of the similarity matrix S ∈ R^{N×N} of the N training samples) and the computed error is used to tune the network's parameters with backpropagation.
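A minimal sketch of such a similarity encoder (assuming PyTorch; the layer sizes and single hidden layer are placeholders) for the setup with a linear output layer and mean squared error, discussed next:

```python
import torch
import torch.nn as nn

class SimEc(nn.Module):
    def __init__(self, in_dim, embed_dim, n_targets, hidden=256):
        super().__init__()
        # First part: maps x_i to the embedding y_i (kept after training);
        # the last embedding layer stays linear.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, embed_dim),
        )
        # Second part: a single layer projecting y_i to the approximated
        # similarities s'_i (used during training only, then discarded).
        self.decoder = nn.Linear(embed_dim, n_targets, bias=False)

    def forward(self, x):
        y = self.encoder(x)
        return y, self.decoder(y)

model = SimEc(in_dim=784, embed_dim=2, n_targets=1000)
opt = torch.optim.Adam(model.parameters())
x, s = torch.randn(64, 784), torch.randn(64, 1000)  # mini-batch and target rows
_, s_approx = model(x)
loss = nn.functional.mse_loss(s_approx, s)
opt.zero_grad(); loss.backward(); opt.step()
```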
A regularization term is added to encourage the weights of the last layer (W_{-1}) to be orthogonal:

min (1/N) ∑_{i=1}^{N} ||s_i − s'_i||₂² + λ ||W_{-1} W_{-1}^⊤ − diag(W_{-1} W_{-1}^⊤)||₂²

where ||·||_p denotes the respective p-norms for vectors and matrices and λ is a hyperparameter to control the strength of the regularization.

In the second case, the target similarities are binary and it therefore makes sense to use a non-linear activation function in the final layer when computing the output of the network to ensure the approximated similarities are between 0 and 1 as well:

s'_i = σ_{-1}(y_i W_{-1})   with   σ_{-1}(z) = 1 / (1 + e^{−10(z−0.5)})

While the mean squared error between the target and approximated similarities would still be a natural choice of cost function to optimize, with the additional non-linearity in the output layer, learning might be slow due to small gradients and we therefore instead optimize the cross-entropy:

min −(1/N) ∑_{i=1}^{N} [ s_i ln(s'_i) + (1 − s_i) ln(1 − s'_i) ]

For a different application scenario, yet another setup might lead to the best results. When using SimEcs in practice, we recommend to first try the first setup, i.e. keeping the output layer linear and minimizing the mean squared error, as this often already gives quite good results.

After the training is completed, only the first part of the neural network, which maps the input to the embedding, is used to create the representations of new data points. Depending on the complexity of the feed-forward NN, the mapping function learned by similarity encoders can be linear or non-linear, and because of the iterative optimization using stochastic mini-batch gradient descent, large amounts of data can be utilized to learn optimal representations.³

³To speed up the training procedure and limit memory requirements for large datasets, the columns of the similarity matrix can also be subsampled (yielding S ∈ R^{N×n}), i.e. the number of target similarities (and the dimensionality of the output layer) is n < N; however, all N training examples can still be used as input to train the network.

Kernel PCA (kPCA) is a popular non-linear dimensionality reduction algorithm, which performs the eigendecomposition of a kernel matrix to obtain low dimensional representations of the data points (Scholkopf et al., 1998). However, if the kernel matrix is very large this becomes computationally very expensive. Additionally, there are constraints on possible kernel functions (they should be positive semi-definite) and new data points can only be embedded in the lower dimensional space if their kernel map (i.e. the similarities to the original training points) can be computed. As we show below, SimEc can optimize the same objective as kPCA but addresses these shortcomings.

The general idea is that both kPCA and SimEc embed the N data points in a feature space where the given target similarities can be approximated linearly (i.e. with the scalar product of the embedding vectors). When the error between the approximated (S') and the target similarities (S) is computed as the mean squared error, kPCA finds the optimal approximation by performing the eigendecomposition of the (centered) target similarity matrix, i.e.

S' = Y Y^⊤

where Y ∈ R^{N×d} is the low dimensional embedding of the data based on the eigenvectors belonging to the d largest eigenvalues of S.

In addition to the embedding itself, it is often desired to have a parametrized mapping function which can be used to project new (out-of-sample) data points into the embedding space. If the target similarity matrix is the linear kernel, i.e. S = X X^⊤ where X ∈ R^{N×D} is the given input data, this can easily be accomplished with traditional PCA. Here, the covariance matrix of the centered input data, i.e. C = X^⊤ X, is decomposed to obtain a matrix with parameters, W ∈ R^{D×d}, based on the eigenvectors belonging to the d largest eigenvalues of the covariance matrix. Then the optimal embedding (i.e.
the same solution obtained by linear kPCA) can be computed as

Y = X W

This serves as a mapping function, with which new data points can be easily projected into the lower dimensional embedding space.

When using a similarity encoder to embed data in a low dimensional space where the linear similarities are preserved, the SimEc's architecture would consist of a neural network with a single linear layer, i.e. the parameter matrix W_0, to project the input data X to the embedding Y = X W_0, and another matrix W_{-1} ∈ R^{d×N} used to approximate the similarities as

S' = Y W_{-1}

From these formulas one can immediately see the link between linear similarity encoders and PCA / linear kPCA: once the parameters of the neural network are tuned correctly, W_0 would correspond to the mapping matrix W found by PCA and W_{-1} could be interpreted as Y^⊤, i.e. Y would be the same eigenvector based embedding as found with linear kPCA.

Finding the corresponding function to map new data points into the embedding space is trivial for linear kPCA, but this is not the case for other kernel functions. While it is still possible to find the optimal embedding with kPCA for non-linear kernel functions, the mapping function remains unknown and new data points can only be projected into the embedding space if we can compute their kernel map, i.e. the similarities to the original training examples (Bengio et al., 2004). Some attempts were made to manually define an explicit mapping function to represent data points in the kernel feature space, however this only works for specific kernels and there exists no general solution (Rahimi & Recht, 2007). As neural networks are universal function approximators, with the right architecture similarity encoders could instead learn arbitrary mapping functions for unknown similarities to arrive at data driven kernel learning solutions."}, {"section_index": "3", "section_name": "2.2 MODEL OVERVIEW", "section_text": "The properties of similarity encoders are summarized in the following. The objective of this dimensionality reduction approach is to retain pairwise similarities between data points in the embedding space. This is achieved by tuning the parameters of a neural network to obtain a linear or non-linear mapping (depending on the network's architecture) from the high dimensional input to the low dimensional embedding. Since the cost function is optimized using stochastic mini-batch gradient descent, we can take advantage of large datasets for training. The embedding for new test points can be easily computed with the explicit mapping function in the form of the tuned neural network. And since there is no need to compute the similarity of new test examples to the original training data for out-of-sample solutions (like with kPCA), the target similarities can be generated by an unknown process such as human similarity judgments.
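The kPCA / PCA correspondence described above is easy to verify numerically. Below is a small sketch (our illustration, assuming centered data and a linear kernel): the embedding from the eigendecomposition of the similarity matrix agrees with the PCA projection up to sign flips of the components.

```python
import numpy as np

N, D, d = 500, 20, 3
X = np.random.randn(N, D)
X -= X.mean(axis=0)                          # center the data

# linear kPCA: eigendecomposition of the similarity matrix S = X X^T
S = X @ X.T
lam, V = np.linalg.eigh(S)                   # eigenvalues in ascending order
Y_kpca = V[:, -d:] * np.sqrt(lam[-d:])       # embedding from the top-d eigenvectors

# PCA: eigendecomposition of the covariance matrix C = X^T X
mu, W = np.linalg.eigh(X.T @ X)
Y_pca = X @ W[:, -d:]                        # explicit mapping applied to the data

# equal up to per-column sign flips
print(np.allclose(np.abs(Y_kpca), np.abs(Y_pca)))   # True
```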
In the following experiments we demonstrate that similarity encoders can, on the one hand, reach the same solution as kPCA, and, on the other hand, generate meaningful embeddings from human labels. To illustrate that this is independent of the type of data, we present results obtained both on the well known MNIST handwritten digits dataset as well as the 20 newsgroups text corpus. Further details as well as the code to replicate these experiments and more are available online.⁴

⁴https://github.com/cod3licious/simec/examples_simec.ipynb

We compare the embedding found with linear kPCA to that created with a linear similarity encoder (consisting of one linear layer mapping the input to the embedding and a second linear layer to project the embedding to the output, i.e. computing the approximated similarities). Additionally, we show that a non-linear SimEc can approximate the solution found with isomap (i.e. the eigendecomposition of the geodesic distance matrix). We found that for optimal results the kernel matrix used as the target similarity matrix for the SimEc should first be centered (as it is being done for kPCA as well (Muller et al., 2001)).

In a second step, we show that SimEcs can learn the mapping to a low dimensional embedding for arbitrary similarity functions and reliably create representations for new test samples without the need to compute their similarities to the original training examples, thereby going beyond the capabilities of kPCA. For both datasets we illustrate this by using the class labels assigned to the samples by human annotators to create the target similarity matrix for the training fold of the data, i.e. S is 1 for data points belonging to the same class and 0 everywhere else. We compare the solutions found by SimEc architectures with a varying number of additional non-linear hidden layers in the first part of the network (while keeping the embedding layer linear as before) to show how a more complex network improves the ability to map the data into an embedding space in which the class-based similarities are retained.

MNIST The MNIST dataset contains 28 × 28 pixel images depicting handwritten digits. For our experiments we randomly subsampled 10k images from all classes, of which 80% are assigned to the training fold and the remaining 20% to the test fold (in the following plots, data points belonging to the training set are displayed transparently while the test points are opaque). As shown in Figure 2, the embeddings of the MNIST dataset created with linear kPCA and a linear similarity encoder, which uses as target similarities the linear kernel matrix, are almost identical (up to a rotation). The same holds true for the isomap embedding, which is well approximated by a non-linear SimEc with two hidden layers using the geodesic distances between the data points as targets (Figure 8 in the Appendix). When optimizing SimEcs to retain the class-based similarities (Figure 3), additional non-linear hidden layers in the feed-forward NN can improve the embedding by further separating data points belonging to different classes in tight clusters. As it can be seen, the test points (opaque) are nicely mapped into the same locations as the corresponding training points (transparent), i.e. the model learns to associate the input pixels with the class clusters only based on the imposed similarities between the training data points.

Figure 2: MNIST digits visualized in two dimensions by linear kPCA (left) and a linear SimEc trained on the linear kernel (right).
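The binary class-based target similarity matrix used in the second set of experiments can be constructed directly from the labels; a minimal sketch (our illustration):

```python
import numpy as np

def class_similarity_matrix(labels):
    """S[i, j] = 1 if training samples i and j belong to the same class, else 0."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(np.float32)

# e.g. for the MNIST training fold: S = class_similarity_matrix(y_train)
```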
Figure 3: MNIST digits visualized in two dimensions by SimEcs with an increasing number of non-linear hidden layers (linear, 1, 2, and 3 hidden layers) and the objective to retain similarities based on class membership.

20 newsgroups The 20 newsgroups dataset consists of around 18k newsgroup posts assigned to 20 different topics. We take a subset of seven categories and use the original train/test split (≈4.1k and ≈2.7k samples respectively) and remove metadata such as headers to avoid overfitting.⁵ All text documents are transformed into 46k dimensional tf-idf feature vectors, which are used as input to the SimEc and to compute the linear kernel matrix of the training fold. The embedding created with linear kPCA is again well approximated by the solution found with a corresponding linear SimEc (Figure 9 in the Appendix). Additionally, this serves as an example where traditional PCA is not an option to obtain the corresponding mapping matrix for the linear kPCA solution, as due to the high dimensionality of the input data and comparatively low number of samples, the empirical covariance matrix would be poorly estimated and too large to decompose into eigenvalues and -vectors. With the objective to retain the class-based similarities, a SimEc with a non-linear hidden layer clusters documents by their topics (Figure 4).

⁵http://scikit-learn.org/stable/datasets/twenty_newsgroups.html

Figure 4: 20 newsgroups texts (categories: comp.graphics, rec.autos, rec.sport.baseball, sci.med, sci.space, soc.religion.christian, talk.politics.guns) visualized in two dimensions by a non-linear SimEc with one hidden layer and the objective to preserve the similarities based on class membership in the embedding.

Representation learning is very prominent in the field of natural language processing (NLP). For example, word embeddings learned by neural network language models were shown to improve the performance when used as features for supervised learning tasks such as named entity recognition (NER) (Collobert et al., 2011; Turian et al., 2010). The popular word2vec model (Figure 5) learns meaningful word embeddings by considering only the words' local contexts, and thanks to its shallow architecture it can be trained very efficiently on large corpora. However, an important limiting factor of current word embedding models is that they only learn the representations for words from a fixed vocabulary. This means, if in a task we encounter a new word which was not present in the texts used for training, we can not create an embedding for this word without repeating the time consuming training procedure of the model.⁶ Additionally, word2vec, like many other approaches, only learns a single representation for every word. However, it is often the case that a single word can have multiple meanings, e.g. \"Washington\" is both the name of a US state as well as a former president. It is only the local context in which these words appear that lets humans resolve this ambiguity and identify the proper sense of the word in question. While attempts were made to improve this, they lack flexibility as they require a clustering of word contexts beforehand (Huang et al., 2012), which still does not guarantee that all possible meanings of a word have been identified beforehand in the training documents.
Other approaches require additional labels such as part-of-speech tags (Trask et al., 2015) or other lexical resources like WordNet (Rothe & Schutze, 2015) to create word embeddings which distinguish between the different senses of a word.

As a further contribution of this paper we provide a link between the successful word2vec natural language model and similarity encoders and thereby propose a new model we call context encoder (ConEc), which can efficiently learn word embeddings from huge amounts of training data and additionally make use of local contexts to create representations for out-of-vocabulary words and help distinguish between multiple meanings of words.

Figure 5: The word2vec model with negative sampling. During the training phase, 1) the embeddings of the context words (rows of W_0) are summed, 2) the rows of W_1 belonging to the target word and k noise words are selected (negative sampling), and 3) the error with respect to the binary label vector t ∈ R^{k+1} is computed and backpropagated.

⁶In practice these models are trained on such a large vocabulary that it is rare to encounter a word which does not have an embedding. However, there are still scenarios where this is the case; for example, it is unlikely that the term \"W10281545\" is encountered in a regular training corpus, but we might still want its embedding to represent a search query like \"whirlpool W10281545 ice maker part\".

Formally, word embeddings are d-dimensional vector representations learned for all N words in the vocabulary. Word2vec is a shallow model with parameter matrices W_0, W_1 ∈ R^{N×d}, which are tuned iteratively by scanning huge amounts of texts sentence by sentence (see Figure 5). Based on some context words the algorithm tries to predict the target word between them. Mathematically, this is realized by first computing the sum of the embeddings of the context words by selecting the appropriate rows from W_0. This vector is then multiplied by several rows selected from W_1: one of these rows corresponds to the target word, while the others correspond to k 'noise' words, selected at random (negative sampling). After applying a non-linear activation function, the backpropagation error is computed by comparing this output to a label vector t ∈ R^{k+1}, which is 1 at the position of the target word and 0 for all k noise words. After the training of the model is complete, the word embedding for a target word is the corresponding row of W_0.

The main principle utilized when learning word embeddings is that similar words appear in similar contexts (Harris, 1954; Melamud et al., 2015). Therefore, in theory one could compute the similarities between all words by checking how many context words any two words generally have in common (possibly weighted somehow to reduce the influence of frequent words such as 'the' and 'and'). However, such a word similarity matrix would be very large, as typically the vocabulary for which word embeddings are learned comprises several 10,000 words, making it computationally too expensive to be used with similarity encoders. But this matrix would also be quite sparse, because many words in fact do not occur in similar contexts and most words only have a handful of synonyms which could be used in their place. Therefore, we can view the negative sampling approach used for word2vec (Mikolov et al., 2013b) as an approximation of the words' context based similarities:
while the similarity of a word to itself is 1, if for one word we select k random words out of the huge vocabulary, it is very unlikely that they are similar to the target word, i.e. we can approximate their similarities with 0. This is the main insight necessary for adapting similarity encoders to be used for learning (context sensitive) word embeddings.

Figure 6: Context encoder (ConEc) architecture. The input x_i ∈ R^N consists of a context vector, which is mapped to the embedding y_i ∈ R^d; but instead of comparing the output to a full similarity vector, only the entries s'_i ∈ R^{k+1} belonging to the target word and k noise words are considered and compared to the targets s_i ∈ R^{k+1}.

Figure 6 shows the architecture of the context encoder. For the training procedure we stick very closely to the optimization strategy used by word2vec: while parsing a document, we again select a target word and its context words. As input to the context encoder network, we use a vector x_i of length N (i.e. the size of the vocabulary), which indicates the context words by non-zero values (either binary or e.g. giving lower weight to context words further away from the target word). This vector is then multiplied by a first matrix of weights W_0 ∈ R^{N×d}, yielding a low dimensional embedding y_i, comparable to the summed context embedding created as a first step when training the word2vec model. This embedding is then multiplied by a second matrix W_1 ∈ R^{d×N} to yield the output. Instead of comparing this output vector to a whole row from a word similarity matrix (as we would with similarity encoders), only k + 1 entries are selected, namely those belonging to the target word as well as k random and unrelated noise words. After applying a non-linearity we compare these entries s' ∈ R^{k+1} to the binary target vector exactly as in the word2vec model and use error backpropagation to tune the parameters.

Up to now, there are no real differences between the word2vec model and our context encoders, we have merely provided an intuitive interpretation of the training procedure and objective. The main deviation from the word2vec model lies in the computation of the word embedding for a target word after the training is complete. In the case of word2vec, the word embedding is simply the row of the tuned W_0 matrix. However, when considering the idea behind the optimization procedure, we instead propose to compute a target word's representation by multiplying W_0 with the word's average context vector. This is closer to what is being done in the training procedure and additionally it enables us to compute the embeddings for out-of-vocabulary words (assuming at least most of such a new word's context words are in the vocabulary) as well as to place more emphasis on a word's local context (which helps to identify the proper meaning of the word (Melamud et al., 2015)) by creating a weighted sum between the word's average global and local context vectors used as input to the ConEc.
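As a concrete illustration of this embedding computation, a short sketch follows. The variable names are hypothetical; W0 stands for the trained word2vec input matrix and the context vector could hold e.g. normalized co-occurrence counts.

```python
import numpy as np

def conec_embedding(W0, context_vector):
    # W0: (N, d) word2vec input embeddings; context_vector: length-N vector
    # with non-zero weights for the words seen in the target word's contexts
    cv = context_vector / max(np.abs(context_vector).max(), 1e-12)
    emb = cv @ W0                      # multiply W0 with the (average) context vector
    norm = np.linalg.norm(emb)
    return emb / norm if norm > 0 else emb
```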
With this new perspective on the model and optimization procedure, another advancement is feasible. Since the context words are merely a sparse feature vector used as input to a neural network, there is no reason why this input vector should not contain other features about the target word as well. For example, the feature vector could be extended to contain information about the word's case, part-of-speech (POS) tag, or other relevant details. While this would increase the dimensionality of the first weight matrix W_0 to include the additional features when mapping the input to the word's embedding, the training objective and therefore also W_1 would remain unchanged. These additional features could be especially helpful if details about the words would otherwise get lost in preprocessing (e.g. by lowercasing) or to retain information about a word's position in the sentence, which is ignored in a BOW approach. These extended ConEcs are expected to create embeddings which distinguish even better between the words' different senses by taking into account, for example, if the word is used as a noun or verb in the current context, similar to the sense2vec algorithm (Trask et al., 2015). However, unlike sense2vec, not multiple embeddings per term are learned; instead the dimensionality of the input vector is increased to include the POS tag of the current word as a feature.

The word embeddings learned with word2vec and context encoders are evaluated on a word analogy task (Mikolov et al., 2013a) as well as the CoNLL 2003 NER benchmark task (Tjong et al., 2003). The word2vec model used is a continuous BOW model trained with negative sampling as described above, where k = 13, the embedding dimensionality d is 200, and we use a context window of 5. The word embeddings created by the context encoders are built directly on top of the word2vec model by multiplying the original embeddings (W_0) with the respective context vectors. Code to replicate the experiments can be found online.⁷ The results of the analogy task can be found in the Appendix.⁸

⁸As it was recently demonstrated that a good performance on intrinsic evaluation tasks such as word similarity or analogy tasks does not necessarily transfer to extrinsic evaluation measures when using the word embeddings as features (Chiu et al., 2016; Linzen, 2016), we consider the performance on the NER challenge as more relevant.

Named Entity Recognition The main advantage of context encoders is that they can use local context to create out-of-vocabulary (OOV) embeddings and distinguish between the different senses of words. The effects of this are most prominent in a task such as named entity recognition (NER), where the local context of a word can make all the difference, e.g. to distinguish between the \"Chicago Bears\" (an organization) and the city of Chicago (a location). To test this, we used the word embeddings as features in the CoNLL 2003 NER benchmark task (Tjong et al., 2003). The word2vec embeddings were trained on the documents used in the training part of the task.⁹ For the context encoders we experimented with different combinations of local and global context vectors. The global context vectors were computed on only the training documents as well, i.e. just as with the word2vec model; when applied to the test documents there are some words for which no word embedding is available as they did not occur in the training texts. The local context vectors on the other hand can be computed for all words occurring in the current document for which the model should identify the named entities. When combining these local context vectors with the global ones, we always use the local context vector as is in case there is no global vector available, and otherwise compute a weighted average between the two context vectors as w_l · cv_local + (1 − w_l) · cv_global.

⁹Since this is a very small corpus, we trained word2vec for 25 iterations on these documents (afterwards the performance on the development split stopped improving significantly), while usually the model is trained in a single pass through a much larger corpus.
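A sketch of this combination of global and local context vectors (our illustration; the max-value normalization follows the normalization note near Figure 7, and the exact handling of missing global vectors is an assumption):

```python
import numpy as np

def combined_context_vector(cv_local, cv_global, w_l=0.4):
    # normalize each context vector by its respective maximum value
    cv_local = cv_local / max(cv_local.max(), 1e-12)
    if cv_global is None:                 # OOV word: no global vector available
        return cv_local
    cv_global = cv_global / max(cv_global.max(), 1e-12)
    return w_l * cv_local + (1.0 - w_l) * cv_global
```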
The different word embeddings were used as features with a logistic regression classifier trained on the labels obtained from the training part of the task, and the reported F1-scores were computed using the official evaluation script. Please note that we are using this task to show the potential of ConEc word embeddings as features in a real world task and to illustrate their advantages over the regular word2vec embeddings, and did not optimize for competitive performance on this NER challenge.

Figure 7: Results of the CoNLL 2003 NER task based on three random initializations of the word2vec model. The overall results (F1-score in % on the train, dev, and test folds) are shown on the left, where the mean performance using word2vec embeddings is considered as our baseline indicated by the dashed lines; all other embeddings are computed with context encoders using various combinations of the words' global and local context vectors. On the right, the increased performance (mean and std) on the test fold achieved by using ConEc is highlighted: Enhancing the word2vec embeddings with global context information yields a performance gain of 2.5 percentage points (A). By additionally using local context vectors to create OOV word embeddings (w_l = 0) we gain another 1.7 points (B). When using a combination of global and local context vectors (w_l = 0.4) to distinguish between the different meanings of words, the F1-score increases by another 5.1 points (C), yielding a F1-score of 39.92%, which marks a significant improvement compared to the 30.59% reached with word2vec features.

Figure 7 shows the results achieved with various word embeddings on the training, development, and test part of the CoNLL task. As it can be seen there, taking into account the local context can yield large improvements, especially on the dev and test data. Context encoders using only the global context vectors already perform better than word2vec. When using the local context vectors only where the global ones are not available (w_l = 0) we can see a jump in the development and test performance, while of course the training performance stays the same as here we have global context vectors for all words. The best performances on all folds are achieved when averaging the global and local context vectors with around w_l = 0.4 before multiplying them with the word2vec embeddings. This clearly shows that using ConEcs with local context vectors can be very beneficial as they let us compute word embeddings for out-of-vocabulary words as well as help distinguish between multiple meanings of words.

¹⁰The global context matrix is computed without taking the word itself into account (i.e. zero on the diagonal) to make the context vectors comparable to the local context vectors of OOV words where we can't count the target word either.
Both global and local context vectors are normalized by their respective maximum values, then multiplied with the length normalized word2vec embeddings and again renormalized to have unit length."}, {"section_index": "4", "section_name": "4 CONCLUSION", "section_text": "Representing intrinsically complex data is a ubiquitous challenge in data analysis. While kernel methods and manifold learning have made very successful contributions, their ability to scale is somewhat limited. Neural autoencoders offer scalable nonlinear embeddings, but their objective is to minimize the reconstruction error of the input data, which does not necessarily preserve important pairwise relations between data points. In this paper we have proposed SimEcs as a neural network framework which bridges this gap by optimizing the same objective as spectral methods, such as kPCA, for creating similarity preserving embeddings while retaining the favorable properties of autoencoders.

Similarity encoders are a novel method to learn similarity preserving embeddings and can be especially useful when it is computationally infeasible to perform the eigendecomposition of a kernel matrix, when the target similarities are obtained through an unknown process such as human similarity judgments, or when an explicit mapping function is required. To accomplish this, a feed-forward neural network is constructed to map the data into an embedding space where the original similarities can be approximated linearly.

As a second contribution we have defined context encoders, a practical extension of SimEcs, that can be readily used to enhance the word2vec model with further local context information and global word statistics. Most importantly, ConEcs allow to easily create word embeddings for out-of-vocabulary words on the spot and distinguish between different meanings of a word based on its local context.

Finally, we have demonstrated the usefulness of SimEcs and ConEcs for practical tasks such as the visualization of data from different domains and to create meaningful word embedding features for a NER task, going beyond the capabilities of traditional methods.

Future work will aim to further the theoretical understanding of SimEcs and ConEcs and explore other application scenarios where using this novel neural network architecture can be beneficial. As it is often the case with neural network models, determining the optimal architecture as well as other hyperparameter choices best suited for the task at hand can be difficult. While so far we mainly studied SimEcs based on fairly simple feed-forward networks, it appears promising to consider also deeper neural networks and possibly even more elaborate architectures, such as convolutional networks, for the initial mapping step to the embedding space, as in this manner hierarchical structures in complex data could be reflected. Note furthermore that prior knowledge as well as more general error functions could be employed to tailor the embedding to the desired application target(s)."}, {"section_index": "5", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Antje Relitz, Christoph Hartmann, Ivana Balazevic, and other anonymous reviewers for their helpful comments on earlier versions of this manuscript. Additionally, Franziska Horn acknowledges funding from the Elsa-Neumann scholarship from the TU Berlin."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Yoshua Bengio, Jean-Francois Paiement, Pascal Vincent, Olivier Delalleau, Nicolas Le Roux, and Marie Ouimet. Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering. Advances in neural information processing systems, 16:177-184, 2004.

Kerstin Bunte, Michael Biehl, and Barbara Hammer.
A general framework for dimensionality-reducing data visualization mapping. Neural Computation, 24(3):771-804, 2012.

Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493-2537, 2011.

Trevor F Cox and Michael AA Cox. Multidimensional scaling. CRC Press, 2000.

Yoav Goldberg and Omer Levy. word2vec explained: Deriving Mikolov et al.'s negative-sampling word embedding method. arXiv preprint arXiv:1402.3722, 2014.

Zellig S Harris. Distributional structure. Word, 10(2-3):146-162, 1954.

Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.

Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pp. 873-882. ACL, 2012.

Tal Linzen. Issues in evaluating semantic spaces using word analogies. arXiv preprint arXiv:1606.07736, 2016.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.

Klaus-Robert Muller, Sebastian Mika, Gunnar Ratsch, Koji Tsuda, and Bernhard Scholkopf. An introduction to kernel-based learning algorithms. Neural Networks, IEEE Transactions on, 12(2):181-201, 2001.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global Vectors for Word Representation. In EMNLP, volume 14, pp. 1532-1543, 2014.

Sascha Rothe and Hinrich Schutze. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. arXiv preprint arXiv:1507.01127, 2015.

Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.

Bernhard Scholkopf, Alexander Smola, and Klaus-Robert Muller. Nonlinear component analysis as a kernel eigenvalue problem. Neural computation, 10(5):1299-1319, 1998.

Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.

Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.

Andrew Trask, Phil Michalak, and John Liu. sense2vec - a fast and accurate method for word sense disambiguation in neural word embeddings. arXiv preprint arXiv:1511.06388, 2015.

Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for computational linguistics, pp. 384-394. Association for Computational Linguistics, 2010.

Laurens van der Maaten. Barnes-Hut-SNE. In Proceedings of the International Conference on Learning Representations, 2013.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(2579-2605):85, 2008.

Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in neural information processing systems, pp.
1177-1184, 2007.

Laurens van der Maaten. Learning a parametric embedding by preserving local structure. In International Conference on Artificial Intelligence and Statistics, pp. 384-391, 2009.

Figure 8: MNIST digits visualized in two dimensions by isomap (left) and a non-linear SimEc with two hidden layers (right).

Figure 9: 20 newsgroups dataset embedded with linear kernel PCA (left) and a corresponding linear SimEc (right).

Analogy task To show that the word embeddings created with context encoders capture meaningful semantic and syntactic relationships between words, we evaluated them on the original analogy task published together with the word2vec model (Mikolov et al., 2013a).¹¹ This task consists of many questions in the form of \"man is to king as woman is to XXX\", where the model is supposed to find the correct answer queen. This is accomplished by taking the word embedding for king, subtracting from it the embedding for man and then adding the embedding for woman. This new word vector should then be most similar (with respect to the cosine similarity) to the embedding for queen.¹² The word2vec and corresponding context encoder model are trained for ten iterations on the text8 corpus,¹³ which contains around 17 million words and a vocabulary of about 70k unique words, and the training part of the 1-billion benchmark dataset,¹⁴ which contains over 768 million words with a vocabulary of 486k unique words.¹⁵

The results of the analogy task are shown in Table 1. To capture some of the semantic relations between words (e.g. the first four task categories) it can be advantageous to use context encoders, i.e. to weight the word2vec embeddings with the words' average context vectors; however, to achieve the best results we also had to include the target word itself in these context vectors. One reason for the ConEcs' superior performance on some of the task categories but not others might be that the city and country names compared in the first four task categories only have a single sense (referring to the respective location), while the words asked for in other task categories can have multiple meanings; for example, \"run\" is used as both a verb and a noun, and in some contexts refers to the sport activity while other times it is used in a more abstract sense, e.g. in the context of someone running for president. Therefore, the results in the other task categories might improve if the words' context vectors are first clustered and then the ConEc embedding is generated by multiplying with the average of only those context vectors corresponding to the word sense most appropriate for the task category.

Table 1: Accuracy on the analogy task with mean and standard deviation computed using three random seeds when initializing the word2vec model. The best results for each category and corpus are in bold.

Category                    | text8 (10 iter): word2vec | ConEc     | 1-billion: word2vec | ConEc
capital-common-countries    | 63.8±4.7                  | 78.7±0.2  | 79.3±2.2            | 83.1±1.2
capital-world               | 34.0±2.1                  | 54.7±1.3  | 63.8±1.4            | 75.9±0.4
currency                    | 15.4±0.9                  | 19.3±0.6  | 13.3±3.6            | 14.8±0.8
city-in-state               | 28.6±1.0                  | 43.6±0.9  | 19.6±1.7            | 29.6±1.0
family                      | 79.6±1.5                  | 77.2±0.4  | 78.7±2.2            | 79.0±1.4
gram1-adjective-to-adverb   | 11.0±0.9                  | 16.6±0.7  | 12.3±0.5            | 13.3±1.1
gram2-opposite              | 24.3±3.0                  | 24.3±2.0  | 27.6±0.1            | 21.3±1.1
gram3-comparative           | 64.3±0.5                  | 63.0±1.1  | 83.7±0.9            | 76.2±1.1
gram4-superlative           | 40.3±2.1                  | 37.6±1.5  | 69.4±0.5            | 56.2±1.2
gram5-present-participle    | 30.5±1.0                  | 31.7±0.4  | 78.4±1.0            | 68.0±0.7
gram6-nationality-adjective | 70.6±1.5                  | 67.2±1.4  | 83.8±0.6            | 83.8±0.5
gram7-past-tense            | 30.5±1.8                  | 33.0±0.6  | 53.9±0.9            | 49.2±0.7
gram8-plural                | 49.8±0.3                  | 49.2±1.2  | 62.7±1.9            | 56.7±1.0
gram9-plural-verbs          | 41.0±2.5                  | 30.1±1.9  | 68.7±0.2            | 45.0±0.4
total                       | 42.1±0.6                  | 46.5±0.1  | 57.2±0.3            | 55.8±0.3
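The evaluation procedure described above is straightforward to sketch (our illustration; `emb` is assumed to be a dict mapping words to unit-length numpy vectors):

```python
import numpy as np

def analogy(emb, a, b, c):
    """Answer 'a is to b as c is to ?' via vector offsets + cosine similarity."""
    query = emb[b] - emb[a] + emb[c]
    query /= np.linalg.norm(query)
    best_word, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):            # the question words are excluded
            continue
        sim = float(query @ vec)         # cosine similarity for unit vectors
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# analogy(emb, "man", "king", "woman")  -> ideally "queen"
```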
"}]
Sks3zF9eg | [{"section_index": "0", "section_name": "TAMING THE WAVES: SINE AS ACTIVATION FUNCTION IN DEEP NEURAL NETWORKS", "section_text": "Giambattista Parascandolo, Heikki Huttunen & Tuomas Virtanen\nMost deep neural networks use non-periodic and monotonic-or at least quasiconvex- activation functions. While sinusoidal activation functions have been successfully used for specific applications, they remain largely ignored and regarded as difficult to train. In this paper we formally characterize why these networks can indeed often be difficult to train even in very simple scenarios, and describe how the presence of infinitely many and shallow local minima emerges from the architecture. We also provide an explanation to the good performance achieved on a typical classification task, by showing that for several network ar- chitectures the presence of the periodic cycles is largely ignored when the learning is successful. Finally, we show that there are non-trivial tasks-such as learn- ing algorithms--where networks using sinusoidal activations can learn faster than more established monotonic functions."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Most activation functions typically used nowadays in deep neural networks-- such as sigmoid, tanh. ReLU, Leaky ReLU, ELU, parametric ReLU, maxout---are non-periodic. Moreover, these functions. are all quasiconvex, and more specifically either monotonic (sigmoid, tanh, ReLU, Leaky ReLU ELU) or piece-wise monotonic with two monotonic segments (parametric ReLU, maxout)..\nMonotonicity makes sense from an intuitive point of view. At any layer of a network, neurons learn to respond to certain patterns, i.e. those that correlate with their weights; in case of monotonic func- tions, to a stronger positive correlation corresponds a stronger (or equal) activation, and viceversa, to. a weaker positive correlation corresponds a weaker (or equal) activation. Neurons using piece-wise monotonic functions with two monotonic segments can be viewed as two separate neurons, each equipped with one of the two monotonic segments, and therefore independently looking for either the positive or the negative correlation between the weights and the input..\nExcluding the trivial case of constant functions, periodic functions are non-quasiconvex, and there fore non-monotonic. This means that for a periodic activation function, as the correlation with the input increases the activation will oscillate between stronger and weaker activations. This apparently undesirable behavior might suggest that periodic functions might be just as undesirable as activatior functions in a typical learning task.\nNeural networks using sinusoidal activation functions have been regarded as difficult to train (La. pedes & Farber((1987)) and have been largely ignored in the last years. There are a few questions\nI.e., it can correctly classify any set of points\ngiambattista.parascandolo,heikki.huttunen,tuomas.virtanen}@tut.fi"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "But is this really the case? As shown in Section2 there are several examples from the literature. where sinusoidal functions were successfully used in neural networks. Moreover, as noted already in Gaynier & Downs (1995), networks using simple monotonic activation functions--such as sigmoids, tanh, ReLU-tend to have smaller VC dimension than those using non-monotonic functions. 
More specifically, even a network with a single hidden neuron using sinusoidal activation has infinite VC dimension.¹ There are a few questions that naturally arise and make an analysis of deep neural networks using periodic activation functions interesting:

- What makes them in theory difficult to train?
- Why do they still often manage to learn in practice?
- How does the learned representation differ from the one of similar quasi-convex functions?
- Are there tasks where periodic activation functions are more apt than quasiconvex ones?

In this paper we shed some light on these questions. In Section 2 we review relevant works on the topic of periodic activation functions. Starting from a simple example, in Section 3 we show what makes learning with sinusoidal activations a challenging task. In Section 4 we run a series of corroborative experiments, and show that there are tasks where sinusoidal activation functions outperform more established quasi-convex functions. We finally present our conclusions in Section 5.

Periodic activation functions, and more specifically sinusoids, have received a tiny fraction of the attention that the research community reserved to the more popular monotonic functions. One of the first notions of a neural network with one hidden layer using sine as activation comes from (Lapedes & Farber, 1987, pp. 25-26). The authors define it as a generalized Fourier decomposition, and while recognizing the potential in their approximation capacity, they report that in their experiments these networks often exhibited numerical problems or converged to local minima.

Some works have concentrated on mixing periodic and non-periodic activations. In Fletcher & Hinde (1994) the authors propose to learn a coefficient that weighs each activation between sine and sigmoid. More recently, in Gashler & Ashmore (2016) the authors used sinusoids, linear and ReLU activations in the first layer of a deep network for time-series prediction.

Some theoretical results were presented in Rosen-Zvi et al. (1998), where the authors analyze the learning process for networks with zero or one hidden layers, and sinusoidal activations in all layers. In Nakagawa (1995) the author shows that a chaotic neuron model using a periodic activation function has larger memory capacity than one with a monotonous function.

Concerning recurrent neural networks (RNNs), in Sopena & Alquezar (1994) and Alquezar Mancho et al. (1997) the activation function for the last fully connected layer of a simple RNN was sine instead of sigmoid, which led to higher accuracy on a next-character prediction task. Choueiki et al. (1997) and Koplon & Sontag (1997) used sinusoidal activations in a RNN for short-term load forecasting and fitting sequential input/output data respectively. Liu et al. (2016) studied the stability of RNNs using non-monotonic activation functions, trying also sinusoids along others. No work so far, to the best of the authors' knowledge, has investigated the use of periodic activation functions in convolutional neural networks (CNNs).

A separate line of research has focused on networks that closely mimic Fourier series approximations, so called Fourier series neural networks (Rafajlowicz & Pawlak, 1997; Halawa, 2008). Here the hidden layer is composed of two parts: each input node is connected to an individual set of hidden nodes using sines and cosines as activations. The input-to-hidden connections have independent and fixed weights (with integer frequencies 1...K) for each input dimension. Then, the product is computed for each possible combination of sines and cosines across dimensions. After that, only the hidden-to-output connections, which correspond to the Fourier coefficients, are learned. Despite the good theoretical properties, the number of hidden units grows exponentially with the dimensionality of the input (Halawa, 2008), rendering these networks impractical in most situations.

In Sopena et al. (1999) the authors show on several small datasets that a multi layer perceptron with one hidden layer using sinusoids improves accuracy and shortens training times compared to its sigmoidal counterpart. For similar networks, improvements are shown in Wong et al. (2002) for a small handwritten digit recognition task and in McCaughan (1997) for the validity of logical arguments.

Let us start with a definition of the framework studied. In this section we analyze a deep neural network (DNN) with one hidden layer and linear activation at the output. The network receives as input a vector x, that has an associated target y, and computes a hidden activation h and a prediction ŷ as

h = F(Wx + b_W)
ŷ = Ah + b_A

where W and A are weight matrices, b_W and b_A are bias vectors, and F is an activation function. As noted already in previous works, there is a clear interpretation of the variables in the network when F = sin, in terms of a Fourier representation. The weights W and the biases b_W are respectively the frequencies and phases of the sinusoids, while A are the amplitudes associated, and b_A the DC term. As shown in Cybenko (1989); Jones (1992), such a network can approximate all continuous functions on C(I_n), i.e. on the n-dimensional hypercube.

We can encounter issues with local minima even when learning the network parameters to solve a very simple optimization problem. Let us assume we are trying to learn the target function g(x) = sin(vx) for −m < x < m and some frequency v ∈ R. x is the input to the network, and for this analysis we treat the case of continuous and uniformly distributed data, but we argue later in the section that similar conclusions can be expected with a limited amount of randomly distributed samples. By training a network with a single hidden neuron, fixed hidden-to-output connection A = a = [1] and no biases, i.e. no phase nor DC term to learn, our problem is reduced to learning the frequency v as the weight W = [w].

Formally, we are minimizing the squared loss ∫ (sin(vx) − sin(wx))² dx. For a fixed choice of v and m, the loss landscape L(v, w, m) wrt w has the form

L(v, w, m) = ∫_{−m}^{m} (sin(vx) − sin(wx))² dx = −(2 sin(m(w − v)))/(w − v) + (2 sin(m(w + v)))/(w + v) − sin(2mw)/(2w) + c(v, m)

where c(v, m) is a constant term. As illustrated in Fig. 1, for a fixed choice of v and m, the three main terms in L(v, w, m) are three cardinal sines (or sincs): the first is negative and centered at w = v, which is the only global minimum and where the loss is 0; the second term is positive and centered at w = −v, and is the only global maximum; the third sinc is negative and centered in w = 0. The latter creates a local minimum for small values of w and large values of m and v, where the function expressed by the network is a constant sin(0) = 0.

Figure 1: The loss surface when only the frequency v = 10 of a sine needs to be learned. One of the three sincs is centered in 0, the other two at w = ±10.

We can already spot two culprits responsible for the difficulty of this learning problem:

(i) the deep local minimum centered in w = 0, produced by the sinc centered in 0, which \"traps\" small weights around zero;
(ii) the infinitely many ripples created by all three sincs, each of which is a shallow local minimum.

Moreover, the local minimum centered in zero comes from the integral

∫ sin²(wx) dx = x/2 − sin(2wx)/(4w)

which appears after expanding the square of the sum and applying linearity to the integral in L(v, w, m). Note that this term is not related to the function to be learned g(x), nor to the fact that there is a single hidden neuron, and therefore will always appear in any network with a single layer of sinusoids trained using mean squared error.

Also note that away from the main lobes the overall shape of the loss is almost flat, and therefore if the optimization starts far from the global optimum the gradients will tend to be small.

Let us now make the result more general, by including again the amplitudes and bias terms, and trying to learn a more complex function. After adding a bias/phase term to the neuron and to the target function g(x) (b and φ respectively) and a hidden-to-output weight/amplitude term (a and γ respectively), we are trying to minimize ∫ (γ sin(vx + φ) − a sin(wx + b))² dx. From the solution of the integral, the equation describing the second summand in Eq. 3 gains a term aγ cos(b + φ), while the third summand gains a term a² cos(2b). Therefore all the sincs are still present (as shown in Fig. 2), and so are the aforementioned side effects.

Figure 2: The loss surface as a function of the network parameters when trying to learn g(x) = 1 · sin(vx + 0). Cold colors are smaller values. The local minima in the ripples generated by the sincs are clearly visible.

Finally, since any function in the class that we are considering can be approximated to the desired precision, we can turn our analysis to any target function g(x). The resulting function to be minimized is again the square of the sum of multiple sinusoids. After squaring and applying linearity, every term will either be sin²(·) or sin(·)sin(·) (with some amplitude terms). The former produces a sinc centered in zero, while the latter an odd pair of sincs.

Despite all this, the problems we just described are typically not an issue for many tasks. Going back to the example with a single sinusoid to learn, we can notice that the central local minimum disappears when the frequency v is small enough that the main lobe of the rightmost sinc incorporates the main lobe of the central sinc (see Fig. 3). This happens when the data has a frequency representation with a large amount of low frequencies, which we assume to be often the case for many realistic datasets. The size of the support m also has an effect on the width and depth of the sincs. In a practical case at training time the integral is replaced by a sum, since only a limited amount of training samples is available; the sampling is typically not uniform, and there might be noise in the data. Moreover, in the analysis we assumed that the loss surface (and therefore the gradient) is calculated on the full training set, while in practice only mini-batches of training samples are typically used. All these factors can contribute to smoothing the loss surface L, potentially making the task easier to solve (see Fig. 3).

Figure 3: The loss surface when only the frequency of the target sinusoid needs to be learned, only a set of non-uniformly distributed samples is available at training time, and for a low frequency v of the target function. Note that the central local minimum has disappeared.
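This loss landscape is easy to reproduce numerically. Below is a small sketch (our illustration; v = 10 as in Figure 1, with m an arbitrary choice) that evaluates the integral by numerical quadrature and gives curves like those in Figures 1 and 3:

```python
import numpy as np
import matplotlib.pyplot as plt

v, m = 10.0, np.pi                      # target frequency and support
x = np.linspace(-m, m, 20000)           # dense grid for the quadrature
w_grid = np.linspace(-30, 30, 2000)

# L(v, w, m): squared loss integrated over [-m, m] for each candidate w
loss = [np.trapz((np.sin(v * x) - np.sin(w * x)) ** 2, x) for w in w_grid]

plt.plot(w_grid, loss)                  # global minimum at w = v, maximum at
plt.xlabel("w")                         # w = -v, local minimum around w = 0
plt.ylabel("loss")
plt.show()
```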
All these factors can contribute to smooth the loss surface L, potentially makin the task easier to solve (see Fig.3)\nFigure 3: The loss surface when only the frequency of the target sinusoid needs to be learned, only a set of non-uniformly distributed samples is available at training time, and for a low frequency v of the target function. Note that the central local minimum has disappeared..\nOn these premises, we can expect that learning will be difficult when g(x) has large high frequency. components (disjoint sincs). If network weights are initialized with small enough values, the weight might remain trapped inside the local minima of the central sincs. For large initialization the networl. might still be unable to find the global minimum due to the absence of overall curvature and th presence of shallow local minima. The optimization will be hard also if g(x) has low frequency. components and the weights are initialized with large values. We speculate that a large initializatioi of the weights, typical in the past, was the main reason why these networks were regarded as difficul. to train even with a single hidden layers..\nExtending the analysis to deeper networks using sinusoids is not as simple. Already for two hidder layers the resulting function is of the form sin(sin( )), whose integral is not known analytically ir. closed form.\nAs a consequence of the results presented in Section [3.1] the correct initialization range of the weights using sine might be very different from the one used for other activation functions. If the weights are very small, the sinusoid acts in its central linear region (Fig.4).\nWhile for inherently periodic tasks it is reasonable to assume that the network might indeed perform. better, several tasks analyzed in Section2lare not clearly periodic. None of the aformentioned works. has analyzed the possibility that the network used mostly the monotonic segment of the sinusoid around zero, which is very similar to the tanh (Fig.4). Especially in the typical training scenario-. where the input data x is normalized to have zero mean and unit variance, and the network initializa. tion is done using small weights W and zero biases-most pre-activations z = Wx + b are likely to be such thatz< /2\nIn Section4|we run a series of experiments to investigate if and how much a network trained using sine as activation actually relies on the periodic part.."}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "In this section we train several networks using sin as activation function on the MNIST and Reuters. dataset. We then investigate how much of the periodicity is actually used by replacing the activation function in the trained network with the truncated sin, (abbreviated as tr. sin), defined as.\n2 1.5 ssoJ 0.5 0 20 15 -10 -5 0 5 10 15 20 W\n(0, if -/2< x tr. sin = sin(x), if -/2x</2 (1, if x > /2\nWe also train the same networks using the monotonic function tanh for comparison. We then rur experiments on a couple of algorithmic task where the nature of the problem makes the periodicity. of the sinusoid potentially beneficial.."}, {"section_index": "4", "section_name": "4.1 MNIST", "section_text": "We experiment with the MNIST dataset, which consists of 8-bits gray-scale images, each sized 28 x 28 pixel, of hand-written digits from O to 9. The dataset has 60,000 samples for training and 10,000 samples for testing. 
"}, {"section_index": "4", "section_name": "4.1 MNIST", "section_text": "We experiment with the MNIST dataset, which consists of 8-bit gray-scale images, each sized 28 × 28 pixels, of hand-written digits from 0 to 9. The dataset has 60,000 samples for training and 10,000 samples for testing. It is simple to obtain relatively high accuracy on this dataset, given that even a linear classifier can achieve around 90% accuracy. Since the data is almost linearly separable, it is reasonable to expect that using sine as activation function will not make much use of the periodic part of the function. We test a DNN, a CNN and an RNN on this problem, using sine as activation function, and compare the results to the same network trained using tanh.

In all experiments on MNIST we scale the images linearly between 0 and 1. All networks have an output layer with 10 nodes, use softmax and are trained with cross-entropy as loss. The batch size is 128 and the optimizer used is Adam (Kingma & Ba, 2015) with the hyper-parameters proposed in the original paper.

DNN We use a DNN with 1 to 2 hidden layers, each with 256 hidden neurons. We initialize the weights in all layers using a normal distribution with standard deviation σ in the set {1, 0.1, 0.01}. The input images are flattened to vectors of size 28 · 28 = 784, which makes the task referred to as permutation invariant MNIST. The networks are trained for 20 epochs.

RNN The input images are presented as a sequence of 28 rows, each containing 28 values, starting from the top to the bottom. We use a RNN with 1 hidden layer with 128 hidden neurons. We experiment separately with vanilla RNNs and LSTMs. When the latter are used with sine, the function is used in place of the inner tanh activation. We initialize the weights in all recurrent layers using a normal distribution with standard deviation of 0.1.
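For concreteness, the DNN setup just described could look as follows in Keras (a sketch under our assumptions, not the authors' exact code):

```python
import tensorflow as tf
from tensorflow import keras

sigma = 0.1                                   # from {1, 0.1, 0.01} as in the text
init = keras.initializers.RandomNormal(stddev=sigma)

model = keras.Sequential([
    keras.layers.Dense(256, activation=tf.math.sin,   # sinusoidal activation
                       kernel_initializer=init, input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax", kernel_initializer=init),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# (x_tr, y_tr), (x_te, y_te) = keras.datasets.mnist.load_data()
# model.fit(x_tr.reshape(-1, 784) / 255.0, y_tr, batch_size=128, epochs=20)
```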
Finally, the results show that even for deeper networks with eight hidden layers, sine can learn the task quite effortlessly, and still does so while scarcely relying on the segment of the function outside [−π/2, π/2].

A somewhat similar but less evident behavior emerged from the RNNs, as shown in Table 2. Especially for the LSTMs, the network using tanh relied on larger pre-activations much more than the network using sin.

Table 1: MNIST results for DNNs. For each row, we train a network using either tanh or sin and report the results on the test data. We then replace the activation in the trained models with the one followed by ->, and directly recompute the accuracy on the test set without retraining the networks. The last column reports the percentage of hidden activations for the sin networks that exceed the central monotonic segment of the sinusoid.

Network                 | tanh | tanh -> tr. sin | tanh -> sin | sin  | sin -> tanh | sin -> tr. sin | %|z| > π/2
DNN 1-L init 0.01       | 98.0 | 98.1            | 98.0        | 98.0 | 95.2        | 95.6           | 38%
DNN 2-L init 0.01       | 98.2 | 98.2            | 81.4        | 98.2 | 95.1        | 95.6           | 27%, 48%
DNN 1-L init 0.1        | 98.1 | 98.1            | 78.1        | 98.1 | 96.1        | 96.3           | 47%
DNN 2-L init 0.1        | 98.2 | 98.2            | 81.3        | 98.1 | 96.1        | 96.5           | 29%, 47%
DNN 1-L init 1          | 95.6 | 95.5            | 10.0        | 16.9 | 13.6        | 13.8           | 86%
DNN 2-L init 1          | 92.8 | 92.5            | 10.0        | 10.0 | 10.0        | 10.0           |
DNN 1-L init 1, 1e-4 L2 | 96.8 | 92.5            | 10.0        | 97.7 | 96.0        | 96.1           | 14%
DNN 8-L init 0.1        | 97.8 | 97.8            | 59.5        | 97.0 | 92.7        | 93.7           | all ~40%

Table 2: MNIST results for RNN and LSTM.

Network       | tanh | tanh -> sin | sin  | sin -> tr. sin
RNN init 0.1  | 96.3 | 81.3        | 97.4 | 94.1
LSTM init 0.1 | 97.3 | 77.6        | 97.2 | 93.7

Similar experiments with the Reuters dataset (ref) showed the same behavior, as seen in Table 3. Each sequence of words corresponding to a data sample from the dataset is first converted to a vector of size 1000, where the i-th entry represents the number of times that the i-th most frequent word in the dataset appears in the sentence. The DNN has 128 hidden neurons, the networks are trained for 20 epochs, and test results are computed on a held-out 20% of the data.

Table 3: Reuters results for DNNs. On the training data all the original architectures (i.e. without changing the activation function after training) reach an accuracy > 90%.

Network           | tanh | tanh -> sin | sin  | sin -> tr. sin
DNN 2-L init 0.01 | 75.9 | 71.6        | 76.1 | 76.3
DNN 2-L init 0.1  | 77.0 | 76.0        | 77.3 | 77.9
DNN 2-L init 1    | 61.6 | 3.2         | 16.4 | 8.5"}, {"section_index": "5", "section_name": "4.2 LEARNING ALGORITHMIC TASKS", "section_text": "We test the networks using sine as activation on a couple of algorithmic tasks, namely sum and difference of D-digit numbers in base 10. In both tasks the data is presented as a sequence of one-hot encoded vectors, where the size of the vector at each timestep is 12: the first 10 entries correspond to the digits from 0 to 9, and the last two entries correspond to the operator symbol ('+' or '-' in case of sum or difference respectively) and the blank symbol used for padding. The length of an input sequence is D + 1 + D, while the output sequence has length D + 1. If a string is shorter than the total length, the remaining entries are padded with the blank symbol.

² The network with sin, 1-L and σ = 1 reaches an accuracy of 40% on the training data after 20 epochs and 83% after 1000 epochs. With two hidden layers it has random-guessing accuracy on the training data after 20 epochs, and after 100 epochs 100% accuracy on training data and random-guessing accuracy on test data.

For the task sum (difference) the network is expected to produce the result of the sum (difference) of two positive integers fed as input; a sketch of the encoding follows.
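A minimal sketch of how such one-hot training samples could be generated; the helper names and the convention of keeping differences non-negative are our assumptions (this variant samples addends uniformly, as in the appendix experiments):

```python
import numpy as np

def make_encoder(op):
    """12-way one-hot alphabet: ten digits, the operator ('+' or '-'), blank."""
    symbols = [str(d) for d in range(10)] + [op, ' ']
    idx = {s: i for i, s in enumerate(symbols)}
    def encode(seq, length):
        onehot = np.zeros((length, 12), dtype=np.float32)
        for t, ch in enumerate(seq.ljust(length)):   # pad with blanks
            onehot[t, idx[ch]] = 1.0
        return onehot
    return encode

def make_example(encode, D=8, op='+'):
    a, b = np.random.randint(0, 10 ** D, size=2)
    if op == '-' and b > a:
        a, b = b, a                                  # keep the result >= 0
    res = a + b if op == '+' else a - b
    x = encode(str(a)[::-1] + op + str(b)[::-1], 2 * D + 1)  # reversed digits
    y = encode(str(res)[::-1], D + 1)
    return x, y
```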
We run experiments with the number of digits D = 8. The order of the digits of each number is inverted, which was shown to improve the performance in several tasks using encoder-decoder (ENC-DEC) architectures.

We use an encoder-decoder architecture based on a vanilla RNN or an LSTM. The networks have 128 hidden units in every layer, one recurrent layer for encoding and one recurrent layer for decoding. The decoder also has a fully connected output layer with softmax at each step. The encoder 'reads' the input sequence one symbol at a time and updates its hidden state. At the end of the input sequence, the hidden state from the encoder is fed at each step, for D + 1 times, as input to the decoder. The decoder produces the output, one digit at a time.

The networks are trained for 5000 iterations¹ using Adam as optimizer, cross-entropy as loss and a batch size of 128. The feed-forward and recurrent weights are initialized using a normal distribution with the widely used schemes proposed in Glorot & Bengio (2010) and Saxe et al. (2013) respectively; we clip gradients at 1 and decay the learning rate by 10⁻³ at every iteration. Samples are generated at each iteration and we do not use a separate validation or test set, since the number of possible samples is so large that overfitting is not an issue. The accuracy for a given prediction is 1 only if every digit in the sequence is correctly predicted. The results reported in Fig. 5 are computed at every iteration on the newly generated samples before they are used for training.

¹ Here we refer to one iteration as the processing of 128 minibatches.

[Figure 5: two accuracy-vs-iterations panels, 'ENC-DEC on sum with 8 digits' and 'ENC-DEC on dif with 8 digits', comparing sin RNN, tanh RNN, sin LSTM and tanh LSTM.]
Figure 5: Accuracy curves of the ENC-DEC LSTM and RNN using sine or tanh. The number of digits for each sequence is sampled uniformly in {1, ..., D}. For uniform sampling of the addends in {0, ..., 10^D − 1}, which prevents small addends from appearing very often, the experiments are in the appendix.

The networks using sine learn the tasks faster and with higher accuracy than those using tanh. While in vanilla RNNs the difference is quite evident, the improvement is less striking for the LSTM. In all the models, switching the activation from sine to truncated sine, or from tanh to sine, brings the accuracy almost to 0, indicating that the network is effectively using the periodic part of the function."}, {"section_index": "6", "section_name": "5 CONCLUSIONS", "section_text": "Neural networks with a single hidden layer using sinusoidal activation functions have been largely ignored and regarded as difficult to train. In this paper we analyzed these networks, characterizing the loss surface, and showing in what conditions they are especially difficult to train. By looking into the hidden activations of networks successfully trained on a simple classification task, we showed that when learning is successful the networks often scarcely rely on the periodicity of the sinusoids.

Finally, we showed on a pair of simple algorithmic tasks where the periodicity is intuitively beneficial, that neural networks using sinusoidal activation functions can potentially learn faster and better than those using established monotonic functions on certain tasks. This encourages future work to investigate the use of periodic functions, the effect at different layers, and the potential of incorporating these functions in other models using quasi-convex functions."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors wish to acknowledge CSC IT Center for Science, Finland, for computational resources."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Rene Alquezar Mancho et al. Symbolic and connectionist learning techniques for grammatical inference. 1997.

GP Fletcher and CJ Hinde. Learning the activation function for the neurons in neural networks. In ICANN94, pp. 611-614. Springer, 1994.

RJ Gaynier and T Downs. Sinusoidal and monotonic transfer functions: Implications for VC dimension. Neural Networks, 8(6):901-904, 1995.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Aistats, volume 9, pp. 249-256, 2010.

Renee Koplon and Eduardo D Sontag. Using fourier-neural recurrent networks to fit sequential input/output data. Neurocomputing, 15(3):225-248, 1997.

Alan Lapedes and Robert Farber. Nonlinear signal processing using neural networks: Prediction and system modelling. Technical report, 1987.

Peng Liu, Zhigang Zeng, and Jun Wang. Multistability of recurrent neural networks with nonmonotonic activation functions and mixed time delays. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 46(4):512-523, 2016.

David B McCaughan. On the properties of periodic perceptrons. In Neural Networks, 1997., International Conference on, volume 1, pp. 188-193. IEEE, 1997.

Masahiro Nakagawa. An artificial neuron model with a periodic activation function. Journal of the Physical Society of Japan, 64(3):1023-1031, 1995.

E Rafajlowicz and M Pawlak. On function recovery by neural networks based on orthogonal expansions. Nonlinear Analysis: Theory, Methods & Applications, 30(3):1343-1354, 1997.

Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Josep M Sopena, Enrique Romero, and Rene Alquezar. Neural networks with periodic and monotonic activation functions: a comparative study in classification problems. In Artificial Neural Networks, 1999. ICANN 99. Ninth International Conference on (Conf. Publ. No. 470), volume 1, pp. 323-328. IET, 1999.

JM Sopena and R Alquezar. Improvement of learning in recurrent networks by substituting the sigmoid activation function. In ICANN94, pp. 417-420. Springer, 1994."}, {"section_index": "9", "section_name": "A APPENDIX", "section_text": "As shown on the left plots in Fig. 6, for D = 8 the networks using sin learn faster and reach higher accuracy than the networks using tanh. For the case of D = 16 and 3 recurrent layers in the encoder, sine reaches almost 80% accuracy, while tanh never takes off in the 5000 epochs of training. A similar behavior emerges on the task dif, as shown in Fig. 7, although with overall lower accuracy and with none of the networks successfully learning the task with D = 16. Surprisingly, the LSTMs almost completely fail to learn the tasks under these training settings. The two addend-sampling schemes used in the main text and in this appendix can be sketched as follows.
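This sketch contrasts the two schemes; the function name and signature are ours:

```python
import numpy as np

def sample_addend(D, uniform_digits=True):
    """Two addend-sampling schemes.

    uniform_digits=True : pick the number of digits uniformly in {1, ..., D},
                          so short addends are common (main-text figures).
    uniform_digits=False: pick the addend uniformly in {0, ..., 10**D - 1},
                          so almost all addends have close to D digits
                          (appendix figures).
    """
    if uniform_digits:
        d = np.random.randint(1, D + 1)
        return np.random.randint(0, 10 ** d)
    return np.random.randint(0, 10 ** D)
```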
[Figure 6: four panels of accuracy and loss vs. iterations for 'ENC-DEC on sum' with 8 and 16 digits, comparing sin RNN, tanh RNN, sin LSTM and tanh LSTM.]
Figure 6: Accuracy and loss curves of the ENC-DEC RNN using sine or tanh, for the task sum with 8 or 16 digits per addend. The digits are sampled uniformly.

[Figure 7: four panels of accuracy and loss vs. iterations for 'ENC-DEC on dif' with 8 and 16 digits.]
Figure 7: Accuracy and loss curves of the ENC-DEC RNN using sine or tanh, for the task dif with 8 or 16 digits per addend. The digits are sampled uniformly.

More hidden neurons  When the number of hidden units is doubled from 128 to 256, the standard LSTM using tanh as activation learns faster and reaches higher accuracy than the network trained with sine, while the vanilla RNN using sin still outperforms both the vanilla RNN and the LSTM using tanh. These results are reported in Fig. 8 for the case of D = 8 only, since for D = 16 all networks are stuck at zero accuracy. Further investigation would be required to explain how doubling the number of neurons in the tanh LSTM changed the learned representation, providing such a boost in performance.

[Figure 8: two accuracy-vs-iterations panels for sum and dif with 8 digits.]
Figure 8: Accuracy curves of the ENC-DEC RNN using sine or tanh, for the tasks sum and dif with 8 digits per addend. The digits are sampled uniformly.

Curriculum learning experiments  For the D = 16 case, we also experiment with a curriculum learning approach: while keeping the lengths of the input and output sequences fixed to 16+1+16 and 17 respectively, we start training by limiting the maximum number of digits to D = 8 and increase D by 2 every 1000 iterations, so that by the 4000th iteration D = 16 (the schedule is sketched below). As shown in Fig. 9, by using this approach the network using sine as activation function reaches an accuracy close to 1 by the end of the training. As shown by the steep drops in performance when D is increased, the network has only learned to correctly perform the operation within the number of digits it was trained upon, but it can adapt very quickly to the longer addends. The network using tanh takes more time to learn the case with D = 8 and after that does not adapt to larger numbers of digits.
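The schedule described above amounts to a single clipped linear ramp; names are ours:

```python
def max_digits(iteration, d_start=8, d_final=16, step=2, every=1000):
    """Curriculum: D starts at d_start and grows by `step` every `every`
    iterations, reaching d_final at iteration 4000 in the setting above."""
    return min(d_final, d_start + step * (iteration // every))

# max_digits(0) == 8, max_digits(1000) == 10, max_digits(4000) == 16
```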
[Figure 9: accuracy and loss vs. iterations for the curriculum experiments on sum and dif with up to 16 digits per addend.]
Figure 9: Accuracy and loss curves of the ENC-DEC RNN using sine or tanh, for the tasks sum and dif. D starts from 8 and increases by 2 every 1000 iterations until it reaches 16 digits per addend at iteration 4000."}]
r1kQkVFgl | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Integrated development environments (IDEs) are essential tools for programmers. Especially when a developer is new to a codebase, one of their most useful features is code suggestion: given a piece of code as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier or a function call, including API calls. While extensive support exists for statically-typed languages such as Java, code suggestion for dynamic languages like Python is harder and less well supported because of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code.

Recently, methods from statistical natural language processing (NLP) have been used to train code suggestion systems from code usage in large code repositories (Hindle et al., 2012; Allamanis & Sutton, 2013; Tu et al., 2014). To this end, usually an n-gram language model is trained to score possible completions. Neural language models for code suggestion (White et al., 2015; Das & Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, these standard neural language models are limited by the so-called hidden state bottleneck, i.e., all context information has to be stored in a fixed-dimensional internal vector representation. This limitation restricts such models to local phenomena and does not capture very long-range semantic relationships like suggesting calling a function that has been defined many tokens before.

To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic for crawling high-quality code repositories from GitHub. We investigate, for the first time, the use of attention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-range Python dependencies by selectively attending over the introduction of identifiers as determined by examining the Abstract Syntax Tree. The model is a form of pointer network (Vinyals et al., 2015a), and learns to dynamically choose between syntax-aware pointing for modeling long-range dependencies and free-form generation to deal with local phenomena, based on the current context.

Our contributions are threefold: (i) We release a code suggestion corpus of 41M lines of Python code crawled from GitHub, (ii) We introduce a sparse attention mechanism that captures very long-range dependencies for code suggestion of this dynamic programming language efficiently, and (iii) We provide a qualitative analysis demonstrating that this model is indeed able to learn such long-range dependencies."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.
"}, {"section_index": "2", "section_name": "2 METHODS", "section_text": "We first revisit neural language models, before briefly describing how to extend such a language model with an attention mechanism. Then we introduce a sparse attention mechanism for a pointer network that can exploit the Python abstract syntax tree of the current context for code suggestion.

Code suggestion can be approached by a language model that measures the probability of observing a sequence of tokens in a Python program. For example, for the sequence S = a_1, ..., a_N, the joint probability of S factorizes according to

$$P_\theta(S) = P_\theta(a_1) \cdot \prod_{t=2}^{N} P_\theta(a_t \mid a_{t-1}, \ldots, a_1) \tag{1}$$

where the parameters are estimated from a training corpus. Given a sequence of Python tokens, we seek to predict the next M tokens a_{t+1}, ..., a_{t+M} that maximize Equation 1:

$$\arg\max_{a_{t+1}, \ldots, a_{t+M}} P_\theta(a_1, \ldots, a_t, a_{t+1}, \ldots, a_{t+M}) \tag{2}$$

In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber, 1997). This neural language model estimates the probabilities in Equation 1 using the output vector of an LSTM at time step t (denoted h_t here) according to

$$P_\theta(a_t = \tau \mid a_{t-1}, \ldots, a_1) = \frac{\exp(v_\tau^T h_t + b_\tau)}{\sum_{\tau'} \exp(v_{\tau'}^T h_t + b_{\tau'})} \tag{3}$$

where v_τ is a parameter vector associated with token τ in the vocabulary.

Neural language models can, in theory, capture long-term dependencies in token sequences through their internal memory. However, as this internal memory has fixed dimension and can be updated at every time step, such models often only capture local phenomena. In contrast, we are interested in very long-range dependencies like referring to a function identifier introduced many tokens in the past. For example, a function identifier may be introduced at the top of a file and only used near the bottom. In the following, we investigate various external memory architectures for neural code suggestion."}, {"section_index": "3", "section_name": "2.2 ATTENTION", "section_text": "A straight-forward approach to capturing long-range dependencies is to use a neural attention mechanism (Bahdanau et al., 2014) on the previous K output vectors of the language model. Attention mechanisms have been successfully applied to sequence-to-sequence tasks such as machine translation (Bahdanau et al., 2014), question-answering (Hermann et al., 2015), syntactic parsing (Vinyals et al., 2015b), as well as dual-sequence modeling like recognizing textual entailment (Rocktaschel et al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previous output vectors. Recently, these mechanisms were applied to language modelling by Cheng et al. (2016) and Tran et al. (2016).

Formally, an attention mechanism with a fixed memory M_t ∈ R^{k×K} of K vectors m_i ∈ R^k for i ∈ [1, K] produces an attention distribution α_t ∈ R^K and a context vector c_t ∈ R^k at each time step t according to Equations 4 to 7. Furthermore, W^M, W^h ∈ R^{k×k} and w ∈ R^k are trainable parameters. Finally, note that 1_K represents a K-dimensional vector of ones.

$$M_t = [m_1 \ldots m_K] \in \mathbb{R}^{k \times K} \tag{4}$$
$$G_t = \tanh(W^M M_t + 1_K^T (W^h h_t)) \in \mathbb{R}^{k \times K} \tag{5}$$
$$\alpha_t = \mathrm{softmax}(w^T G_t) \in \mathbb{R}^{1 \times K} \tag{6}$$
$$c_t = M_t \alpha_t^T \in \mathbb{R}^{k} \tag{7}$$

For language modeling, we populate M_t with a fixed window of the previous K LSTM output vectors. To obtain a distribution over the next token, we combine the context vector c_t of the attention mechanism with the output vector h_t of the LSTM using a trainable projection matrix W^A ∈ R^{k×2k}. The resulting final output vector n_t ∈ R^k encodes the next-word distribution and is projected to the size of the vocabulary |V|. Subsequently, we apply a softmax to arrive at a probability distribution y_t ∈ R^{|V|} over the next token. This process is presented in Equations 8 and 9, where W^V ∈ R^{|V|×k} and b^V ∈ R^{|V|} are trainable parameters.

$$n_t = \tanh\left(W^A \begin{bmatrix} h_t \\ c_t \end{bmatrix}\right) \in \mathbb{R}^{k} \tag{8}$$
$$y_t = \mathrm{softmax}(W^V n_t + b^V) \in \mathbb{R}^{|V|} \tag{9}$$

The problem of the attention mechanism above is that it quickly becomes computationally expensive for large K. Moreover, attending over many memories can make training hard, as a lot of noise is introduced in early stages of optimization where the LSTM outputs (and thus the memory M_t) are more or less random. To alleviate these problems, we now turn to pointer networks and a simple heuristic for populating M_t that permits the efficient retrieval of identifiers in a large history of Python code.
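As a concrete illustration, Equations 4 to 9 can be written in a few lines of NumPy. This is only a sketch of the computation, not the TensorFlow implementation used in the experiments, and all function names are ours:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_step(M, h, W_M, W_h, w):
    """Equations 4-7: M is k x K (the K stored vectors), h is the LSTM output."""
    G = np.tanh(W_M @ M + (W_h @ h)[:, None])   # k x K, h broadcast over memory
    alpha = softmax(w @ G)                      # attention weights, length K
    c = M @ alpha                               # context vector, length k
    return alpha, c

def attention_lm_output(h, c, W_A, W_V, b_V):
    """Equations 8-9: fuse h and c, then project to the vocabulary."""
    n = np.tanh(W_A @ np.concatenate([h, c]))   # W_A is k x 2k
    return softmax(W_V @ n + b_V)               # next-token distribution
```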
"}, {"section_index": "4", "section_name": "2.3 SPARSE POINTER NETWORK", "section_text": "We develop an attention mechanism that provides a filtered view of a large history of Python tokens. At any given time step, the memory consists of context representations of the previous K identifiers introduced in the history. This allows us to model long-range dependencies found in identifier usage. For instance, a class identifier may be declared hundreds of lines of code before it is used. Given a history of Python tokens, we obtain a next-word distribution from a weighted average of the sparse pointer network for identifier reference and a standard neural language model. The weighting of the two is determined by a controller.

Formally, at time step t, the sparse pointer network operates on a memory M_t ∈ R^{k×K} of only the K previous identifier representations (e.g. function identifiers, class identifiers and so on). In addition, we maintain a vector m_t = [id_1, ..., id_K] ∈ N^K of symbol ids for these identifier representations (i.e. pointers into the large global vocabulary).

As before, we calculate a context vector c_t using the attention mechanism (Equation 7), but on a memory M_t only containing representations of identifiers that were declared in the history. Next, we obtain a pseudo-sparse distribution over the global vocabulary from

$$i_t[i] = \begin{cases} \alpha_t[j] & \text{if } m_t[j] = i \\ -C & \text{otherwise} \end{cases} \tag{10}$$

where −C is a large negative constant (e.g. −1000). In addition, we calculate a next-word distribution y_t from a standard neural language model as in Equation 9, and we use a controller to calculate a distribution λ_t ∈ R^2 over the language model and the pointer network for the final weighted next-word distribution y_t^* via

$$\lambda_t = \mathrm{softmax}\left(W^\lambda \begin{bmatrix} h_t \\ x_t \\ c_t \end{bmatrix} + b^\lambda\right) \in \mathbb{R}^{2} \tag{11}$$
$$y_t^* = [y_t \; i_t]\, \lambda_t \in \mathbb{R}^{|V|} \tag{12}$$

Here, x_t is the representation of the input token, and W^λ ∈ R^{2×3k} and b^λ ∈ R^2 are a trainable weight matrix and bias respectively. This controller is conditioned on the input, output and context representations. This means, for deciding whether to refer to an identifier or generate from the global vocabulary, the controller has access to information from the encoded next-word distribution h_t of the standard neural language model, as well as the attention-weighted identifier representations c_t from the current history.
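A NumPy sketch of Equations 10 to 12; this is illustrative only, and all names are ours:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sparse_pointer_mix(alpha, m_ids, y_lm, h, x, c, W_lam, b_lam, C=1000.0):
    """Scatter identifier attention into the vocabulary, then mix with the LM.

    alpha : attention over the K stored identifiers (Equation 7)
    m_ids : vocabulary ids of those identifiers (the vector m_t)
    y_lm  : next-word distribution of the language model (Equation 9)
    """
    logits = np.full(y_lm.shape, -C)    # -C masks all non-identifier slots
    logits[m_ids] = alpha               # pseudo-sparse distribution (Eq. 10)
    i_t = softmax(logits)
    lam = softmax(W_lam @ np.concatenate([h, x, c]) + b_lam)  # Eq. 11, W: 2 x 3k
    return lam[0] * y_lm + lam[1] * i_t                        # Eq. 12
```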
[Figure 1: a Python snippet defining class Reader with attribute base_path; bar charts compare the next-word distribution of the language model, the identifier attention, and their weighted combination, which assigns most mass to the class member base_path.]
Figure 1: Sparse pointer network for code suggestion on a Python code snippet, showing the next-word distributions of the language model and identifier attention and their weighted combination through λ.

Figure 1 overviews this process. In it, the identifier base_path appears twice, once as an argument to a function and once as a member of a class (denoted by *). Each appearance has a different id in the vocabulary and obtains a different probability from the model. In the example, the model correctly chooses to refer to the member of the class instead of the out-of-scope function argument, although, from a user point-of-view, the suggestion would be the same in both cases.

Previous work on code suggestion either focused on statically-typed languages (particularly Java) or trained on very small corpora. Thus, we decided to collect a new large-scale corpus of the dynamic programming language Python. According to the programming language popularity website Pypl (Carbonnelle, 2016), Python is the second most popular language after Java. It is also the 3rd most common language in terms of the number of repositories on the open-source code repository GitHub, after JavaScript and Java (Zapponi, 2016).

We collected a corpus of 41M lines of Python code from GitHub projects. Ideally, we would like this corpus to only contain high-quality Python code, as our language model learns to suggest code from how users write code. However, it is difficult to automatically assess what constitutes high-quality code.
Thus, we resort to the heuristic that popular code projects tend to be of good quality. There are two metrics on GitHub that we can use for this purpose, namely stars (similar to bookmarks) and forks (copies of a repository that allow users to freely experiment with changes without affecting the original repository). Similar to Allamanis & Sutton (2013) and Allamanis et al. (2014), we select Python projects with more than 100 stars, sort by the number of forks descending, and take the top 1000 projects. We then removed projects that did not compile with Python3, leaving us with 949 projects. We split the corpus on the project level into train, dev, and test. Table 1 presents the corpus statistics.

Table 1: Python corpus statistics.

Figure 2: Example of the Python code normalization. Original file on the left and normalized version on the right."}, {"section_index": "5", "section_name": "3.1 NORMALIZATION OF IDENTIFIERS", "section_text": "Unsurprisingly, the long tail of words in the vocabulary consists of rare identifiers. To improve generalization, we normalize identifiers before feeding the resulting token stream to our models. That is, we replace every identifier name with an anonymous identifier indicating the identifier group (class, variable, argument, attribute or function) concatenated with a random number that makes the identifier unique in its scope. Note that we only replace novel identifiers defined within a file. Identifier references to external APIs and libraries are left untouched. Consistent with previous corpus creation for code suggestion (e.g. Khanh Dam et al., 2016; White et al., 2015), we replace numerical constant tokens with $num$, remove comments, reformat the code, and replace tokens appearing less than five times with an $OOV$ (out of vocabulary) token.
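A toy sketch of this normalization step, assuming identifier groups are provided by a parser; all names are ours and this is not the preprocessing code used for the released corpus:

```python
import random

def normalize(tokens, kinds):
    """Rename novel in-file identifiers to anonymous group-tagged names.

    tokens : token stream of one file
    kinds  : for each token, its identifier group ('class', 'var', 'arg',
             'attribute' or 'function'), or None for everything else,
             e.g. keywords, literals and references to external APIs.
    """
    mapping, out = {}, []
    for tok, kind in zip(tokens, kinds):
        if kind is None:
            out.append(tok)                          # left untouched
        else:
            if tok not in mapping:                   # first (novel) occurrence
                mapping[tok] = '%s%d' % (kind, random.randrange(10 ** 4))
            out.append(mapping[tok])
    return out
```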
"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "Although previous work by White et al. (2015) already established that a simple neural language model outperforms an n-gram model for code suggestion, we include a number of n-gram baselines to confirm this observation. Specifically, we use n-gram models for n ∈ {3, 4, 5, 6} with Modified Kneser-Ney smoothing (Kneser & Ney, 1995) from the Kyoto Language Modelling Toolkit (Neubig, 2012).

We train the sparse pointer network using mini-batch SGD with a batch size of 30 and truncated backpropagation through time (Werbos, 1990) with a history of 20 identifier representations. We use an initial learning rate of 0.7 and decay it by 0.9 after every epoch. As additional baselines, we test a neural language model with LSTM units with and without attention. For the attention language models, we experiment with a fixed-window attention memory of the previous 20 and 50 tokens respectively, and a batch size of 75. We found during testing that the baseline models performed worse with the same batch size as the sparse pointer network of 30. We therefore chose to report the stronger results obtained with a batch size of 75.

All neural language models were developed in TensorFlow (Abadi et al., 2016) and trained using cross-entropy loss. While processing a Python source code file, the last recurrent state of the RNN is fed as the initial state of the subsequent sequence of the same file and reset between files. All models use an input and hidden size of 200, an LSTM forget gate bias of 1 (Jozefowicz et al., 2015), gradient norm clipping of 5 (Pascanu et al., 2013), and parameters randomly initialized in the interval (−0.05, 0.05). As regularizer, we use a dropout of 0.1 on the input representations. Furthermore, we use a sampled softmax (Jean et al., 2015) with a log-uniform sampling distribution and a sample size of 1000.

Table 2: Perplexity (PP), Accuracy (Acc) and Accuracy among top 5 predictions (Acc@5).

Model                  | Train PP | Dev PP | Test PP | Acc All [%] | Acc IDs [%] | Acc Other [%] | Acc@5 All [%] | Acc@5 IDs [%] | Acc@5 Other [%]
3-gram                 | 12.90    | 24.19  | 26.90   | 13.19       |             |               | 50.81         |               |
4-gram                 | 7.60     | 21.07  | 23.85   | 13.68       |             |               | 51.26         |               |
5-gram                 | 4.52     | 19.33  | 21.22   | 13.90       |             |               | 51.49         |               |
6-gram                 | 3.37     | 18.73  | 20.17   | 14.51       |             |               | 51.76         |               |
LSTM                   | 9.29     | 13.08  | 14.01   | 57.91       | 2.1         | 62.8          | 76.30         | 4.5           | 82.6
LSTM w/ Attention 20   | 7.30     | 11.07  | 11.74   | 61.30       | 21.4        | 64.8          | 79.32         | 29.9          | 83.7
LSTM w/ Attention 50   | 7.09     | 9.83   | 10.05   | 63.21       | 30.2        | 65.3          | 81.69         | 41.3          | 84.1
Sparse Pointer Network | 6.41     | 9.40   | 9.18    | 62.97       | 27.3        | 64.9          | 82.62         | 43.6          | 84.5"}, {"section_index": "7", "section_name": "5 RESULTS", "section_text": "We evaluate all models using perplexity (PP), as well as accuracy of the top prediction (Acc) and the top five predictions (Acc@5). The results are summarized in Table 2.

We can confirm that for code suggestion neural models outperform n-gram language models by a large margin. Furthermore, adding attention improves the results substantially (2.3 lower perplexity and 3.4 percentage points increased accuracy). Interestingly, this increase can be attributed to a superior prediction of identifiers, which increased from an accuracy of 2.1% to 21.4%. An LSTM with an attention window of 50 gives us the best accuracy for the top prediction. We achieve further improvements for perplexity and accuracy of the top five predictions by using a sparse pointer network that uses a smaller memory of the past 20 identifier representations."}, {"section_index": "8", "section_name": "5.1 QUALITATIVE ANALYSIS", "section_text": "Figures 3a-d show a code suggestion example involving an identifier usage. While the LSTM baseline is uncertain about the next token, we get a sensible prediction by using attention or the sparse pointer network. The sparse pointer network provides more reasonable alternative suggestions beyond the correct top suggestion.

Figures 3e-h show the use-case of referring to a class attribute declared 67 tokens in the past. Only the Sparse Pointer Network makes a good suggestion. Furthermore, the attention weights in 3i demonstrate that this model distinguished attributes from other groups of identifiers. We give a full example of a token-by-token suggestion of the Sparse Pointer Network in Figure 4 in the Appendix.
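The two accuracy metrics can be computed as in the following sketch (ours), assuming a matrix of next-token probabilities over the vocabulary:

```python
import numpy as np

def topk_accuracy(probs, targets, k=5):
    """Acc@k: fraction of positions whose true next token is among the k
    highest-probability suggestions; k=1 gives the plain accuracy (Acc).

    probs   : array of shape (positions, |V|)
    targets : array of true next-token ids, shape (positions,)
    """
    topk = np.argsort(-probs, axis=1)[:, :k]
    hits = (topk == targets[:, None]).any(axis=1)
    return float(hits.mean())
```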
Previous code suggestion work using methods from statistical NLP has mostly focused on n-gram models. Much of this work is inspired by Hindle et al. (2012) who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. Subsequently, Tu et al. (2014) improved upon Hindle et al.'s work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. Tu et al.'s idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper.

While the majority of preceding work trained on small corpora, Allamanis & Sutton (2013) created a corpus of 352M lines of Java code which they analysed with n-gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. White et al. (2015) later demonstrated that neural language models outperform n-gram models for code suggestion. They compared various n-gram models (up to nine grams), including Tu et al.'s cache model, with a basic RNN neural language model. Khanh Dam et al. (2016) compared White et al.'s basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code. Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies.

The combination of lagged attention mechanisms with language modelling is inspired by Cheng et al. (2016) who equipped LSTM cells with a fixed-length memory tape rather than a single memory cell. They achieved promising results on the standard Penn Treebank benchmark corpus (Marcus et al., 1993). Similarly, Tran et al. added a memory block to LSTMs for language modelling of English, German and Italian and outperformed both n-gram and neural language models. Their memory encompasses representations of all possible words in the vocabulary rather than providing a sparse view as we do. Attention mechanisms were previously applied to the study of source code by Allamanis et al. who used a convolutional neural network combined with an attention mechanism to generate method names from bodies.

[Figure 3: two annotated code snippets (a variable reference in Class234 and a class attribute in Class210) together with the next-token probabilities assigned by the LSTM, the LSTM with attention 50, and the Sparse Pointer Network, plus (i) the pointer attention over the identifier memory.]
Figure 3: Code suggestion example involving a reference to a variable (a-d), a long-range dependency (e-h), and the attention weights of the Sparse Pointer Network (i).

An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs), which exploit the formal grammar specifications and well-defined deterministic parsers available for source code. These were used by Allamanis & Sutton (2014) to extract idiomatic patterns from source code. A weakness of PCFGs is their inability to model context-dependent rules of programming languages, such as that variables need to be declared before being used. Maddison & Tarlow (2014) added context-aware variables to their PCFG model in order to capture such rules.

Ling et al.
(2016) recently used a pointer network to generate code from natural language descriptions Our use of a controller for deciding whether to generate from a language model or copy an identifier using a sparse pointer network is inspired by their latent code predictor. However, their inputs (textua descriptions) are short whereas code suggestion requires capturing very long-range dependencies tha we addressed by a filtered view on the memory of previous identifier representations."}, {"section_index": "9", "section_name": "CONCLUSIONS AND FUTURE WORK", "section_text": "In this paper, we investigated neural language models for code suggestion of the dynamically-typed programming language Python. We released a corpus of 41M lines of Python crawled from GitHub and compared n-gram, standard neural language models, and attention. By using attention, we observed an order of magnitude more accurate prediction of identifiers. Furthermore, we proposed a sparse pointer network that can efficiently capture long-range dependencies by only operating on a filtered view of a memory of previous identifier representations. This model achieves the lowest perplexity and best accuracy among the top five predictions. The Python corpus and the code for our modelsis released at https: //github.com/uclmr/pycodesuggest.\nThe presented methods were only tested for code suggestion within the same Python file. We are interested in scaling the approach to the level of entire code projects and collections thereof, as well as integrating a trained code suggestion model into an existing IDE. Furthermore, we plan to work on code completion, i.e., models that provide a likely continuation of a partial token, using character language models (Graves, 2013)."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Learning natural coding conventions. In Proceedings of the 22Nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, pp. 281-293, New York, NY, USA, 2014. ACM. ISBN 978 1-4503-3056-5. doi: 10.1145/2635868.2635883. URL http://doi.acm.org/10.1145/ 2635868.2635883.\nThis work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award\nMartin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin. Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajal Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan. Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zhang. Tensorflow: A system for large-scale. machine learning. CoRR, abs/1605.08695, 2016. URL http://arxiv.org/abs/1605 0 8 6 95.\nMiltiadis Allamanis and Charles Sutton. Mining idioms from source code. In Proceedings o the 22Nd ACM SIGSOFT International Symposium on Foundations of Software Engineering FSE 2014, pp. 472-483, New York, NY, USA, 2014. ACM. ISBN 978-1-4503-3056-5. do 10.1145/2635868.2635901. URL http://doi.acm.0rg/10.1145/2635868.2635901\nMiltiadis Allamanis, Hao Peng, and Charles A. Sutton. A convolutional attention network for extrem. summarization of source code. In Proceedings of the 33nd International Conference on Machin Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 2091-2100, 2016. URI http://jmlr.org/proceedings/papers/v48/allamanis16.html.\nSubhasis Das and Chinmayee Shah. 
Contextual code completion using machine learning. 2015\nAlex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013 URLhttp://arxiv.0rg/abs/1308.0850.\nKarl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, MustafaSuleyman, andPhil Blunsom. Teaching machines to read and compre hend. In Advances in Neural Information Processing Systems 28: Annual Confer ence on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal Quebec, Canada, pp. 1693-1701, 2015. URL http://papers.nips.cc/paper/ 5945-teaching-machines-to-read-and-comprehend.\nAbram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. On the naturalness of software. In Proceedings of the 34th International Conference on Software Engineering, ICSE '12, pp. 837-847, Piscataway, NJ, USA, 2012. IEEE Press. ISBN 978-1-4673-1067-3. URL http://dl.acm.org/citation.cfm?id=2337223.2337322.\nSebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1-10, Beijing, China, July 2015. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P15-1001\nTim Rocktaschel. Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunson Reasoning about entailment with neural attention. In ICLR, 2016..\nKe M. Tran, Arianna Bisazza, and Christof Monz. Recurrent memory networks for language modeling. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of th Association for Computational Linguistics: Human Language Technologies, San Diego Californic USA, June 12-17, 2016, pp. 321-331, 2016. URL http://aclweb.org/anthology/N N16/N16-1036.pdf.\nZhaopeng Tu, Zhendong Su, and Premkumar Devanbu. On the localness of software. In Proceeding. of the 22Nd ACM SIGSOFT International Symposium on Foundations of Software Engineering FSE 2014, pp. 269-280, New York, NY, USA, 2014. ACM. ISBN 978-1-4503-3056-5. doi 10.1145/2635868.2635875. URL http://doi.acm.0rg/10.1145/2635868.2635875\nOriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neura Information Processing Systems, pp. 2692-2700, 2015a\nPaul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of th IEEE. 78(10):1550-1560. 1990\nMartin White, Christopher Vendome, Mario Linares-Vasquez, and Denys Poshyvanyk. Toward deep learning software repositories. In Proceedings of the 12th Working Conference on Mining Software Repositories, MSR '15, pp. 334-345, Piscataway, NJ, USA, 2015. IEEE Press. URL http://dl.acm.0rg/citation.cfm?id=2820518.2820559\nCarlo Zapponi. Githut - programming languages and github. http: //githut. info/, 2016 URL http: //githut.info/. [Online; accessed 19-August-2016].\nRazvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 3Oth International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pp. 1310-1318, 2013. 
URL http://jmlr.org/proceedings/papers/v28/pascanu13.html."}, {"section_index": "11", "section_name": "APPENDIX", "section_text": "[Figure 4: a token-by-token suggestion trace on a Python snippet defining Class253 with attribute943 and several functions; the three columns show the token stream with first declarations in boldface, the growing memory of identifier representations, and the controller output λ per step.]

Figure 4: Full example of code suggestion with a Sparse Pointer Network. Boldface tokens on the left show the first declaration of an identifier. The middle part visualizes the memory of representations of these identifiers. The right part visualizes the output λ of the controller, which is used for interpolating between the language model (LM) and the attention of the pointer network (Att)."}]
SJAr0QFxe | [{"section_index": "0", "section_name": "DEMYSTIFYING RESNET", "section_text": "Department of Electronic Engineering, Tsinghua University, Beijing 100084, China.
lisihan13@mails.tsinghua.edu.cn
{jiantao,yjhan,tsachy}@stanford.edu"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We provide a theoretical explanation for the great performance of ResNet via the study of deep linear networks and some nonlinear variants. We show that with or without nonlinearities, by adding shortcuts that have depth two, the condition number of the Hessian of the loss function at the zero initial point is depth-invariant, which makes training very deep models no more difficult than shallow ones. Shortcuts of higher depth result in an extremely flat (high-order) stationary point initially, from which the optimization algorithm is hard to escape. The 1-shortcut, however, is essentially equivalent to no shortcuts. Extensive experiments are provided accompanying our theoretical results. We show that initializing the network to small weights with 2-shortcuts achieves significantly better results than random Gaussian (Xavier) initialization, orthogonal initialization, and shortcuts of deeper depth, from various perspectives ranging from final loss, learning dynamics and stability, to the behavior of the Hessian along the learning process."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Residual network (ResNet) was first proposed in He et al. (2015a) and extended in He et al. (2016). It followed a principled approach to add shortcut connections every two layers to a VGG-style network (Simonyan & Zisserman, 2014). The new network becomes easier to train, and achieves both lower training and test errors. Using the new structure, He et al. (2015a) managed to train a network with 1001 layers, which was virtually impossible before. Unlike Highway Network (Srivastava et al., 2015a;b), which not only has shortcut paths but also borrows the idea of gates from LSTM (Sainath et al., 2013), ResNet does not have gates. Later, He et al. (2016) found that by keeping a clean shortcut path, residual networks will perform even better.

Many attempts have been made to improve ResNet to a further extent. "ResNet in ResNet" (Targ et al., 2016) adds more convolution layers and data paths to each layer, making it capable of representing several types of residual units. "ResNets of ResNets" (Zhang et al., 2016) construct multi-level shortcut connections, which means there exist shortcuts that skip multiple residual units. Wide Residual Networks (Zagoruyko & Komodakis, 2016) make the residual network shorter but wider, and achieve state-of-the-art results on several datasets while using a shallower network. Moreover, some existing models are also reported to be improved by shortcut connections, including Inception v4 (Szegedy et al., 2016), in which shortcut connections make the deep network easier to train.

Why are residual networks so easy to train? He et al. (2015a) suggests that layers in residual networks are learning residual mappings, making them easier to represent identity mappings, which prevents the networks from degradation when the depths of the networks increase. However, Veit et al. (2016) claims that ResNets are actually ensembles of shallow networks, which means they do not solve the problem of training deep networks completely.
We propose a theoretical explanation for the great performance of ResNet. We concur with He et al. (2015a) that the key contribution of ResNet should be some special structure of the loss function that makes training very deep models no more difficult than shallow ones. Analysis, however, seems non-trivial. Quoting He et al. (2015a):

"But if F has only a single layer, Eqn. (1) is similar to a linear layer: y = W_1 x + x, for which we have not observed advantages."

"Deeper non-bottleneck ResNets (e.g., Fig. 5 left) also gain accuracy from increased depth (as shown on CIFAR-10), but are not as economical as the bottleneck ResNets. So the usage of bottleneck designs is mainly due to practical considerations. We further note that the degradation problem of plain nets is also witnessed for the bottleneck designs."

Their empirical observations are inspiring. First, the 1-shortcuts mentioned in the first paragraph do not work. Second, noting that the non-bottleneck ResNets have 2-shortcuts, but the bottleneck ResNets use 3-shortcuts, one sees that shortcuts with depth three also do not work. Hence, a reasonable theoretical explanation must be able to distinguish the 2-shortcut from shortcuts of other depths, and clearly demonstrate why the 2-shortcuts are special and are able to ease the optimization process so significantly for deep models, while shortcuts of other depths may not do the job."}, {"section_index": "3", "section_name": "2 MAIN RESULTS", "section_text": "Aiming at explaining the performance of 2-shortcuts, we need to eliminate other variables that may contribute to the success of ResNet. Indeed, one may argue that the deep structure of ResNet may give it better representation power (lower approximation error), which contributes to lower training errors. To eliminate this effect, we focus on deep linear networks, where deeper models do not have better approximation properties. The special role of 2-shortcuts naturally arises in the study.

Our work reveals that non-degenerate depth-invariant initial condition numbers, a unique property of residual networks with 2-shortcuts, contributed to the success of ResNet. In fact, in a linear network that will be defined rigorously later, the condition number of the Hessian of the Frobenius loss function at the zero initial point is

$$\mathrm{cond}(H) = \sqrt{\mathrm{cond}\big((\Sigma^{XX} - \Sigma^{YX})^T (\Sigma^{XX} - \Sigma^{YX})\big)}, \tag{1}$$

which is independent of the number of layers. Here Σ^{XX} and Σ^{YX} denote the input-input and the output-input correlation matrices, defined in Section 3.3. The condition number of a possibly non-PSD matrix is defined as:

Definition 1. The condition number of a matrix A is defined as

$$\mathrm{cond}(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}, \tag{2}$$

where σ_max(A) and σ_min(A) are the maximum and minimum of the singular values of A. In particular, if A is normal, i.e. A^T A = A A^T, the definition can be simplified to

$$\mathrm{cond}(A) = \frac{|\lambda(A)|_{\max}}{|\lambda(A)|_{\min}}. \tag{3}$$

¹ The equivalence of Equation (2) and Equation (3) can be proved easily using the eigenvalue decomposition of A. Note that as Hessians are symmetric (if all the second derivatives are continuous), we will use Equation (3) to represent their condition numbers. As the |λ|_min of a Hessian is usually very unstable, we calculated |λ|_max / |λ|_(0.1) condition numbers instead, where |λ|_(0.1) is the 10th percentile of the absolute values of the eigenvalues. Note that the Hessian at the zero initial point for 2-shortcut ResNet also has a nice structure of spectrum; see Theorem 1 for details.

Moreover, the zero initial point for ResNet with 2-shortcuts is in fact a so-called strict saddle point (Ge et al., 2015), which is proved to be easy to escape from.

Why do shortcuts of other depths not work? We show that the Hessian at the zero initial point for the 1-shortcut ResNet has a condition number growing unboundedly for deep nets. As is well known in convex optimization theory, large condition numbers can have an enormous adversarial impact on the convergence of first-order methods (Nemirovski, 2005). Hence, it is quite clear that starting training at a point with a huge condition number would make the algorithm very difficult to escape from the initial point, making 1-shortcut ResNet no better than conventional approaches.

For shortcuts with depth deeper than two, the Hessian at the zero initial point is a zero matrix, making it a higher-order stationary point. Intuitively, the higher order the stationary point is, the harder it is to escape from it. Indeed, this is supported both in theory (Anandkumar & Ge, 2016) and by our experiments.

One may still ask: why are we interested in the Hessian at the zero initial point? It is because, in order for the outputs of deep neural networks not to explode, the singular values of the mapping of each layer are not supposed to deviate too much from one. Indeed, ensuring this property is extremely challenging. However, by design, ResNet with shortcuts already has an identity mapping every few layers, which forces the mappings inside the shortcuts to have small operator norms. Hence, analyzing the network at the zero initial point gives a decent characterization of the search environment of the optimization algorithm.

On the other hand, our experiments reveal that orthogonal initialization (Saxe et al., 2013) is suboptimal. Although better than Xavier initialization (Glorot & Bengio, 2010), the initial condition numbers of the networks still explode as the networks become deeper, which means the networks are still initialized on "bad" submanifolds that are hard to optimize using gradient descent.
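For illustration, both the percentile convention for Equation (3) and the depth-invariant prediction of Equation (1) can be computed as follows. This NumPy sketch is ours and assumes X and Y store the training samples as columns, as in the definitions of Section 3.3:

```python
import numpy as np

def cond_sym(H, pct=10):
    """Condition number of a symmetric Hessian via Equation (3), using the
    10th percentile of |eigenvalues| in place of the unstable minimum."""
    lam = np.abs(np.linalg.eigvalsh(H))
    return lam.max() / np.percentile(lam, pct)

def predicted_initial_cond(X, Y):
    """Depth-invariant value of Equation (1) for 2-shortcut linear networks."""
    m = X.shape[1]
    Sxx = X @ X.T / m          # input-input correlation matrix
    Syx = Y @ X.T / m          # output-input correlation matrix
    A = (Sxx - Syx).T @ (Sxx - Syx)
    s = np.linalg.svd(A, compute_uv=False)
    return np.sqrt(s.max() / s.min())
```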
However, this explanation is not sufficient to prove that the training difficulty of orthogonally initialized networks is depth-invariant; it only gives an intuition for why orthogonal initialization performs better than scaled Gaussian initialization.

Thus, we use deep linear networks to study the effect of shortcut connections. After adding the shortcuts, the overall model is still linear and the global minimum does not change."}, {"section_index": "5", "section_name": "3.2 NETWORK STRUCTURE", "section_text": "We first generalize a linear network by adding shortcuts to it to make it a linear residual network. We organize the network into R residual units. The r-th residual unit consists of $L_r$ layers whose weights are $W_{r,1}, \ldots, W_{r,L_r-1}$, denoted as the transformation path, as well as a shortcut $S_r$ connecting from the first layer to the last one, denoted as the shortcut path. The input-output mapping can be written as

$$y = \prod_{r=1}^{R}\Big(\prod_{l=1}^{L_r-1} W_{r,l} + S_r\Big)\, x = W x.$$

Instead of analyzing the general form, we concentrate on a special kind of linear residual network where all the residual units are the same.

Definition 2. A linear residual network is called an n-shortcut linear network if

1. its layers have the same dimension (so that $d_x = d_y$);
2. its shortcuts are identity matrices;
3. its shortcuts have the same depth n.

The input-output mapping for such a network becomes

$$y = \prod_{r=1}^{R}\Big(\prod_{l=1}^{n} W_{r,l} + I_{d_x}\Big)\, x = W x.$$

Then we add some activation functions to the networks. We concentrate on the case where activation functions are on the transformation paths, which is also the case in the latest ResNet (He et al., 2016).

Definition 3. An n-shortcut linear network becomes an n-shortcut network if element-wise activation functions $\sigma_{\mathrm{pre}}(x)$, $\sigma_{\mathrm{mid}}(x)$, $\sigma_{\mathrm{post}}(x)$ are added on the transformation paths, where on a transformation path, $\sigma_{\mathrm{pre}}(x)$ is added before the first weight matrix, $\sigma_{\mathrm{mid}}(x)$ is added between two weight matrices and $\sigma_{\mathrm{post}}(x)$ is added after the last weight matrix.

[Figure 1 shows a residual unit of a 2-shortcut network with the pre, mid and post positions marked around $W_1$ and $W_2$.]

Figure 1: An example of the different positions for nonlinearities in a residual unit of a 2-shortcut network.

Note that n-shortcut linear networks are special cases of n-shortcut networks, where all the activation functions are identity mappings."}, {"section_index": "6", "section_name": "3.3 OPTIMIZATION", "section_text": "We denote the collection of all the variable weight parameters in an n-shortcut linear network as w. Consider m training samples $\{x^\mu, y^\mu\}$, $\mu = 1, \ldots, m$. Using Frobenius loss, for an n-shortcut linear network, we define the loss function as follows:

$$L(w) = \frac{1}{2m}\sum_{\mu=1}^{m} \lVert y^\mu - W x^\mu \rVert_2^2 = \frac{1}{2m}\lVert Y - W X \rVert_F^2,$$

where $x^\mu$, $y^\mu$ are the $\mu$-th columns of X, Y, and $\lVert\cdot\rVert_F$ denotes the Frobenius norm.
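To make Definition 2 and the loss concrete, a minimal NumPy sketch that composes the overall mapping W of an n-shortcut linear network and evaluates L(w) (variable names are our own, not the paper's code):

```python
import numpy as np

def overall_map(units, d):
    # units: list of residual units, each a list of n matrices W_{r,1..n}.
    # Each unit computes (W_{r,n} @ ... @ W_{r,1} + I), and units compose.
    W = np.eye(d)
    for unit in units:
        path = np.eye(d)
        for W_rl in unit:
            path = W_rl @ path
        W = (path + np.eye(d)) @ W
    return W

def frobenius_loss(units, X, Y):
    # L(w) = (1 / 2m) * ||Y - W X||_F^2
    m = X.shape[1]
    W = overall_map(units, X.shape[0])
    return 0.5 / m * np.linalg.norm(Y - W @ X, 'fro') ** 2
```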
Using gradient descent with learning rate $\alpha$, the weight updating rule is

$$\Delta W_{r,l} = \alpha\, \big(W^{r}_{\mathrm{after}} W^{r,l}_{\mathrm{after}}\big)^\top\, \big(\Sigma^{YX} - W \Sigma^{XX}\big)\, \big(W^{r,l}_{\mathrm{before}} W^{r}_{\mathrm{before}}\big)^\top,$$

where $\Sigma^{XX}$ and $\Sigma^{YX}$ denote the input-input and the output-input correlation matrices, defined as

$$\Sigma^{XX} = \frac{1}{m}\sum_{\mu=1}^{m} x^\mu (x^\mu)^\top, \qquad \Sigma^{YX} = \frac{1}{m}\sum_{\mu=1}^{m} y^\mu (x^\mu)^\top.$$

Here $W^{r}_{\mathrm{before}}$ and $W^{r}_{\mathrm{after}}$ denote the linear mappings before and after the r-th residual unit, and $W^{r,l}_{\mathrm{before}}$ and $W^{r,l}_{\mathrm{after}}$ denote the linear mappings before and after $W_{r,l}$ within the transformation path of the r-th residual unit. In other words, the overall transformation can be represented as

$$W = W^{r}_{\mathrm{after}}\,\big(W^{r,l}_{\mathrm{after}}\, W_{r,l}\, W^{r,l}_{\mathrm{before}} + I_{d_x}\big)\, W^{r}_{\mathrm{before}}.$$"}, {"section_index": "7", "section_name": "4.1 INITIAL POINT PROPERTIES", "section_text": "Before we analyze the initial point properties of n-shortcut networks, we have to choose the way to initialize them. ResNet uses MSRA initialization (He et al., 2015b). It is a kind of scaled Gaussian initialization that tries to keep the variances of signals along a transformation path, which is also the idea behind Xavier initialization (Glorot & Bengio, 2010). However, because of the shortcut paths, the output variance of the entire network will actually explode as the network becomes deeper. Batch normalization units partly solved this problem in ResNet, but still they cannot prevent the large output variance in a deep network.

A simple idea is to zero initialize all the weights, so that the output variances of residual units stay the same along the network. It is worth noting that, as found in He et al. (2015a), deeper ResNets have smaller magnitudes of layer responses. This phenomenon has been confirmed in our experiments: as illustrated in Figure 2 and Figure 3, the deeper a residual network is, the smaller its average Frobenius norm of weight matrices, both during the training process and when the training ends. Also, Hardt & Ma (2016) prove that if all the weight matrices have small norms, a linear residual network will have no critical points other than the global optimum.

All this evidence indicates that zero is special in a residual network: as the network becomes deeper, the training tends to end up around it. Thus, we look into the Hessian at zero. As zero is a saddle point, in our experiments we use zero initialization with small random perturbations to escape from it: we first Xavier initialize the weight matrices, and then multiply them by a small constant (0.01).

We begin with the definition of a k-th order stationary point.

Definition 4. Suppose function f(x) admits a k-th order Taylor expansion at point $x_0$. We say that the point $x_0$ is a k-th order stationary point of f(x) if the corresponding k-th order Taylor expansion of f(x) at $x = x_0$ is a constant: $f(x) = f(x_0) + o(\lVert x - x_0\rVert_2^k)$.

Now we state our main theorem, whose proof can be found in Appendix A.

Theorem 1. Suppose $\sigma'_{\mathrm{pre}}(0)$, $\sigma'_{\mathrm{mid}}(0)$ and $\sigma'_{\mathrm{post}}(0)$ exist. For an n-shortcut network, at the zero initial point:

1. if $n \ge 2$, it is an $(n-1)$th-order stationary point; in particular, if $n \ge 3$, the Hessian is a zero matrix;

2. if $n = 2$, the Hessian is a block diagonal matrix whose diagonal blocks have the form

$$\begin{pmatrix} 0 & A^\top \\ A & 0 \end{pmatrix},$$

so that

$$\mathrm{cond}(H) = \sqrt{\mathrm{cond}\big((\Sigma^{X\sigma_{\mathrm{pre}}(X)} - \Sigma^{Y\sigma_{\mathrm{pre}}(X)})^\top(\Sigma^{X\sigma_{\mathrm{pre}}(X)} - \Sigma^{Y\sigma_{\mathrm{pre}}(X)})\big)},$$

which is independent of the depth of the network;

3. if $n = 1$, the Hessian is a block Toeplitz matrix of the form

$$H = \begin{pmatrix} B & A^\top & \cdots & A^\top \\ A & B & \cdots & A^\top \\ \vdots & & \ddots & \vdots \\ A & A & \cdots & B \end{pmatrix},$$

where A and B only depend on the training set and the activation functions.
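Item 1 can be checked numerically with finite differences: perturbing a single weight at the zero initial point changes the loss of a 1-shortcut linear network but not of a 2-shortcut one. A minimal sketch (sizes and random data are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, R = 4, 200, 3
X = rng.standard_normal((d, m))
Y = rng.standard_normal((d, m))

def loss(units):
    # units: list of residual units, each a list of n weight matrices.
    W = np.eye(d)
    for unit in units:
        path = np.eye(d)
        for Wl in unit:
            path = Wl @ path
        W = (path + np.eye(d)) @ W
    return 0.5 / m * np.linalg.norm(Y - W @ X, 'fro') ** 2

for n in (1, 2):
    units = [[np.zeros((d, d)) for _ in range(n)] for _ in range(R)]
    base = loss(units)
    eps = 1e-5
    units[0][0][0, 0] = eps                 # perturb one weight entry
    print(n, (loss(units) - base) / eps)    # n=1: nonzero; n=2: exactly 0
```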
"}, {"section_index": "8", "section_name": "NEW", "section_text": "Figure 2: The average Frobenius norms of ResNets of different depths during the training process. The pre-ResNet implementation in https://github.com/facebook/fb.resnet.torch is used. The learning rate is initialized to 0.1, decreased to 0.01 at the 81st epoch (marked with circles) and decreased to 0.001 at the 122nd epoch (marked with triangles). Each model is trained for 200 epochs."}, {"section_index": "9", "section_name": "NEW", "section_text": "Figure 3: The average Frobenius norms of 2-shortcut networks of different depths during the training process when zero initialized. Left: without nonlinearities. Right: with ReLUs at mid positions."}, {"section_index": "10", "section_name": "4.2 LEARNING DYNAMICS", "section_text": "The assumptions of Theorem 1 hold for most activation functions including tanh, symmetric sigmoid and ReLU (Nair & Hinton, 2010). Note that although ReLU does not have derivatives at zero, one may use a local polynomial approximation to yield $\sigma^{(k)}(0)$, $1 \le k \le \max(n-1, 2)$.

To get intuitive explanations of the theorem, imagine changing parameters in an n-shortcut network. One has to change at least n parameters to make any difference in the loss. So zero is an (n-1)th-order stationary point. Notice that the higher the order of a stationary point, the more difficult it is for a first-order method to escape from it.

On the other hand, if n = 2, one will have to change two parameters in the same residual unit but different weight matrices to affect the loss, leading to a clear block diagonal Hessian.

The Hessian at the zero initial point for the 1-shortcut linear network follows a block Toeplitz structure, which has been well studied in the literature. In particular, its condition number tends to explode as the number of layers increases (Gray, 2006).

For shortcuts of depth deeper than two, the Hessian at the zero initial point is a zero matrix, making it a higher-order stationary point. Intuitively, the higher the order of the stationary point, the harder it is to escape from it. Indeed, this is supported both in theory (Anandkumar & Ge, 2016) and by our experiments.

To understand the gradient descent update rule better, we can take n-shortcut linear networks to two extremes. First, when n = 1, let $V_{r,1} = W_{r,1} + I_d$, $r = 1, \ldots, R$. As $I_d$ is a constant, we have

$$\Delta V_{r,1} = \alpha \Big(\prod_{r'=r+1}^{R} V_{r',1}\Big)^\top \Big(\Sigma^{YX} - \Big(\prod_{r'=1}^{R} V_{r',1}\Big)\, \Sigma^{XX}\Big) \Big(\prod_{r'=1}^{r-1} V_{r',1}\Big)^\top,$$

which can be seen as a linear network with identity initialization, a special case of orthogonal initialization, if the original 1-shortcut network is zero initialized.

On the other side, if the number of shortcut connections is R = 1, the shortcut will only change the distribution of the output training set from Y to Y - X. These two extremes are illustrated in Figure 4.

Figure 4: Equivalents of two extremes of n-shortcut linear networks. 1-shortcut linear networks are equivalent to linear networks with identity initialization, while skip-all shortcuts only change the effective dataset outputs."}, {"section_index": "11", "section_name": "4.3 LEARNING RESULTS", "section_text": "The optimal weights of an n-shortcut linear network can be easily computed via least squares, which leads to

$$W^* = Y X^\top (X X^\top)^{-1} = \Sigma^{YX} (\Sigma^{XX})^{-1},$$

and the minimum of the loss function is

$$L_{\min} = \frac{1}{2m}\big\lVert Y - \Sigma^{YX}(\Sigma^{XX})^{-1} X \big\rVert_F^2,$$

where $\lVert\cdot\rVert_F$ denotes the Frobenius norm and $(\Sigma^{XX})^{-1}$ denotes any kind of generalized inverse of $\Sigma^{XX}$.
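A minimal sketch of this closed-form optimum, using a pseudo-inverse as one choice of generalized inverse (function and variable names are our own):

```python
import numpy as np

def global_optimum(X, Y):
    # Least-squares optimum W* = Sigma^{YX} (Sigma^{XX})^{-1} and the
    # corresponding minimum Frobenius loss L_min.
    m = X.shape[1]
    sigma_xx = X @ X.T / m
    sigma_yx = Y @ X.T / m
    W_star = sigma_yx @ np.linalg.pinv(sigma_xx)
    L_min = 0.5 / m * np.linalg.norm(Y - W_star @ X, 'fro') ** 2
    return W_star, L_min
```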
So given a training set, we can pre-compute its $L_{\min}$ and use it to evaluate any n-shortcut linear network."}, {"section_index": "12", "section_name": "5 EXPERIMENTS", "section_text": "We compare networks with Xavier initialization (Glorot & Bengio, 2010), networks with orthogonal initialization (Saxe et al., 2013) and 2-shortcut networks with zero initialization. The training dynamics of 1-shortcut networks are similar to those of linear networks with orthogonal initialization in our experiments. Setup details can be found in Appendix B.

As can be seen in Figure 5, 2-shortcut linear networks have constant initial condition numbers as expected. On the other hand, when using Xavier or orthogonal initialization in linear networks, the initial condition numbers go to infinity as the depth goes to infinity, making the networks hard to train. This also explains why orthogonal initialization is helpful for a linear network, as its initial condition number grows more slowly than with Xavier initialization.

Figure 5: Initial condition numbers of Hessians for different linear networks as the depths of the networks increase. Means and standard deviations are estimated based on 10 runs."}, {"section_index": "13", "section_name": "5.2 LEARNING DYNAMICS", "section_text": "Having a good beginning does not guarantee an easy trip on the loss surface. In order to depict the loss surfaces encountered from different initial points, we plot the maxima and 10th percentiles (instead of minima, as they are very unstable) of the absolute values of the Hessians' eigenvalues at different losses.

As shown in Figure 6 and Figure 7, the condition numbers of 2-shortcut networks at different losses are always smaller, especially when the loss is large. Also, notice that the condition numbers roughly evolve to the same value for both orthogonal and 2-shortcut linear networks. This may be explained by the fact that the minimizers, as well as any points near them, have similar condition numbers.

Figure 6: Maxima and 10th percentiles of the absolute values of eigenvalues at different losses when the depth is 16. For each run, eigenvalues at different losses are calculated using linear interpolation.

Figure 7: Maxima and 10th percentiles of the absolute values of eigenvalues at different losses when the depth is 16. Eigenvalues at different losses are calculated using linear interpolation.

Another observation is the change in the ratio of negative eigenvalues. The index (ratio of negative eigenvalues) is an important characteristic of a critical point. Usually, for the critical points of a neural network, the larger the loss, the larger the index (Dauphin et al., 2014). In our experiments, the index of a 2-shortcut network is always smaller, and drops dramatically at the beginning, as shown in Figure 8, left. This might make the networks tend to stop at low critical points. This is because the initial point is near a saddle point, thus it tends to go towards negative curvature directions, eliminating some negative eigenvalues at the beginning. This phenomenon matches the observation that the gradient reaches its maximum when the index drops dramatically, as shown in Figure 8, right.

Figure 8: Left: ratio of negative eigenvalues at different losses when the depth is 16. For each run, indexes at different losses are calculated using linear interpolation. Right: the dynamics of the gradient and index of a 2-shortcut linear network in a single run. The gradient reaches its maximum while the index drops dramatically, indicating movement toward negative curvature directions.
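The two spectral diagnostics used in this section, the stabilized condition number $|\lambda|_{\max}/|\lambda|_{(0.1)}$ and the index, can be computed from a Hessian as follows (a minimal NumPy sketch; the function name is our own):

```python
import numpy as np

def spectrum_stats(H):
    # H: (symmetric) Hessian. Returns the stabilized condition number
    # |lambda|_max / |lambda|_(0.1) and the index, i.e., the ratio of
    # negative eigenvalues.
    lam = np.linalg.eigvalsh(H)
    abs_lam = np.abs(lam)
    cond_01 = abs_lam.max() / np.percentile(abs_lam, 10)
    index = np.mean(lam < 0)
    return cond_01, index
```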
"}, {"section_index": "14", "section_name": "5.3 LEARNING RESULTS", "section_text": "We run different networks for 1000 epochs using different learning rates on a log scale, and compare the average final losses obtained with the optimal learning rates.

Figure 9: Left: optimal final losses of different linear networks. Right: corresponding optimal learning rates. When the depth is 96, the final losses of Xavier with different learning rates are basically the same, so the optimal learning rate is omitted as it is very unstable.

Figure 9 shows the results for linear networks. Just like their depth-invariant initial condition numbers, the final losses of 2-shortcut linear networks stay close to optimal as the networks become deeper. Higher learning rates can also be applied, resulting in fast learning in deep networks.

Then we add ReLUs to the mid positions of the networks. To make a fair comparison, the numbers of ReLU units in different networks are the same when the depths are the same, so 1-shortcut and 3-shortcut networks are omitted. The result is shown in Figure 10."}, {"section_index": "15", "section_name": "NEW", "section_text": "Figure 10: Left: optimal final losses of different networks with ReLUs in mid positions. Right: corresponding optimal learning rates. Note that as it is hard to compute the minimum losses with ReLUs, we plot log10(final loss) instead of log10(final loss - optimal loss). When the depth is 64, the final losses of Xavier-ReLU and orthogonal-ReLU with different learning rates are basically the same, so the optimal learning rates are omitted as they are very unstable.

Note that because of the nonlinearities, the optimal losses vary for different networks with different depths. It is usually thought that deeper networks can represent more complex models, leading to smaller optimal losses. However, our experiments show that linear networks with Xavier or orthogonal initialization have difficulty finding these optimal points, while 2-shortcut networks find these optimal points as easily as they did without nonlinear units.

Further studies should concentrate on the behavior of shortcut connections in convolutional networks, as well as the influence of batch normalization units (Ioffe & Szegedy, 2015) in ResNet.
Meanwhile, it would be very interesting to extend the insights obtained in this paper to recurrent neural networks such as LSTMs (Sainath et al., 2013)."}, {"section_index": "16", "section_name": "REFERENCES", "section_text": "Anima Anandkumar and Rong Ge. Efficient approaches for escaping higher order saddle points in non-convex optimization. arXiv preprint arXiv:1602.05908, 2016.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015a.

Anna Choromanska, Yann LeCun, and Gerard Ben Arous. Open problem: The landscape of the loss surfaces of multilayer networks. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, volume 6, pp. 1756-1760, 2015b.

Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.

Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pp. 797-842, 2015.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249-256, 2010.

Robert M. Gray. Toeplitz and circulant matrices: A review. Now Publishers Inc, 2006.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015b.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Arkadi Nemirovski. Efficient methods in convex programming. 2005.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015b.

Andreas Veit, Michael Wilber, and Serge Belongie. Residual networks are exponential ensembles of relatively shallow networks. arXiv preprint arXiv:1605.06431, 2016.

Ke Zhang, Miao Sun, Tony X. Han, Xingfang Yuan, Liru Guo, and Tao Liu. Residual networks of residual networks: Multilevel residual networks. arXiv preprint arXiv:1608.02908, 2016."}, {"section_index": "17", "section_name": "A PROOFS OF THEOREMS", "section_text": "Definition 5. The elements of the Hessian of an n-shortcut network are defined as

$$H_{\mathrm{ind}(w_1),\,\mathrm{ind}(w_2)} = \frac{\partial^2 L}{\partial w_1\, \partial w_2},$$

where L is the loss function, and the index ind(.) is ordered lexicographically following the four indices (r, l, j, i) of the weight variable w. In other words, the priority decreases along the index of shortcuts, the index of the weight matrix inside shortcuts, the index of the column, and the index of the row.

Proof.
As all the residual units except units $r_1$ and $r_2$ are identity transformations, reordering residual units while preserving the order of units $r_1$ and $r_2$ will not affect the overall transformation.

Note that the collection of all the weight variables in the n-shortcut network is denoted as w. We study the behavior of the loss function in the vicinity of w = 0.

1. Using Lemma 1, for an n-shortcut network, at zero, all the k-th order partial derivatives of the loss function are zero, where k ranges from 1 to n - 1. Hence, the initial point zero is an (n-1)th-order stationary point of the loss function.

2. Consider the Hessian in the n = 2 case. Using Lemma 1 and Lemma 2, the form of the Hessian can be directly written as in Theorem 1, as illustrated in Figure 11. So we have

$$\mathrm{eigs}(H) = \mathrm{eigs}\begin{pmatrix} 0 & A^\top \\ A & 0 \end{pmatrix} = \pm\sqrt{\mathrm{eigs}(A^\top A)}.$$

Thus $\mathrm{cond}(H) = \sqrt{\mathrm{cond}(A^\top A)}$, which is depth-invariant. Note that the dimension of A is $d_x^2 \times d_x^2$.

Figure 11: The Hessian in the n = 2 case, with rows and columns grouped by residual units 1 and 2 and, within each unit, by the weight matrices $W_{1,1}, W_{1,2}, W_{2,1}, W_{2,2}$. It follows from Lemma 1 that only off-diagonal subblocks in each diagonal block, i.e., the blocks marked in orange (slash) and blue (chessboard), are non-zero. From Lemma 2, we conclude the translation invariance and that all blocks marked in orange (slash) (resp. blue (chessboard)) are the same. Given that the Hessian is symmetric, the blocks marked in blue and orange are transposes of each other, and thus it can be directly written in the stated form.

To get the expression of A, consider two parameters that are in the same residual unit but different weight matrices, i.e. $w_1 = (W_{r,2})_{i_1,j_1}$, $w_2 = (W_{r,1})_{i_2,j_2}$. Then we have

$$A_{(j_1-1)d_x+i_1,\,(j_2-1)d_x+i_2} = \frac{\partial^2 L}{\partial w_1\, \partial w_2}\Big|_{w=0} = \frac{\sigma'_{\mathrm{mid}}(0)\,\sigma'_{\mathrm{post}}(0)}{m}\,\delta_{j_1 i_2} \sum_{\mu=1}^{m} \big(x^\mu_{i_1} - y^\mu_{i_1}\big)\,\sigma_{\mathrm{pre}}(x^\mu)_{j_2}.$$

To simplify the expression of A, we rearrange the columns of A by a permutation matrix, i.e.

$$A' = A P,$$

which yields a block diagonal matrix with $d_x$ identical blocks,

$$A' = \sigma'_{\mathrm{mid}}(0)\,\sigma'_{\mathrm{post}}(0)\; I_{d_x} \otimes \big(\Sigma^{X\sigma_{\mathrm{pre}}(X)} - \Sigma^{Y\sigma_{\mathrm{pre}}(X)}\big),$$

so that

$$\mathrm{eigs}(H) = \pm\,\sigma'_{\mathrm{mid}}(0)\,\sigma'_{\mathrm{post}}(0)\,\sqrt{\mathrm{eigs}\big((\Sigma^{X\sigma_{\mathrm{pre}}(X)} - \Sigma^{Y\sigma_{\mathrm{pre}}(X)})^\top(\Sigma^{X\sigma_{\mathrm{pre}}(X)} - \Sigma^{Y\sigma_{\mathrm{pre}}(X)})\big)},$$

which leads to the condition number stated in Theorem 1.

3. For the 1-shortcut linear network, consider two parameters $w_1 = (W_{r_1,1})_{i_1,j_1}$ and $w_2 = (W_{r_2,1})_{i_2,j_2}$. If they are in the same weight matrix, a direct computation gives

$$B_{(j_1-1)d_x+i_1,\,(j_2-1)d_x+i_2} = \frac{\partial^2 L}{\partial w_1\, \partial w_2}\Big|_{w=0} = \begin{cases} \dfrac{1}{m}\displaystyle\sum_{\mu=1}^{m} x^\mu_{j_1} x^\mu_{j_2}, & i_1 = i_2,\\[2pt] 0, & i_1 \ne i_2. \end{cases}$$

If they are in different weight matrices, the entry is non-zero only when $j_1 = i_2$ or $i_1 = i_2$, combining a $(\Sigma^{XX} - \Sigma^{YX})$-type term (for $j_1 = i_2$) with the same B-type term (for $i_1 = i_2$). After the same rearrangement by P, this yields the block Toeplitz Hessian with diagonal blocks built from $\Sigma^{XX}$ and off-diagonal blocks built from $\Sigma^{XX} - \Sigma^{YX}$, as stated in Theorem 1."}, {"section_index": "18", "section_name": "B EXPERIMENT SETTINGS", "section_text": "We conduct the experiments on whitened versions of MNIST. The 10 greatest principal components are kept as the dataset inputs. The dataset outputs are represented using one-hot encoding. The networks are trained using gradient descent. For every epoch, the Hessians of the networks are calculated using the method proposed in Bishop (1992). As the $|\lambda|_{\min}$ of a Hessian is usually very unstable, we calculated $|\lambda|_{\max}/|\lambda|_{(0.1)}$ to represent the condition number instead, where $|\lambda|_{(0.1)}$ is the 10th percentile of the absolute values of the eigenvalues.

As pre, mid or post positions are not defined in linear networks without shortcuts, when comparing Xavier or orthogonally initialized linear networks to 2-shortcut networks, we add ReLUs at the same positions in the linear networks as in the 2-shortcut networks."}]
SkhU2fcll | [{"section_index": "0", "section_name": "DEEP MULTI-TASK REPRESENTATION LEARNING: A TENSOR FACTORISATION APPROACH", "section_text": "Yongxin Yang, Timothy M. Hospedales
Queen Mary, University of London
{yongxin.yang, t.hospedales}@qmul.ac.uk"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "The paradigm of multi-task learning is to learn multiple related tasks simultaneously so that knowledge obtained from each task can be re-used by the others. Early work in this area focused on neural network models (Caruana, 1997), while more recent methods have shifted focus to kernel methods, sparsity and low-dimensional task representations of linear models (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daume III, 2012). Nevertheless, given the impressive practical efficacy of contemporary deep neural networks (DNNs) in many important applications, we are motivated to revisit MTL from a deep learning perspective.

While the machine learning community has focused on MTL for shallow linear models recently, applications have continued to exploit neural network MTL (Zhang et al., 2014; Liu et al., 2015). The typical design pattern dates back at least 20 years (Caruana, 1997): define a DNN with shared lower representation layers, which then forks into separate layers and losses for each task. The sharing structure is defined manually: full sharing up to the fork, and full separation after the fork. However, this complicates DNN architecture design because the user must specify the sharing structure: How many task-specific layers? How many task-independent layers? How to structure sharing if there are many tasks of varying relatedness?

In this paper we present a method for end-to-end multi-task learning in DNNs. This contribution can be seen as generalising shallow MTL methods (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daume III, 2012) to learning how to share at every layer of a deep network; or as learning the sharing structure for deep MTL (Caruana, 1997; Zhang et al., 2014; Spieckermann et al., 2014; Liu et al., 2015), which currently must be defined manually on a problem-by-problem basis.

Before proceeding, it is worth explicitly distinguishing some different problem settings, which have all been loosely referred to as MTL in the literature. Homogeneous MTL: each task corresponds to a single output. For example, MNIST digit recognition is commonly used to evaluate MTL algorithms by casting it as 10 binary classification tasks (Kumar & Daume III, 2012). Heterogeneous MTL: each task corresponds to a unique set of output(s) (Zhang et al., 2014).
For example, one may want to simultaneously predict a person's age (task one: multi-class classification or regression) as well as identify their gender (task two: binary classification) from a face image.

In this paper, we propose a multi-task learning method that works in all these settings. The key idea is to use tensor factorisation to divide each set of model parameters (i.e., both FC weight matrices and convolutional kernel tensors) into shared and task-specific parts. It is a natural generalisation of shallow MTL methods that explicitly or implicitly are based on matrix factorisation (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daume III, 2012; Daume III, 2007). As linear methods, these typically require pre-engineered features. In contrast, as a deep network, our generalisation can learn directly from raw image data, determining sharing structure in a layer-wise fashion. For the simplest NN architecture (no hidden layer, single output), our method reduces to matrix-based ones; therefore matrix-based methods including (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Kumar & Daume III, 2012; Daume III, 2007) are special cases of ours.

Multi-Task Learning  Most contemporary MTL algorithms assume that the input and model are both D-dimensional vectors. The models of T tasks can then be stacked into a $D \times T$ sized matrix W. Despite different motivations and implementations, many matrix-based MTL methods work by placing constraints on W. For example, posing an $\ell_{2,1}$ norm on W to encourage low-rank W (Argyriou et al., 2008). Similarly, Kumar & Daume III (2012) factorises W as W = LS, i.e., it assigns a lower rank as a hyper-parameter. An earlier work (Evgeniou & Pontil, 2004) proposes that the linear model for each task t can be written as $w_t + w_0$, i.e., a task-specific model plus a globally shared one. This is the factorisation $L = [w_0, w_1, \ldots, w_T]$ and $S = [\mathbf{1}_{1\times T}; I_T]$. In fact, such matrix factorisation encompasses many MTL methods. E.g., Xue et al. (2007) assumes $S_{\cdot,i}$ (the i-th column of S) is a unit vector generated by a Dirichlet Process, and Passos et al. (2012) models W using linear factor analysis with an Indian Buffet Process (Griffiths & Ghahramani, 2011) prior on S.

Tensor Factorisation  In deep learning, tensor factorisation has been used to exploit the fact that factorised tensors have fewer parameters than the original (e.g., 4-way convolutional kernel) tensor, and thus to compress and/or speed up the model, e.g., (Lebedev et al., 2015; Novikov et al., 2015). For shallow linear MTL, tensor factorisation has been used to address problems where tasks are described by multiple independent factors rather than merely indexed by a single factor (Yang & Hospedales, 2015). Here the D-dimensional linear models for all unique tasks stack into a tensor W, of e.g. $D \times T_1 \times T_2$ in the case of two task factors. Knowledge sharing is then achieved by imposing tensor norms on W (Romera-Paredes et al., 2013; Wimalawarne et al., 2014). Our framework factors tensors for a different reason: for DNN models, parameters include convolutional kernels (N-way tensors) or $D_1 \times D_2$ FC layer weight matrices (2-way tensors). Stacking up these parameters for many tasks results in $D_1 \times \cdots \times D_N \times T$ tensors within which we share knowledge through factorisation.

Heterogeneous MTL and DNNs  Some studies consider heterogeneous MTL, where tasks may have different numbers of outputs (Caruana, 1997).
This differs from the previously discussed studies (Evgeniou & Pontil, 2004; Argyriou et al., 2008; Bonilla et al., 2007; Jacob et al., 2009; Kumar & Daume III, 2012; Romera-Paredes et al., 2013; Wimalawarne et al., 2014), which implicitly assume that each task has a single output. Heterogeneous MTL typically uses neural networks with multiple sets of outputs and losses. E.g., Huang et al. (2013) proposes a shared-hidden-layer DNN model for multilingual speech processing, where each task corresponds to an individual language. Zhang et al. (2014) uses a DNN to find facial landmarks (regression) as well as recognise facial attributes (classification), while Liu et al. (2015) proposes a DNN for query classification and information retrieval (ranking for web search). A key commonality of these studies is that they all require a user-defined parameter sharing strategy. A typical design pattern is to use shared layers (same parameters) for the lower layers of the DNN and then split (independent parameters) for the top layers. However, there is no systematic way to make such design choices, so researchers usually rely on trial-and-error, further complicating the already somewhat dark art of DNN design. In contrast, our method learns where and how much to share representation parameters across the tasks, hence significantly reducing the space of DNN design choices.

Parametrised DNNs  Our MTL approach is a parameterised DNN (Sigaud et al., 2015), in that DNN weights are dynamically generated given some side information; in the case of MTL, given the task identity. In a related example of speaker-adaptive speech recognition (Tan et al., 2016), there may be several clusters in the data (e.g., gender, acoustic conditions), and each speaker's model could be a linear combination of these latent task/cluster models. They model each speaker's parameters as a combination of shared base models weighted by speaker-specific coefficients, so that the difference between speakers/tasks comes from the combination coefficients while the base models are shared. An advantage of this is that, when new data come, one can choose to re-train the combination coefficients only, and keep the shared base models fixed. This will significantly reduce the number of parameters to learn, and consequently the required training data. Beyond this, Yang & Hospedales (2015) show that it is possible to train another neural network to predict those coefficient values from some abstract metadata. Thus a model for an unseen task can be generated on-the-fly with no training instances, given an abstract description of the task. The techniques developed here are compatible with both these ideas of generating models with minimal or no effort."}, {"section_index": "3", "section_name": "3.1 PRELIMINARIES", "section_text": "We first recap some tensor factorisation basics before explaining how to factorise DNN weight tensors for multi-task representation learning. An N-way tensor $\mathcal{W}$ with shape $D_1 \times D_2 \times \cdots \times D_N$ is an N-dimensional array; scalars, vectors and matrices can be seen as 0-, 1- and 2-way tensors respectively, although the term tensor is usually used for 3-way or higher. A mode-n fibre of $\mathcal{W}$ is a $D_n$-dimensional vector obtained by fixing all but the n-th index. The mode-n flattening $W_{(n)}$ of $\mathcal{W}$ is the matrix of size $D_n \times \prod_{i \ne n} D_i$ constructed by concatenating all of the $\prod_{i \ne n} D_i$ mode-n fibres along the columns.

The dot product of two tensors is a natural extension of the matrix dot product, e.g., if we have a tensor $\mathcal{A}$ of size $M_1 \times M_2 \times \cdots \times P$ and a tensor $\mathcal{B}$ of size $P \times N_1 \times N_2 \times \cdots$, the tensor dot product $\mathcal{A} \bullet \mathcal{B}$ will be a tensor of size $M_1 \times M_2 \times \cdots \times N_1 \times N_2 \times \cdots$, obtained by the matrix dot product $A_{(-1)}^\top B_{(1)}$ and reshaping. More generally, subscripts indicate the axes of $\mathcal{A}$ and $\mathcal{B}$ at which the dot product is performed. E.g., when $\mathcal{A}$ is of size $M_1 \times P \times M_3 \times \cdots$ and $\mathcal{B}$ is of size $N_1 \times N_2 \times P \times \cdots$, then $\mathcal{A} \bullet_{(2,3)} \mathcal{B}$ is a tensor of size $M_1 \times M_3 \times \cdots \times N_1 \times N_2 \times \cdots$. We slightly abuse '-1' to refer to the last axis of the tensor.

Matrix-based Knowledge Sharing  Assume we have T linear models (tasks) parametrised by D-dimensional weight vectors, so the collection of all models forms a size $D \times T$ matrix W. One commonly used MTL approach (Kumar & Daume III, 2012) is to place a structure constraint on W, e.g., W = LS, where L is a $D \times K$ matrix and S is a $K \times T$ matrix. This factorisation recovers a shared factor L and a task-specific factor S. One can see the columns of L as latent basis tasks, and the model $w^{(i)}$ for the i-th task is the linear combination of those latent basis tasks with task-specific information $S_{\cdot,i}$:

$$w^{(i)} = W_{\cdot,i} = L S_{\cdot,i} = \sum_{k=1}^{K} L_{\cdot,k} S_{k,i}. \tag{1}$$
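A minimal NumPy sketch of this factorisation (sizes and variable names are our own illustrative assumptions, not the paper's code):

```python
import numpy as np

D, K, T = 100, 5, 10                 # feature dim, latent tasks, tasks
rng = np.random.default_rng(0)
L = rng.standard_normal((D, K))      # shared latent basis tasks
S = rng.standard_normal((K, T))      # task-specific combination weights

W = L @ S                            # D x T: one linear model per column
w_3 = L @ S[:, 3]                    # model of task 3, built from the bases
assert np.allclose(w_3, W[:, 3])
```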
From Single to Multiple Outputs  Consider extending this matrix factorisation approach to the case of multiple outputs. The model for each task is then a $D_1 \times D_2$ matrix, for $D_1$ input and $D_2$ output dimensions. The collection of all those matrices constructs a $D_1 \times D_2 \times T$ tensor. A straightforward extension of Eq. (1) to this case is

$$\mathcal{W}_{\cdot,\cdot,i} = \sum_{k=1}^{K} \mathcal{L}_{\cdot,\cdot,k}\, S_{k,i}. \tag{2}$$

This is equivalent to imposing the same structural constraint on $W_{(3)}^\top$ (the transposed mode-3 flattening of $\mathcal{W}$). It is important to note that this allows knowledge sharing across the tasks only, i.e., knowledge sharing is only across tasks, not across dimensions within a task. However, it may be that the knowledge learned in the mapping to one output dimension is useful to the others within one task. E.g., consider recognising photos of handwritten and print digits: it may be useful to share across handwritten-print, as well as across different digits within each. In order to support general knowledge sharing across both tasks and outputs within tasks, we propose to use more general tensor factorisation techniques. Unlike for matrices, there are multiple definitions of tensor factorisation, and we use Tucker (Tucker, 1966) and Tensor Train (TT) (Oseledets, 2011) decompositions.

Tucker Decomposition  Given an N-way tensor of size $D_1 \times D_2 \times \cdots \times D_N$, Tucker decomposition outputs a core tensor $\mathcal{S}$ of size $K_1 \times K_2 \times \cdots \times K_N$ and N matrices $U^{(n)}$ of size $D_n \times K_n$, such that

$$\mathcal{W}_{d_1,d_2,\ldots,d_N} = \sum_{k_1=1}^{K_1}\sum_{k_2=1}^{K_2}\cdots\sum_{k_N=1}^{K_N} \mathcal{S}_{k_1,k_2,\ldots,k_N}\, U^{(1)}_{d_1,k_1}\, U^{(2)}_{d_2,k_2} \cdots U^{(N)}_{d_N,k_N}. \tag{4}$$

Tucker decomposition is usually implemented by an alternating least squares (ALS) method (Kolda & Bader, 2009). However, Lathauwer et al. (2000) treat it as a higher-order singular value decomposition (HOSVD), which is more efficient to solve: $U^{(n)}$ is exactly the U matrix from the SVD of the mode-n flattening $W_{(n)}$ of $\mathcal{W}$, and the core tensor is obtained by multiplying every mode of $\mathcal{W}$ by the corresponding transposed factor,

$$\mathcal{S}_{k_1,\ldots,k_N} = \sum_{d_1=1}^{D_1}\cdots\sum_{d_N=1}^{D_N} \mathcal{W}_{d_1,\ldots,d_N}\, U^{(1)}_{d_1,k_1}\cdots U^{(N)}_{d_N,k_N}. \tag{5}$$

Tensor Train Decomposition  Tensor Train (TT) decomposition outputs 2 matrices $U^{(1)}$ and $U^{(N)}$ of size $D_1 \times K_1$ and $K_{N-1} \times D_N$ respectively, and $(N-2)$ 3-way tensors $U^{(n)}$ of size $K_{n-1} \times D_n \times K_n$. The elements of $\mathcal{W}$ can be computed by

$$\mathcal{W}_{d_1,d_2,\ldots,d_N} = \sum_{k_1=1}^{K_1}\sum_{k_2=1}^{K_2}\cdots\sum_{k_{N-1}=1}^{K_{N-1}} U^{(1)}_{d_1,k_1}\, U^{(2)}_{k_1,d_2,k_2} \cdots U^{(N)}_{k_{N-1},d_N} \tag{6}$$

$$= U^{(1)}_{d_1,\cdot}\; \tilde{U}^{(2)}_{d_2} \cdots \tilde{U}^{(N-1)}_{d_{N-1}}\; U^{(N)}_{\cdot,d_N}, \tag{7}$$

$$\mathcal{W} = U^{(1)} \bullet U^{(2)} \bullet \cdots \bullet U^{(N)}, \tag{8}$$

where $\tilde{U}^{(n)}_{d_n}$ is the $K_{n-1} \times K_n$ matrix sliced from $U^{(n)}$ with the second axis fixed at $d_n$. The TT decomposition is typically realised with a recursive SVD-based solution (Oseledets, 2011).

Knowledge Sharing  If the final axis of the tensor above indexes tasks, i.e. if $D_N = T$, then the last factor $U^{(N)}$ in both decompositions encodes a matrix of task-specific knowledge, and the other factors encode shared knowledge.
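To make the two compositions concrete, a minimal NumPy sketch that synthesises a $D_1 \times D_2 \times T$ weight tensor from Tucker and TT factors, following Eq. (4) and Eq. (6) (the ranks, sizes and names are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
D1, D2, T = 64, 32, 10        # input dim, output dim, number of tasks
K1, K2, K3 = 16, 8, 4         # Tucker ranks (hyper-parameters)

# Tucker composition (Eq. 4): core tensor plus one factor matrix per mode.
S = rng.standard_normal((K1, K2, K3))
U1 = rng.standard_normal((D1, K1))
U2 = rng.standard_normal((D2, K2))
U3 = rng.standard_normal((T, K3))   # task-specific factor
W_tucker = np.einsum('abc,ia,jb,tc->ijt', S, U1, U2, U3)

# Tensor Train composition (Eq. 6) for the same 3-way tensor.
R1, R2 = 16, 4                # TT ranks
G1 = rng.standard_normal((D1, R1))
G2 = rng.standard_normal((R1, D2, R2))
G3 = rng.standard_normal((R2, T))   # task-specific factor
W_tt = np.einsum('ia,ajb,bt->ijt', G1, G2, G3)

print(W_tucker.shape, W_tt.shape)   # both (64, 32, 10)
```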
To realise deep multi-task representation learning (DMTRL), we learn one DNN per task, each with the same architecture.² However, each corresponding layer's weights are generated with one of the knowledge sharing structures in Eq. (2), Eq. (4) or Eq. (8). It is important to note that we apply these 'right-to-left', in order to generate weight tensors with the specified sharing structure, rather than actually applying Tucker or TT to decompose an input tensor. In the forward pass, we synthesise weight tensors $\mathcal{W}$ and perform inference as usual, so the method can be thought of as tensor composition rather than decomposition.

Our weight generation (constructing tensors from smaller pieces) does not introduce non-differentiable terms, so our deep multi-task representation learner is trainable via standard backpropagation. Specifically, in the backward pass over FC layers, rather than directly learning the 3-way tensor $\mathcal{W}$, our methods learn either $\{S, U_1, U_2, U_3\}$ (DMTRL-Tucker, Eq. (4)), $\{U_1, U_2, U_3\}$ (DMTRL-TT, Eq. (8)), or in the simplest case $\{L, S\}$ (DMTRL-LAF³, Eq. (2)).

²Except for heterogeneous MTL, where the output layer is necessarily unshared due to its differing dimensionality.
³LAF refers to Last Axis Flattening.

Figure 1: Illustrative example with two tasks corresponding to two neural networks in homogeneous (single output) and heterogeneous (different output dimension) cases. Weight layers grouped by solid rectangles are tied across networks. Weight layers grouped by dashed rectangles are softly shared across networks with our method. Ungrouped weights are independent. Homogeneous MTL, shallow: left is STL (two independent networks); right is MTL. In the case of vector input and no hidden layer, our method is equivalent to conventional matrix-based MTL methods. Homogeneous MTL, deep: STL (left) is independent networks. User-defined MTL (UD-MTL) selects layers to share/separate. Our DMTRL learns sharing at every layer. Heterogeneous MTL: UD-MTL selects layers to share/separate. DMTRL learns sharing at every shareable layer.

Besides FC layers, contemporary DNN designs often exploit convolutional layers. Those layers usually contain kernel filter parameters that are 3-way tensors of size $H \times W \times C$ (where H is height, W is width, and C is the number of input channels) or 4-way tensors of size $H \times W \times C \times M$, where M is the number of filters in this layer (i.e., the number of output channels). The proposed methods naturally extend to convolutional layers, as convolution just adds more axes on the left-hand side. E.g., the collection of parameters from a given convolutional layer of T neural networks forms a tensor of shape $H \times W \times C \times M \times T$; a sketch of this construction is given after this section.

These knowledge sharing strategies provide a way to softly share parameters across the corresponding layers of each task's DNN: where, what, and how much to share are learned from data. This is in contrast to the conventional Deep-MTL approach of manually selecting a set of layers to undergo hard parameter sharing, by tying weights so each task uses exactly the same weight matrix/tensor for the corresponding layer (Zhang et al., 2014; Liu et al., 2015), and a set of layers to be completely separate, by using independent weight matrices/tensors. In contrast, our approach benefits from: (i) automatically learning this sharing structure from data rather than requiring user trial and error, and (ii) smoothly interpolating between fully shared and fully segregated layers, rather than hard switching between these states.
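As a sketch of the convolutional case under the simplest (LAF) sharing structure, assuming illustrative sizes and names (this is not the released implementation):

```python
import numpy as np

H, W_, C, M, T = 5, 5, 3, 32, 10   # kernel height/width, channels, filters, tasks
K = 8                              # number of latent kernel banks

rng = np.random.default_rng(0)
L = rng.standard_normal((H, W_, C, M, K))  # shared factor
S = rng.standard_normal((K, T))            # per-task mixing weights

# Each task's 4-way convolution kernel is a linear combination of K
# shared kernels, mixed by the task's column of S (last-axis sharing).
kernels = np.einsum('hwcmk,kt->hwcmt', L, S)
kernel_task0 = kernels[..., 0]             # H x W x C x M kernel for task 0
```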
An illustration of the proposed framework for the different problem settings can be found in Figure 1.

Implementation Details  Our method is implemented with TensorFlow (Abadi et al., 2015). The code is released on GitHub.⁴ For DMTRL-Tucker, DMTRL-TT, and DMTRL-LAF, we need to assign the rank of each weight tensor. The DNN architecture itself may be complicated and so can benefit from different ranks at different layers, but grid search is impractical. However, since both Tucker and TT decomposition methods have SVD-based solutions, and vanilla SVD is directly applicable to DMTRL-LAF, we can initialise the model and set the ranks as follows. First, train the DNNs independently in single task learning mode. Then pack the layer-wise parameters as the input for tensor decomposition. When SVD is applied, set a threshold for the relative error so SVD will pick the appropriate rank. Thus our method needs only a single hyper-parameter, the maximum reconstruction error (we set ε = 10% throughout), which indirectly specifies the ranks of every layer. Note that training from random initialisation also works, but the STL-based initialisation makes rank selection easy and transparent. Nevertheless, like (Kumar & Daume III, 2012), the framework is not sensitive to the rank choice so long as the ranks are big enough. If random initialisation is desired, to eliminate the pre-training requirement, good practice is to initialise parameter tensors by a suitable random weight distribution first, then do the decomposition, and use the decomposed values for initialising the factors (the real learnable parameters in our framework). In this way, the resulting re-composed tensors will have approximately the intended distribution. Our sharing is applied to weight parameters only; bias terms are not shared. Apart from initialisation, decomposition is not used anywhere.
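A minimal sketch of the rank-selection rule just described, choosing the smallest rank whose SVD truncation meets the relative-error threshold ε (the function name is our own):

```python
import numpy as np

def rank_for_error(A, eps=0.10):
    # Smallest rank whose truncated SVD reconstructs the matrix A with a
    # relative Frobenius error of at most eps (eps = 10% throughout).
    s = np.linalg.svd(A, compute_uv=False)
    total = np.sum(s ** 2)
    tail = total
    for k, sv in enumerate(s, start=1):
        tail -= sv ** 2
        if np.sqrt(max(tail, 0.0) / total) <= eps:
            return k
    return len(s)
```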
Thus to find a stronger MTL competitor, we instead searcl user defined architectures for Deep-MTL parameter sharing (cf (Zhang et al.|2014) Liu et al.2015 Caruana 1997)). In all of the four parametrised layers (pooling has no parameters), we set the first. N (1 N 3) to be hard shared We then use cross-validation to select among the three user-. defined MTL architectures and the best option is N = 3, i.e., the first three layers are fully shared. (we denote this model UD-MTL). For our methods, all four parametrised layers are softly shared. with the different factorisation approaches. To evaluate different MTL methods and a baseline of. single task learning (STL), we take ten different fractions of the given 60K training split, train the. model, and test on the 10K testing split. For each fraction, we repeat the experiment 5 times with. randomly sampled training data. We report two performance metrics: (1) the mean error rate of the. ten binary classification problems and (2) the error rate of recognising a digit by ranking each task's. 1-vs-all output (multi-class classification error).\nResults As we can see in Fig.2] all MTL approaches outperform STL, and the advantage is more. significant when the training data is small. The proposed methods, DMTRL-TT and DMTRL Tucker outperform the best user-defined MTL when the training data is very small, and their perfor-. mance is comparable when the training data is large..\nFurther Discussion For a slightly unfair comparison, in the case of binary classification with 1000 training data, shallow matrix-based MTL methods with PCA feature (Kang et al.J 2011] Kumar & Daume II12012) reported 14.0% / 13.4% error rate. With the same amount of data, our methods\n5This is not strictly all possible user-defined sharing options. For example, another possibility is the first convolutional layer and the first FC layer could be fully shared, with the second convolutional layer being in dependent (task specific). However, this is against the intuition that lower/earlier layers are more task agnostic and later layers more task specific. Note that sharing the last layer is technically possible but not intuitive, and in any case not meaningful unless at least one early layer is unshared, as the tasks are different.\nBinary Classification. Multi-class Classification 0.12 STLE STLE 0.1 0.2 DMTRL-LAF DMTRL-LAF DMTRL-Tucker DMTRL-Tucker Raetee 0.08 DMTRL-TT Rate 0.15 DMTRL-TT UD-MTL UD-MTL 0.06 0.1 0.04 0.05 0.02 0 0 10-2 10-1 100 10-2 101 100 Fraction of Training Data. Fraction of Training Data.\nFigure 2: Homogeneous MTL: digit recognition on MNIST dataset. Each digit provides a task\nhave error rate below 6%. This shows the importance of our deep end-to-end multi-task represen tation learning contribution versus conventional shallow MTL. Since the error rates in (Kang et al.. 2011f Kumar & Daume III[2012) were produced on a private subset of MNIST dataset with PCA representations only, to ensure a direct comparison, we implement several classic MTL methods and. compare them in Appendix|A\nFor readers interested in the connection to model capacity (number of parameters), we present fur ther analysis in AppendixB\nDataset, Settings and Baselines The AdienceFaces (Eidinger et al.]2014) is a large-scale face. images dataset with the labels of each person's gender and age group. We use this dataset for. the evaluation of heterogeneous MTL with two tasks: (i) gender classification (two classes) and. (ii) age group classification (eight classes). 
Dataset, Settings and Baselines  AdienceFaces (Eidinger et al., 2014) is a large-scale face image dataset with labels for each person's gender and age group. We use this dataset for the evaluation of heterogeneous MTL with two tasks: (i) gender classification (two classes) and (ii) age group classification (eight classes). Two independent CNN models for this benchmark are introduced in (Levi & Hassner, 2015). The two CNNs have the same architecture except for the last fully-connected layer, since the heterogeneous tasks have different numbers of outputs (two / eight). We take these CNNs from (Levi & Hassner, 2015) as the STL baseline. We again search for the best possible user-defined MTL architecture as a strong competitor: the proposed CNN has six layers, three convolutional and three fully-connected. The last fully-connected layer has non-shareable parameters because they are of different sizes. To search the MTL design space, we try setting the first N (1 ≤ N ≤ 5) layers to be hard shared between the tasks. Running 5-fold cross-validation on the train set to evaluate the architectures, we find the best choice is N = 5 (i.e., all layers fully shared before the final heterogeneous outputs). For our proposed methods, all the layers before the last heterogeneous-dimensionality FC layers are softly shared.

We select increasing fractions of the AdienceFaces train split randomly, train the model, and evaluate on the same test set. For reference, there are 12,245 images with gender labelled for training and 4,007 for testing, and 11,823 images with age group labelled for training and 4,316 for testing.

Results  Fig. 3 shows the error rate for each task. For the gender recognition task, we find that: (i) user-defined MTL is not consistently better than STL, but (ii) our methods, especially DMTRL-Tucker, consistently outperform both STL and the best user-defined MTL. For the harder age group classification task, our methods generally improve on STL. However, UD-MTL does not consistently improve on STL, and even reduces performance when the training set is bigger. This is the negative transfer phenomenon (Rosenstein et al., 2005), where using a transfer learning algorithm is worse than not using it. This difference in outcomes is attributed to sufficient data eventually providing some effective task-specific representation. Our methods can discover and exploit this, but UD-MTL's hard switch between sharing and not sharing cannot represent or exploit such increasing task-specificity of representation.

Figure 3: Heterogeneous MTL: age and gender recognition on the AdienceFaces dataset. Left: gender classification error rate; right: age group classification error rate, both against the fraction of training data.

Dataset, Settings and Baselines  We next consider the task of learning to recognise handwritten letters in multiple languages using the Omniglot (Lake et al., 2015) dataset. Omniglot contains handwritten characters in 50 different alphabets (e.g., Cyrillic, Korean, Tengwar), each with its own number of unique characters (14 ~ 55). In total, there are 1,623 unique characters, and each has exactly 20 instances. Here each task corresponds to an alphabet, and the goal is to recognise its characters. MTL has a clear motivation here, as cross-alphabet knowledge sharing is likely to be useful: one is unlikely to have extensive training data for a wide variety of less common alphabets.
The images are monochrome, of size 105 × 105. We design a CNN with 3 convolutional and 2 FC layers. The first conv layer has 8 filters of size 5 × 5; the second conv layer has 12 filters of size 3 × 3, and the third convolutional layer has 16 filters of size 3 × 3. Each convolutional layer is followed by 2 × 2 max-pooling. The first FC layer has 64 neurons, and the second FC layer has size corresponding to the number of unique classes in the alphabet. The activation function is tanh.

We use a similar strategy to find the best user-defined MTL model: the CNN has 5 parametrised layers, of which 4 layers are potentially shareable. So we tried hard-sharing the first N (1 ≤ N ≤ 4) layers. Evaluating these options by 5-fold cross-validation, the best option turned out to be N = 3, i.e., the first three layers are hard shared. For our methods, all four shareable layers are softly shared.

Since there is no standard train/test split for this dataset, we use the following setting: we repeatedly pick at random 5%, ..., 90% of the images per class for training. Note that 5% is the minimum, corresponding to one-shot learning. The remaining data are used for evaluation.

Results  Fig. 4 reports the average error rate across all 50 tasks (alphabets). Our proposed MTL methods surpass the STL baseline in all cases. User-defined MTL does not work well when the training data is very small, but does help when the training fraction is larger than 50%.

Figure 4: Results of multi-task learning of multilingual character recognition (Omniglot dataset). Left: alphabet classification error rate against the fraction of training data; right: sharing strength at each layer (Conv1, Conv2, Conv3, FC1, FC2). Below: illustration of the language pairs estimated to be the most related (left: Georgian Mkhedruli and Inuktitut) and most unrelated (right: Balinese and ULOG) character recognition tasks.

Measuring the Learned Sharing  Compared to the conventional user-defined sharing architectures, our method learns how to share from data. We next try to quantify the amount of sharing estimated by our model on the Omniglot data. Returning to the key factorisation W = LS, we find that an S-like matrix appears in all variants of the proposed method. It is S in DMTRL-LAF, the transposed $U^{(N)}$ in DMTRL-Tucker, and $U^{(N)}$ in DMTRL-TT (N is the last axis of $\mathcal{W}$). S is a $K \times T$ matrix, where T is the number of tasks, and K is the number of latent tasks (Kumar & Daume III, 2012) or the dimension of the task coding (Yang & Hospedales, 2015). Each column of S is a set of coefficients that produce the final weight matrix/tensor by linear combination. If we put STL and user-defined MTL (for a certain shared layer) in this framework, we see that STL amounts to assigning (rather than learning) S to be an identity matrix $I_T$. Similarly, user-defined MTL (for a certain shared layer) amounts to assigning S to be a matrix of all zeros except one particular row that is all ones, e.g., $S = [\mathbf{1}_{1\times T}; 0]$. Between these two extremes, our method learns the sharing structure in S. We propose the following equation to measure the learned sharing strength:

$$\rho = \frac{2}{T(T-1)} \sum_{i<j} \Omega(S_{\cdot,i}, S_{\cdot,j}), \tag{9}$$

where $\Omega(a, b)$ is a similarity measure for two vectors a and b, and we use cosine similarity. ρ is the average over all combinations of column-wise similarities. So ρ measures how much sharing is encoded by S, between ρ = 0 for STL (nothing to share) and ρ = 1 for user-defined MTL (completely shared). Since S is a real-valued matrix in our scenario, we normalise it before applying Eq. (9). First, we take absolute values, because a large value, either positive or negative, suggests a significant coefficient. Second, we normalise each column of S by applying a softmax function, so the sum of every column is 1. The motivation behind the second step is to match the range of our S with $S = I_T$ or $S = [\mathbf{1}_{1\times T}; 0]$, as for those two cases, the sum of each column is 1 and the range is [0, 1].
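To make the measure concrete, a minimal NumPy sketch of Eq. (9) with the normalisation just described (the function name and implementation details are our own assumptions, not the paper's released code):

```python
import numpy as np

def sharing_strength(S):
    # Eq. (9): mean pairwise cosine similarity of the columns of S, after
    # taking absolute values and a column-wise softmax as described above.
    # A fully hard-shared S = [1; 0] gives rho = 1 exactly; an STL-like
    # S = I gives a small rho.
    A = np.abs(S)
    A = np.exp(A) / np.exp(A).sum(axis=0, keepdims=True)  # column softmax
    A = A / np.linalg.norm(A, axis=0, keepdims=True)      # unit-length columns
    T = A.shape[1]
    G = A.T @ A                          # Gram matrix of cosine similarities
    return (G.sum() - T) / (T * (T - 1))  # mean over pairs i < j
```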
For the Omniglot experiment, we plot the measured sharing amount for training fraction 10%. Fig. 4 reveals that the three proposed methods tend to share more in the bottom layers ('Conv1', 'Conv2', and 'Conv3') and share less in the top layer ('FC1'). This is qualitatively similar to the best user-defined MTL, where the first three layers are fully shared (ρ = 1) and the 4th layer is completely unshared (ρ = 0). However, our methods: (i) learn this structure in a purely data-driven way and (ii) benefit from the ability to smoothly interpolate between high and low degrees of sharing as depth increases. As an illustration, Fig. 4 also shows example text from the most and least similar language pairs as estimated at our multilingual character recogniser's FC1 layer (the result can vary across layers)."}, {"section_index": "5", "section_name": "5 CONCLUSION", "section_text": "In this paper, we propose a novel framework for end-to-end multi-task representation learning in contemporary deep neural networks. The key idea is to generalise matrix factorisation-based multi-task ideas to tensor factorisation, in order to flexibly share knowledge in fully connected and convolutional DNN layers. Our method provides consistently better performance than single task learning and comparable or better performance than the best results from exhaustive search of user-defined MTL architectures. It reduces the design choices and architectural search space that must be explored in the workflow of deep MTL architecture design (Caruana, 1997; Zhang et al., 2014; Liu et al., 2015), relieving researchers of the need to decide how to structure layer sharing/segregation. Instead, sharing structure is determined in a data-driven way on a layer-by-layer basis that moreover allows a smooth interpolation between sharing and not sharing in progressively deeper layers.

Acknowledgements  This work was supported by EPSRC (EP/L023385/1), and the European Union's Horizon 2020 research and innovation programme under grant agreement No 640891."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Rich Caruana. Multitask learning. Machine Learning, 1997.

Hal Daume III. Frustratingly easy domain adaptation. In ACL, 2007.

Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. In Knowledge Discovery and Data Mining (KDD), 2004.
Jui-Ting Huang, Jinyu Li, Dong Yu, Li Deng, and Yifan Gong. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.

Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 2009.

Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.

G. Levi and T. Hassner. Age and gender classification using convolutional neural networks. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2015.

Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In NAACL, 2015.

I. V. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 2011.

Alexandre Passos, Piyush Rai, Jacques Wainer, and Hal Daumé III. Flexible modeling of latent task structures in multitask learning. In International Conference on Machine Learning (ICML), 2012.

Bernardino Romera-Paredes, Hane Aung, Nadia Bianchi-Berthouze, and Massimiliano Pontil. Multilinear multitask learning. In International Conference on Machine Learning (ICML), 2013.

Tian Tan, Yanmin Qian, and Kai Yu. Cluster adaptive training for deep neural network based acoustic model. IEEE/ACM Trans. Audio, Speech & Language Processing, 24(3):459-468, 2016.

L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 1966.

Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang. Facial landmark detection by deep multi-task learning. In European Conference on Computer Vision (ECCV), 2014."}, {"section_index": "7", "section_name": "COMPARISON WITH CLASSIC (SHALLOW) MTL METHODS", "section_text": "We provide a comparison with classic (shallow, matrix-based) MTL methods for the first experiment (MNIST, binary one-vs-rest classification, 1% training data, mean of error rates for 10-fold CV). A subtlety in making this comparison is what feature the classic methods should use. Conventionally they use a PCA feature (obtained by flattening the image, then dimension reduction by PCA). However, for visual recognition tasks, performance is better with deep features - a key motivation for our focus on deep approaches to MTL. We therefore also compare the classic methods when using a feature extracted from the penultimate layer of the CNN network used in our experiment.

Model | PCA Feature | CNN Feature
Single Task Learning | 16.89 | 11.52
Evgeniou & Pontil (2004) | 15.27 | 10.32
Argyriou et al. (2008) | 15.64 | 9.56
Kumar & Daumé III (2012) | 14.08 | 9.41
DMTRL-LAF | - | 8.25
DMTRL-Tucker | - | 9.24
DMTRL-TT | - | 7.31
UD-MTL | - | 9.34

Table 1: Comparison with classic MTL methods. MNIST binary classification error rate (%).

As expected, the classic methods improve on STL, and they perform significantly better with CNN than PCA features. However, our DMTRL methods still outperform the best classic methods, even when they are enhanced by CNN features. This is due to soft (cf. hard) sharing of the feature extraction layers and the ability of end-to-end training of both the classifier and feature extractor.
Finally, we note that, more fundamentally, the classic methods are restricted to binary problems (due to their matrix-based nature) and so, unlike our tensor-based approach, they are unsuitable for multi-class problems like Omniglot and age-group classification."}, {"section_index": "8", "section_name": "MODEL CAPACITY AND PERFORMANCE", "section_text": "We list the number of parameters for each model in the first experiment (MNIST, binary one-vs-rest classification) and the performance (1% training data, mean of error rate for 10-fold CV).

Model | Error Rate (%) | Number of parameters | Ratio
STL | 11.52 | 4351K | 1.00
DMTRL-LAF | 8.25 | 1632K | 0.38
DMTRL-Tucker | 9.24 | 1740K | 0.40
DMTRL-TT | 7.31 | 2187K | 0.50
UD-MTL | 9.34 | 436K | 0.10
UD-MTL-Large | 9.39 | 1644K | 0.38

Table 2: Comparison of deep models: Error rate and number of parameters.

The conventional hard-sharing design (UD-MTL) is to share all layers except the top layer. Its number of parameters is roughly 10% of the single task learning method (STL), as most parameters are shared across the 10 tasks corresponding to the 10 digits. Our soft-sharing methods also significantly reduce the number of parameters compared to STL, but are larger than UD-MTL's hard sharing.

To compare our method to UD-MTL while controlling for network capacity, we expanded UD-MTL by adding more hidden neurons so that its number of parameters is close to our methods' (denoted UD-MTL-Large). However, UD-MTL performance does not increase. This is evidence that our model's good performance is not simply due to greater capacity than UD-MTL."}]
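As a rough illustration of where the parameter savings in Table 2 above come from, the sketch below counts parameters for a single fully connected layer shared across T tasks under STL, hard sharing, and an LAF-style factorisation W = L·S. The layer sizes are made up for illustration and do not correspond to the exact architecture in the experiments.

```python
# Hypothetical layer sizes, chosen only to illustrate the counting.
d_in, d_out, T, K = 784, 256, 10, 3   # K = number of latent tasks

stl  = T * d_in * d_out               # one independent weight matrix per task
hard = d_in * d_out                   # one matrix shared by all tasks
# LAF-style soft sharing: W(t) = sum_k L[:, :, k] * S[k, t]
laf  = K * d_in * d_out + K * T       # shared factors L plus a K x T matrix S

for name, n in [("STL", stl), ("hard sharing", hard), ("LAF", laf)]:
    print(f"{name:>12}: {n:>9,d} params (ratio {n / stl:.2f})")
```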
BJtNZAFgg
[{"section_index": "0", "section_name": "ADVERSARIAL FEATURE LEARNING", "section_text": "Jeff Donahue"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping - projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep convolutional networks (convnets) have become a staple of the modern computer vision pipeline. After training these models on a massive database of image-label pairs like ImageNet (Russakovsky et al., 2015), the network easily adapts to a variety of similar visual tasks, achieving impressive results on image classification (Donahue et al., 2014; Zeiler & Fergus, 2014; Razavian et al., 2014) or localization (Girshick et al., 2014; Long et al., 2015) tasks. In other perceptual domains such as natural language processing or speech recognition, deep networks have proven highly effective as well (Bahdanau et al., 2015; Sutskever et al., 2014; Vinyals et al., 2015; Graves et al., 2013). However, all of these recent results rely on a supervisory signal from large-scale databases of hand-labeled data, ignoring much of the useful information present in the structure of the data itself.

Meanwhile, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have emerged as a powerful framework for learning generative models of arbitrarily complex data distributions. The GAN framework learns a generator mapping samples from an arbitrary latent distribution to data, as well as an adversarial discriminator which tries to distinguish between real and generated samples as accurately as possible. The generator's goal is to "fool" the discriminator by producing samples which are as close to real data as possible. When trained on databases of natural images, GANs produce impressive results (Radford et al., 2016; Denton et al., 2015).

Interpolations in the latent space of the generator produce smooth and plausible semantic variations, and certain directions in this space correspond to particular semantic attributes along which the data distribution varies. For example, Radford et al. (2016) showed that a GAN trained on a database of human faces learns to associate particular latent directions with gender and the presence of eyeglasses.

A natural question arises from this ostensible "semantic juice" flowing through the weights of generators learned using the GAN framework: can GANs be used for unsupervised learning of rich feature representations for arbitrary data distributions?
An obvious issue with doing so is that the generator maps latent samples to generated data, but the framework does not include an inverse mapping from data to latent representation.

[Figure 1 schematic: the encoder E maps data x to features z, the generator G maps features z to data x, and the discriminator D predicts P(y) from joint (x, z) pairs.]
Figure 1: The structure of Bidirectional Generative Adversarial Networks (BiGAN).

Hence, we propose a novel unsupervised feature learning framework, Bidirectional Generative Adversarial Networks (BiGAN). The overall model is depicted in Figure 1. In short, in addition to the generator G from the standard GAN framework (Goodfellow et al., 2014), BiGAN includes an encoder E which maps data x to latent representations z. The BiGAN discriminator D discriminates not only in data space (x versus G(z)), but jointly in data and latent space (tuples (x, E(x)) versus (G(z), z)), where the latent component is either an encoder output E(x) or a generator input z.

It may not be obvious from this description that the BiGAN encoder E should learn to invert the generator G. The two modules cannot directly "communicate" with one another: the encoder never "sees" generator outputs (E(G(z)) is not computed), and vice versa. Yet, in Section 3, we will both argue intuitively and formally prove that the encoder and generator must learn to invert one another in order to fool the BiGAN discriminator.

Because the BiGAN encoder learns to predict features z given data x, and prior work on GANs has demonstrated that these features capture semantic attributes of the data, we hypothesize that a trained BiGAN encoder may serve as a useful feature representation for related semantic tasks, in the same way that fully supervised visual models trained to predict semantic "labels" given images serve as powerful feature representations for related visual tasks. In this context, a latent representation z may be thought of as a "label" for x, but one which came for "free," without the need for supervision.

An alternative approach to learning the inverse mapping from data to latent representation is to directly model p(z|G(z)), predicting generator input z given generated data G(z). We'll refer to this alternative as a latent regressor, later arguing (Section 4.1) that the BiGAN encoder may be preferable in a feature learning context, as well as comparing the approaches empirically.

BiGANs are a robust and highly generic approach to unsupervised feature learning, making no assumptions about the structure or type of data to which they are applied, as our theoretical results will demonstrate. Our empirical studies will show that despite their generality, BiGANs are competitive with contemporary approaches to self-supervised and weakly supervised feature learning designed specifically for a notoriously complex data distribution - natural images.

Dumoulin et al. (2016) independently proposed an identical model in their concurrent work, exploring the case of a stochastic encoder E and the ability of such models to learn in a semi-supervised setting.

Let p_X(x) be the distribution of our data for x ∈ Ω_X (e.g. natural images). The goal of generative modeling is to capture this data distribution using a probabilistic model. Unfortunately, exact modeling of this probability density function is computationally intractable (Hinton et al., 2006; Salakhutdinov & Hinton, 2009) for all but the most trivial models.
Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) instead model the data distribution as a transformation of a fixed latent distribution p_Z(z) for z ∈ Ω_Z. This transformation, called a generator, is expressed as a deterministic feed forward network G : Ω_Z → Ω_X with p_G(x|z) = δ(x − G(z)) and p_G(x) = E_{z∼p_Z}[p_G(x|z)]. The goal is to train a generator such that p_G(x) ≈ p_X(x).

The GAN framework trains a generator such that no discriminative model D : Ω_X → [0, 1] can distinguish samples of the data distribution from samples of the generative distribution. Both generator and discriminator are learned using the adversarial (minimax) objective min_G max_D V(D, G), where

    V(D, G) := E_{x∼p_X}[log D(x)] + E_{x∼p_G}[log(1 − D(x))],  with  E_{x∼p_G}[log(1 − D(x))] = E_{z∼p_Z}[log(1 − D(G(z)))].    (1)

The adversarial objective (1) does not directly lend itself to an efficient optimization, as each step in the generator G requires a full discriminator D to be learned. Furthermore, a perfect discriminator no longer provides any gradient information to the generator, as the gradient of any global or local maximum of V(D, G) is 0. To provide a strong gradient signal nonetheless, Goodfellow et al. (2014) slightly alter the objective between generator and discriminator updates, while keeping the same fixed point characteristics. They also propose to optimize (1) using an alternating optimization, switching between updates to the generator and discriminator. While this optimization is not guaranteed to converge, empirically it works well if the discriminator and generator are well balanced.

Despite the empirical strength of GANs as generative models of arbitrary data distributions, it is not clear how they can be applied as an unsupervised feature representation. One possibility for learning such representations is to learn an inverse mapping regressing from generated data G(z) back to the latent input z. However, unless the generator perfectly models the data distribution p_X, a nearly impossible objective for a complex data distribution such as that of high-resolution natural images, this idea may prove insufficient."}, {"section_index": "3", "section_name": "BIDIRECTIONAL GENERATIVE ADVERSARIAL NETWORKS", "section_text": "In Bidirectional Generative Adversarial Networks (BiGANs) we not only train a generator, but additionally train an encoder E : Ω_X → Ω_Z. The encoder induces a distribution p_E(z|x) = δ(z − E(x)) mapping data points x into the latent feature space of the generative model. The discriminator is also modified to take input from the latent space, predicting P_D(Y|x, z), where Y = 1 if x is real (sampled from the real data distribution p_X), and Y = 0 if x is generated (the output of G(z), z ∼ p_Z).

The BiGAN training objective is defined as a minimax objective min_{G,E} max_D V(D, E, G), where

    V(D, E, G) := E_{x∼p_X}[ E_{z∼p_E(·|x)}[log D(x, z)] ] + E_{z∼p_Z}[ E_{x∼p_G(·|z)}[log(1 − D(x, z))] ]    (2)
                = E_{x∼p_X}[log D(x, E(x))] + E_{z∼p_Z}[log(1 − D(G(z), z))].    (3)

We optimize this minimax objective using the same alternating gradient based optimization as Goodfellow et al. (2014). See Section 3.4 for details.

BiGANs share many of the theoretical properties of GANs (Goodfellow et al., 2014), while additionally guaranteeing that at the global optimum, G and E are each other's inverse. BiGANs are also closely related to autoencoders with an l_0 loss function. In the following sections we highlight some of the appealing theoretical properties of BiGANs.
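As a concrete illustration, here is a minimal PyTorch-style sketch (not from the paper) of the objective in Eq. (3) for one minibatch. The three modules are tiny MLP stand-ins with made-up sizes; the paper's D, G, and E are deeper networks with a different input scheme for z.

```python
import torch
import torch.nn as nn

# Stand-in modules; sizes are illustrative only.
d_x, d_z, h = 784, 50, 1024
G = nn.Sequential(nn.Linear(d_z, h), nn.ReLU(), nn.Linear(h, d_x))
E = nn.Sequential(nn.Linear(d_x, h), nn.ReLU(), nn.Linear(h, d_z))
D = nn.Sequential(nn.Linear(d_x + d_z, h), nn.ReLU(), nn.Linear(h, 1), nn.Sigmoid())

def bigan_value(x, z, eps=1e-6):
    """V(D, E, G) from Eq. (3): encoder pairs (x, E(x)) vs. generator pairs (G(z), z)."""
    d_real = D(torch.cat([x, E(x)], dim=1))   # D(x, E(x))
    d_fake = D(torch.cat([G(z), z], dim=1))   # D(G(z), z)
    return (torch.log(d_real + eps) + torch.log(1 - d_fake + eps)).mean()

x = torch.rand(32, d_x)            # a minibatch of data
z = torch.rand(32, d_z) * 2 - 1    # z ~ Uniform(-1, 1), as in Section 4.2
v = bigan_value(x, z)
v.backward()   # ascend in theta_D; descend (or swap labels) in theta_G, theta_E
```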
Definitions Let p_GZ(x, z) := p_G(x|z)p_Z(z) and p_EX(x, z) := p_E(z|x)p_X(x) be the joint distributions modeled by the generator and encoder respectively. Ω := Ω_X × Ω_Z is the joint latent and data space. We define

    P_X(R_X) := ∫_{Ω_X} p_X(x) 1_{[x∈R_X]} dx        P_Z(R_Z) := ∫_{Ω_Z} p_Z(z) 1_{[z∈R_Z]} dz

as measures over regions R_X ⊆ Ω_X and R_Z ⊆ Ω_Z. We refer to the set of features and data samples in the support of P_X and P_Z as Ω̂_X := supp(P_X) and Ω̂_Z := supp(P_Z) respectively. D_KL(P ‖ Q) and D_JS(P ‖ Q) respectively denote the Kullback-Leibler (KL) and Jensen-Shannon divergences between probability measures P and Q. By definition,

    D_JS(P ‖ Q) := ½ D_KL(P ‖ (P+Q)/2) + ½ D_KL(Q ‖ (P+Q)/2).

The Radon-Nikodym (RN) derivative f_PQ := dP/dQ : Ω → R≥0 is defined for any measures P and Q on space Ω such that P is absolutely continuous with respect to Q (i.e., for any R ⊆ Ω, P(R) > 0 ⟹ Q(R) > 0), with the defining property that P(R) = ∫_R f_PQ dQ.

Proposition 1 For any E and G, the optimal discriminator D*_EG := argmax_D V(D, E, G) is the Radon-Nikodym derivative f_EG := dP_EX / d(P_EX + P_GZ) of measure P_EX with respect to measure P_EX + P_GZ.

This optimal discriminator now allows us to characterize the optimal generator and encoder.

Proposition 2 The encoder and generator's objective for an optimal discriminator, C(E, G) := max_D V(D, E, G) = V(D*_EG, E, G), can be rewritten in terms of the Jensen-Shannon divergence between measures P_EX and P_GZ as C(E, G) = 2 D_JS(P_EX ‖ P_GZ) − log 4.

Theorem 1 The global minimum of C(E, G) is achieved if and only if P_EX = P_GZ. At that point, C(E, G) = − log 4 and D*_EG = ½.

Proof. From Proposition 2 we have that C(E, G) = 2 D_JS(P_EX ‖ P_GZ) − log 4. The Jensen-Shannon divergence D_JS(P ‖ Q) ≥ 0 for any P and Q, and D_JS(P ‖ Q) = 0 if and only if P = Q. Therefore, the global minimum of C(E, G) occurs if and only if P_EX = P_GZ, and at this point the value is C(E, G) = − log 4. Finally, P_EX = P_GZ implies that the optimal discriminator is chance: D*_EG = dP_EX / d(P_EX + P_GZ) = dP_EX / (2 dP_EX) = ½.

The optimal discriminator, encoder, and generator of BiGAN are similar to the optimal discriminator and generator of the GAN framework (Goodfellow et al., 2014). However, an important difference is that BiGAN optimizes a Jensen-Shannon divergence between joint distributions over both data X and latent features Z. This joint divergence allows us to further characterize properties of G and E, as shown below.
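A quick numerical illustration of Propositions 1-2 and Theorem 1 (not from the paper): treating P_EX and P_GZ as discrete distributions over a finite joint space, the sketch below plugs the optimal discriminator f_EG into V and checks that it equals 2 D_JS(P_EX ‖ P_GZ) − log 4, with minimum − log 4 exactly when the two distributions coincide.

```python
import numpy as np

def value_at_optimal_D(p_ex, p_gz):
    """C(E,G) = V(D*, E, G) for discrete joint distributions p_ex, p_gz."""
    f = p_ex / (p_ex + p_gz)              # optimal D: the Radon-Nikodym derivative
    return np.sum(p_ex * np.log(f) + p_gz * np.log(1 - f))

def js_divergence(p, q):
    m = (p + q) / 2
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(0)
p_ex = rng.random(6); p_ex /= p_ex.sum()
p_gz = rng.random(6); p_gz /= p_gz.sum()

print(value_at_optimal_D(p_ex, p_gz))               # equals ...
print(2 * js_divergence(p_ex, p_gz) - np.log(4))    # ... 2*JS - log 4 (Proposition 2)
print(value_at_optimal_D(p_ex, p_ex), -np.log(4))   # global minimum at P_EX = P_GZ
```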
We first present an intuitive argument that, in order to "fool" a perfect discriminator, a deterministic BiGAN encoder and generator must invert each other. (Later we will formally state and prove this property.) Consider a BiGAN discriminator input pair (x, z). Due to the sampling procedure, (x, z) must satisfy at least one of the following two properties:

    (a) x ∈ Ω̂_X ∧ E(x) = z        (b) z ∈ Ω̂_Z ∧ G(z) = x

If only one of these properties is satisfied, a perfect discriminator can infer the source of (x, z) with certainty: if only (a) is satisfied, (x, z) must be an encoder pair (x, E(x)) and D*_EG(x, z) = 1; if only (b) is satisfied, (x, z) must be a generator pair (G(z), z) and D*_EG(x, z) = 0.

Therefore, in order to fool a perfect discriminator at (x, z) (so that 0 < D*_EG(x, z) < 1), E and G must satisfy both (a) and (b). In this case, we can substitute the equality E(x) = z required by (a) into the equality G(z) = x required by (b), and vice versa, giving the inversion properties x = G(E(x)) and z = E(G(z)).

Formally, we show in Theorem 2 that the optimal generator and encoder invert one another almost everywhere on the support Ω̂_X and Ω̂_Z of P_X and P_Z.

Theorem 2 If E and G are an optimal encoder and generator, then E = G⁻¹ almost everywhere; that is, G(E(x)) = x for P_X-almost every x ∈ Ω̂_X, and E(G(z)) = z for P_Z-almost every z ∈ Ω̂_Z.

Proof. Given in Appendix A.4.

While Theorem 2 characterizes the encoder and decoder at their optimum, due to the non-convex nature of the optimization, this optimum might never be reached. Experimentally, Section 4 shows that on standard datasets the two are approximate inverses; however, they are rarely exact inverses. It is thus also interesting to show what objective BiGAN optimizes in terms of E and G. Next we show that BiGANs are closely related to autoencoders with an l_0 loss function."}, {"section_index": "5", "section_name": "3.3 RELATIONSHIP TO AUTOENCODERS", "section_text": "As argued in Section 1, a model trained to predict features z given data x should learn useful semantic representations. Here we show that the BiGAN objective forces the encoder E to do exactly this: in order to fool the discriminator at a particular z, the encoder must invert the generator at that z, such that E(G(z)) = z.

Theorem 3 The encoder and generator objective given an optimal discriminator, C(E, G) := max_D V(D, E, G), can be rewritten as an l_0 autoencoder loss function

    C(E, G) = E_{x∼p_X}[ 1_{[E(x)∈Ω̂_Z ∧ G(E(x))=x]} log f_EG(x, E(x)) ] + E_{z∼p_Z}[ 1_{[G(z)∈Ω̂_X ∧ E(G(z))=z]} log(1 − f_EG(G(z), z)) ]    (4)

Proof. Given in Appendix A.5.

Here the indicator function 1_{[G(E(x))=x]} in the first term is equivalent to an autoencoder with l_0 loss, while the indicator 1_{[E(G(z))=z]} in the second term shows that the BiGAN encoder must invert the generator, the desired property for feature learning. The objective further encourages the functions E(x) and G(z) to produce valid outputs in the support of P_Z and P_X respectively. Unlike regular autoencoders, the l_0 loss function does not make any assumptions about the structure or distribution of the data itself; in fact, all the structural properties of BiGAN are learned as part of the discriminator.

In practice, as in the GAN framework (Goodfellow et al., 2014), each BiGAN module D, G, and E is a parametric function (with parameters θ_D, θ_G, and θ_E, respectively). As a whole, BiGAN can be optimized using alternating stochastic gradient steps. In one iteration, the discriminator parameters θ_D are updated by taking one or more steps in the positive gradient direction ∇_{θ_D} V(D, E, G), then the encoder parameters θ_E and generator parameters θ_G are together updated by taking a step in the negative gradient direction −∇_{θ_E,θ_G} V(D, E, G). In both cases, the expectation terms of V(D, E, G) are estimated using mini-batches of samples drawn from p_X and p_Z.

Goodfellow et al. (2014) found that an objective in which the real and generated labels Y are swapped provides stronger gradient signal to G. We similarly observed in BiGAN training that an "inverse" objective provides stronger gradient signal to G and E. For efficiency, we also update all modules D, G, and E simultaneously at each iteration, rather than alternating between D updates and G, E updates. See Appendix B for details.
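The following is a minimal PyTorch-style sketch of the training protocol just described: a simultaneous (non-alternating) update in which D ascends V while G and E follow the swapped-label "inverse" objective. The modules are stand-ins with made-up sizes; the Adam settings mirror those reported in Appendix C, but everything else is illustrative.

```python
import torch
import torch.nn as nn

d_x, d_z, h = 784, 50, 1024   # illustrative sizes
G = nn.Sequential(nn.Linear(d_z, h), nn.ReLU(), nn.Linear(h, d_x))
E = nn.Sequential(nn.Linear(d_x, h), nn.ReLU(), nn.Linear(h, d_z))
D = nn.Sequential(nn.Linear(d_x + d_z, h), nn.ReLU(), nn.Linear(h, 1), nn.Sigmoid())

opt_D  = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_GE = torch.optim.Adam(list(G.parameters()) + list(E.parameters()),
                          lr=2e-4, betas=(0.5, 0.999))

def step(x, eps=1e-6):
    z = torch.rand(x.size(0), d_z) * 2 - 1
    d_real = D(torch.cat([x, E(x)], dim=1))
    d_fake = D(torch.cat([G(z), z], dim=1))
    # D maximizes V; G and E minimize the swapped-label ("inverse") objective.
    loss_D  = -(torch.log(d_real + eps) + torch.log(1 - d_fake + eps)).mean()
    loss_GE = -(torch.log(d_fake + eps) + torch.log(1 - d_real + eps)).mean()
    # Simultaneous update of all modules in one pass.
    opt_D.zero_grad()
    loss_D.backward(retain_graph=True, inputs=list(D.parameters()))
    opt_GE.zero_grad()
    loss_GE.backward(inputs=list(G.parameters()) + list(E.parameters()))
    opt_D.step(); opt_GE.step()

step(torch.rand(32, d_x))
```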
"}, {"section_index": "7", "section_name": "3.5 GENERALIZED BIGAN", "section_text": "It is often useful to parametrize the output of the generator G and encoder E in a different, usually smaller, space Ω'_X and Ω'_Z rather than the original Ω_X and Ω_Z. For example, for visual feature learning, the images input to the encoder should be of similar resolution to images used in the evaluation. On the other hand, generating high resolution images remains difficult for current generative models. In this situation, the encoder may take higher resolution input while the generator output and discriminator input remain low resolution.

We generalize the BiGAN objective with functions g_X : Ω_X → Ω'_X and g_Z : Ω_Z → Ω'_Z (a sentence lost in extraction is paraphrased here from the surrounding definitions), with the objective becoming

    E_{x∼p_X}[ E_{z'∼p_E(·|x)}[log D(g_X(x), z')] ] + E_{z∼p_Z}[ E_{x'∼p_G(·|z)}[log(1 − D(x', g_Z(z)))] ]
        = E_{x∼p_X}[log D(g_X(x), E(x))] + E_{z∼p_Z}[log(1 − D(G(z), g_Z(z)))].

An identity g_X(x) = x and g_Z(z) = z (and Ω'_X = Ω_X, Ω'_Z = Ω_Z) yields the original objective. For visual feature learning with higher resolution encoder inputs, g_X is an image resizing function that downsamples a high resolution image x ∈ Ω_X to a lower resolution image x' ∈ Ω'_X, as output by the generator. (g_Z is identity.)

In this case, the encoder and generator respectively induce probability measures P_EX' and P_GZ' over regions R ⊆ Ω' of the joint space Ω' := Ω'_X × Ω'_Z, with P_EX'(R) := ∫_{Ω'_X} ∫_{Ω_X} ∫_{Ω'_Z} p_EX(x, z') 1_{[(x',z')∈R]} δ(g_X(x) − x') dz' dx dx' = ∫_{Ω_X} p_X(x) 1_{[(g_X(x),E(x))∈R]} dx, and P_GZ' defined analogously. For optimal E and G, we can show P_EX' = P_GZ': a generalization of Theorem 1. When E and G are deterministic and optimal, Theorem 2 - that E and G invert one another - can also be generalized: ∃_{z∈Ω̂_Z}{E(x) = g_Z(z) ∧ G(z) = g_X(x)} for P_X-almost every x ∈ Ω̂_X, and ∃_{x∈Ω̂_X}{E(x) = g_Z(z) ∧ G(z) = g_X(x)} for P_Z-almost every z ∈ Ω̂_Z."}, {"section_index": "8", "section_name": "4 EVALUATION", "section_text": "We evaluate the feature learning capabilities of BiGANs by first training them unsupervised as described in Section 3.4, then transferring the encoder's learned feature representations for use in auxiliary supervised learning tasks. To demonstrate that BiGANs are able to learn meaningful feature representations both on arbitrary data vectors, where the model is agnostic to any underlying structure, as well as very high-dimensional and complex distributions, we evaluate on both permutation-invariant MNIST (LeCun et al., 1998) and on the high-resolution natural images of ImageNet (Russakovsky et al., 2015).

In all experiments, each module D, G, and E is a parametric deep (multi-layer) network. The BiGAN discriminator D(x, z) takes data x as its initial input, and at each linear layer thereafter, the latent representation z is transformed using a learned linear transformation to the hidden layer dimension and added to the non-linearity input."}, {"section_index": "9", "section_name": "4.1 BASELINE METHODS", "section_text": "Besides the BiGAN framework presented above, we considered alternative approaches to learning feature representations using different GAN variants.

Discriminator The discriminator D in a standard GAN takes data samples x ∼ p_X as input, making its learned intermediate representations natural candidates as feature representations for related tasks. This alternative is appealing as it requires no additional machinery, and is the approach used for unsupervised feature learning in Radford et al. (2016). On the other hand, it is not clear that the task of distinguishing between real and generated data requires or benefits from intermediate representations that are useful as semantic feature representations. In fact, if G successfully generates the true data distribution p_X(x), D may ignore the input data entirely and predict P(Y = 1) = P(Y = 1|x) = ½ unconditionally, not learning any meaningful intermediate representations.

Latent regressor We consider an alternative encoder training by minimizing a reconstruction loss L(z, E(G(z))), after or jointly during regular GAN training, called a latent regressor or joint latent regressor respectively. We use a sigmoid cross entropy loss as it naturally maps to a uniformly distributed output space. Intuitively, a drawback of this approach is that, unlike the encoder in a BiGAN, the latent regressor encoder E is trained only on generated samples G(z), and never "sees" real data x ∼ p_X. While this may not be an issue in the theoretical optimum where p_G(x) = p_X(x) exactly - i.e., G perfectly generates the data distribution p_X - in practice, for highly complex data distributions p_X, such as the distribution of natural images, the generator will almost never achieve this perfect result. The fact that the real data x are never input to this type of encoder limits its utility as a feature representation for related tasks, as shown later in this section.
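A minimal sketch of the latent regressor baseline just described (module names and sizes are illustrative stand-ins, not the paper's architecture): the encoder is fit to recover z from frozen generator samples with a sigmoid cross-entropy loss, after mapping z ∼ U(−1, 1) to targets in [0, 1].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_x, d_z, h = 784, 50, 1024
G = nn.Sequential(nn.Linear(d_z, h), nn.ReLU(), nn.Linear(h, d_x))  # pretrained in practice
E = nn.Sequential(nn.Linear(d_x, h), nn.ReLU(), nn.Linear(h, d_z))  # predicts logits for z
opt = torch.optim.Adam(E.parameters(), lr=2e-4, betas=(0.5, 0.999))

for _ in range(10):                      # a few illustrative steps
    z = torch.rand(32, d_z) * 2 - 1      # z ~ U(-1, 1)
    with torch.no_grad():
        x_gen = G(z)                     # E only ever sees generated samples, never real x
    target = (z + 1) / 2                 # rescale to [0, 1] for sigmoid cross entropy
    loss = F.binary_cross_entropy_with_logits(E(x_gen), target)
    opt.zero_grad(); loss.backward(); opt.step()
```

Note how the `torch.no_grad()` block makes explicit the property criticized above: the encoder never receives real data x ∼ p_X as input.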
"}, {"section_index": "10", "section_name": "4.2 PERMUTATION-INVARIANT MNIST", "section_text": "We first present results on permutation-invariant MNIST (LeCun et al., 1998). In the permutation-invariant setting, each 28 × 28 digit image must be treated as an unstructured 784D vector (Goodfellow et al., 2013). In our case, this condition is met by designing each module as a multi-layer perceptron (MLP), agnostic to the underlying spatial structure in the data (as opposed to a convnet, for example). See Appendix C.1 for more architectural and training details. We set the latent distribution p_Z = [U(−1, 1)]^50 - a 50D continuous uniform distribution.

Figure 2: Qualitative results for permutation-invariant MNIST BiGAN training, including generator samples G(z), real data x, and corresponding reconstructions G(E(x)).

Table 1 compares the encoding learned by a BiGAN-trained encoder E with the baselines described in Section 4.1, as well as autoencoders (Hinton & Salakhutdinov, 2006) trained directly to minimize either l2 or l1 reconstruction error. The same architecture and optimization algorithm is used across all methods. All methods, including BiGAN, perform at roughly the same level. This result is not overly surprising given the relative simplicity of MNIST digits. For example, digits generated by G in a GAN nearly perfectly match the data distribution (qualitatively), making the latent regressor (LR) baseline method a reasonable choice, as argued in Section 4.1. Qualitative results are presented in Figure 2.

Table 1: One Nearest Neighbors (1NN) classification accuracy (%) on the permutation-invariant MNIST (LeCun et al., 1998) test set in the feature space learned by BiGAN, Latent Regressor (LR), Joint Latent Regressor (JLR), and an autoencoder (AE) using an l1 or l2 distance."}, {"section_index": "11", "section_name": "4.3 IMAGENET", "section_text": "Next, we present results from training BiGANs on ImageNet LSVRC (Russakovsky et al., 2015), a large-scale database of natural images. GANs trained on ImageNet cannot perfectly reconstruct the data, but often capture some interesting aspects. Here, each of D, G, and E is a convnet. In all experiments, the encoder E architecture follows AlexNet (Krizhevsky et al., 2012) through the fifth and last convolution layer (conv5). We also experiment with an AlexNet-based discriminator D. We set the latent distribution p_Z = [U(−1, 1)]^200 - a 200D continuous uniform distribution. Additionally, we experiment with higher resolution encoder input
images - 112 × 112 rather than the 64 × 64 used elsewhere - using the generalization described in Section 3.5. See Appendix C.2 for more architectural and training details.

Qualitative results The convolutional filters learned by each of the three modules are shown in Figure 3. We see that the filters learned by the encoder E have clear Gabor-like structure, similar to those originally reported for the fully supervised AlexNet model (Krizhevsky et al., 2012). The filters also have similar "grouping" structure where one half (the bottom half, in this case) is more color sensitive, and the other half is more edge sensitive. (This separation of the filters occurs due to the AlexNet architecture maintaining two separate filter paths for computational efficiency.)

Figure 3: The convolutional filters learned by the three modules (D, G, and E) of a BiGAN (left, top-middle) trained on the ImageNet (Russakovsky et al., 2015) database. We compare with the filters learned by a discriminator D trained with the same architecture (bottom-middle), as well as the filters reported by Noroozi & Favaro (2016), and by Krizhevsky et al. (2012) for fully supervised ImageNet training (right).

In Figure 4 we present sample generations G(z), as well as real data samples x and their BiGAN reconstructions G(E(x)). The reconstructions, while certainly imperfect, demonstrate empirically that the BiGAN encoder E and generator G learn approximate inverse mappings, as shown theoretically in Theorem 2. In Appendix C.2, we present nearest neighbors in the BiGAN learned feature space.

Figure 4: Qualitative results for ImageNet BiGAN training, including generator samples G(z), real data x, and corresponding reconstructions G(E(x)).

ImageNet classification Following Noroozi & Favaro (2016), we evaluate by freezing the first N layers of our pretrained network and randomly reinitializing and training the remainder fully supervised for ImageNet classification. Results are reported in Table 2.

Model | conv1 | conv2 | conv3 | conv4 | conv5
Random (Noroozi & Favaro, 2016) | 48.5 | 41.0 | 34.8 | 27.1 | 12.0
Wang & Gupta (2015) | 51.8 | 46.9 | 42.8 | 38.8 | 29.8
Doersch et al. (2015) | 53.1 | 47.6 | 48.7 | 45.6 | 30.4
Noroozi & Favaro (2016)* | 57.1 | 56.0 | 52.4 | 48.3 | 38.1
BiGAN (ours) | 56.2 | 54.4 | 49.4 | 43.9 | 33.3
BiGAN, 112 × 112 E (ours) | 55.3 | 53.2 | 49.3 | 44.4 | 34.8

Table 2: Classification accuracy (%) for the ImageNet LSVRC (Russakovsky et al., 2015) validation set with various portions of the network frozen, or reinitialized and trained from scratch, following the evaluation from Noroozi & Favaro (2016). In, e.g., the conv3 column, the first three layers - conv1 through conv3 - are transferred and frozen, and the last layers - conv4, conv5, and fully connected layers - are reinitialized and trained fully supervised for ImageNet classification. BiGAN is competitive with these contemporary visual feature learning methods, despite its generality. (*Results from Noroozi & Favaro (2016) are not directly comparable to those of the other methods as a different base convnet architecture with larger intermediate feature maps is used.)
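A sketch of the evaluation protocol behind Table 2, assuming torchvision's AlexNet as a stand-in for the base architecture; `pretrained_encoder_convs` is a hypothetical list of the BiGAN encoder's Conv2d modules (shapes assumed compatible), and the supervised training loop is omitted.

```python
import torch.nn as nn
from torchvision.models import alexnet

def frozen_transfer_model(pretrained_encoder_convs, n_frozen, num_classes=1000):
    """Freeze the first n_frozen conv layers; reinitialize and train the rest."""
    model = alexnet(num_classes=num_classes)
    convs = [m for m in model.features if isinstance(m, nn.Conv2d)]
    for i, conv in enumerate(convs[:n_frozen]):
        conv.load_state_dict(pretrained_encoder_convs[i].state_dict())
        for p in conv.parameters():
            p.requires_grad = False     # transferred and frozen
    # Remaining convs and all FC layers keep their fresh random initialization
    # and are trained fully supervised for ImageNet classification.
    return model
```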
VOC classification, detection, and segmentation We evaluate the transferability of BiGAN representations to the PASCAL VOC (Everingham et al., 2014) computer vision benchmark tasks, including classification, object detection, and semantic segmentation. The classification task involves simple binary prediction of presence or absence in a given image for each of 20 object categories. The object detection and semantic segmentation tasks go a step further by requiring the objects to be localized, with semantic segmentation requiring this at the finest scale: pixelwise prediction of object identity. For detection, the pretrained model is used as the initialization for Fast R-CNN (Girshick, 2015) (FRCN) training; and for semantic segmentation, the model is used as the initialization for Fully Convolutional Network (Long et al., 2015) (FCN) training, in each case replacing the AlexNet (Krizhevsky et al., 2012) model trained fully supervised for ImageNet classification. We report results on each of these tasks in Table 3, comparing BiGANs with contemporary approaches to unsupervised (Krähenbühl et al., 2016) and self-supervised (Doersch et al., 2015; Agrawal et al., 2015; Wang & Gupta, 2015; Pathak et al., 2016) feature learning in the visual domain, as well as the baselines discussed in Section 4.1."}, {"section_index": "12", "section_name": "4.4 DISCUSSION", "section_text": "Despite making no assumptions about the underlying structure of the data, the BiGAN unsupervised feature learning framework offers a representation competitive with existing self-supervised and even weakly supervised feature learning approaches for visual feature learning, while still being a purely generative model with the ability to sample data x and predict latent representation z. Furthermore, BiGANs outperform the discriminator (D) and latent regressor (LR) baselines discussed in Section 4.1, confirming our intuition that these approaches may not perform well in the regime of highly complex data distributions such as that of natural images. The version in which the encoder takes a higher resolution image than output by the generator (BiGAN 112 × 112 E) performs better still, and this strategy is not possible under the LR and D baselines as each of those modules takes generator outputs as their input.

Although existing self-supervised approaches have shown impressive performance and thus far tended to outshine purely unsupervised approaches in the complex domain of high-resolution images, purely unsupervised approaches to feature learning or pre-training have several potential benefits.

Model | Classification (% mAP): fc8 | fc6-8 | all | FRCN Detection (% mAP): all | FCN Segmentation (% mIU): all
sup. ImageNet (Krizhevsky et al., 2012) | 77.0 | 78.8 | 78.3 | 56.8 | 48.0
self-sup. Agrawal et al. (2015) | 31.2 | 31.0 | 54.2 | 43.9 | -
self-sup. Pathak et al. (2016) | 30.5 | 34.6 | 56.5 | 44.5 | 30.0
self-sup. Wang & Gupta (2015) | 28.4 | 55.6 | 63.1 | 47.4 | -
self-sup. Doersch et al. (2015) | 44.7 | 55.1 | 65.3 | 51.1 | -
unsup. k-means (Krähenbühl et al., 2016) | 32.0 | 39.2 | 56.6 | 45.6 | 32.6
unsup. Discriminator (D) | 30.7 | 40.5 | 56.4 | - | -
unsup. Latent Regressor (LR) | 36.9 | 47.9 | 57.1 | - | -
unsup. Joint LR | 37.1 | 47.9 | 56.5 | - | -
unsup. Autoencoder (l2) | 24.8 | 16.0 | 53.8 | 41.9 | -
unsup. BiGAN (ours) | 37.5 | 48.7 | 58.9 | 46.2 | 34.9
unsup. BiGAN, 112 × 112 E (ours) | 40.7 | 52.3 | 60.1 | 46.9 | 35.2

Table 3: Classification and Fast R-CNN (Girshick, 2015) detection results for the PASCAL VOC 2007 (Everingham et al., 2014) test set, and FCN (Long et al., 2015) segmentation results on the PASCAL VOC 2012 validation set, under the standard mean average precision (mAP) or mean intersection over union (mIU) metrics for each task. Classification models are trained with various portions of the AlexNet (Krizhevsky et al., 2012) model frozen. In the fc8 column, only the linear classifier (a multinomial logistic regression) is learned - in the case of BiGAN, on top of randomly initialized fully connected (FC) layers fc6 and fc7. In the fc6-8 column, all three FC layers are trained fully supervised with all convolution layers frozen. Finally, in the all column, the entire network is "fine-tuned". BiGAN outperforms other unsupervised (unsup.) feature learning approaches, including the GAN-based baselines described in Section 4.1, and despite its generality, is competitive with contemporary self-supervised (self-sup.) feature learning approaches specific to the visual domain.

BiGAN and other unsupervised learning approaches are agnostic to the domain of the data. The self-supervised approaches are specific to the visual domain, in some cases requiring weak supervision from video unavailable in images alone.
For example, the methods are not applicable in the permutation-invariant MNIST setting explored in Section 4.2, as the data are treated as flat vectors rather than 2D images.

Furthermore, BiGAN and other unsupervised approaches needn't suffer from domain shift between the pre-training task and the transfer task, unlike self-supervised methods in which some aspect of the data is normally removed or corrupted in order to create a non-trivial prediction task. In the context prediction task (Doersch et al., 2015), the network sees only small image patches - the global image structure is unobserved. In the context encoder or inpainting task (Pathak et al., 2016), each image is corrupted by removing large areas to be filled in by the prediction network, creating inputs with dramatically different appearance from the uncorrupted natural images seen in the transfer tasks.

Other approaches (Agrawal et al., 2015; Wang & Gupta, 2015) rely on auxiliary information unavailable in the static image domain, such as video, egomotion, or tracking. Unlike BiGAN, such approaches cannot learn feature representations from unlabeled static images.

We finally note that the results presented here constitute only a preliminary exploration of the space of model architectures possible under the BiGAN framework, and we expect results to improve significantly with advancements in generative image models and discriminative convolutional networks alike."}, {"section_index": "13", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors thank Evan Shelhamer, Jonathan Long, and other Berkeley Vision labmates for helpful discussions throughout this work. This work was supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Artificial Intelligence Research laboratory. The GPUs used for this work were donated by NVIDIA."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Pulkit Agrawal, Joao Carreira, and Jitendra Malik. Learning to see by moving. In ICCV, 2015.

Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.

Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv:1606.00704, 2016.

Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes challenge: A retrospective. IJCV, 2014.

Ross Girshick. Fast R-CNN. In ICCV, 2015.

Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.

Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In ICML, 2013.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, 2013.

Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Philipp Krähenbühl, Carl Doersch, Jeff Donahue, and Trevor Darrell. Data-dependent initializations of convolutional neural networks. In ICLR, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 1998.

Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.

Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013.

Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.

Ali Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In CVPR Workshops, 2014.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688, 2016.

Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.

Proposition 1 For any E and G, the optimal discriminator D*_EG := argmax_D V(D, E, G) is the Radon-Nikodym derivative f_EG := dP_EX / d(P_EX + P_GZ) of measure P_EX with respect to measure P_EX + P_GZ.

Proof. For measures P and Q on space Ω, with P absolutely continuous with respect to Q, the RN derivative f_PQ := dP/dQ exists, and we have

    E_{x∼P}[g(x)] = ∫_Ω g dP = ∫_Ω g (dP/dQ) dQ = ∫_Ω g f_PQ dQ = E_{x∼Q}[f_PQ(x) g(x)].

P_EX and P_GZ are each absolutely continuous with respect to P_EG := (P_EX + P_GZ)/2, so the RN derivatives f_EG := dP_EX / d(P_EX + P_GZ) and f_GE := dP_GZ / d(P_EX + P_GZ) exist and sum to one:

    f_EG + f_GE = dP_EX / d(P_EX + P_GZ) + dP_GZ / d(P_EX + P_GZ) = d(P_EX + P_GZ) / d(P_EX + P_GZ) = 1.    (5)

Using this change of measure, the objective can be rewritten as

    V(D, E, G) = E_{(x,z)∼P_EX}[log D(x, z)] + E_{(x,z)∼P_GZ}[log(1 − D(x, z))]
               = 2 E_{(x,z)∼P_EG}[f_EG(x, z) log D(x, z) + f_GE(x, z) log(1 − D(x, z))]
               = 2 E_{(x,z)∼P_EG}[f_EG(x, z) log D(x, z) + (1 − f_EG(x, z)) log(1 − D(x, z))],

which is maximized pointwise at D = f_EG.

Proposition 2 The encoder and generator's objective for an optimal discriminator, C(E, G) := max_D V(D, E, G) = V(D*_EG, E, G), can be rewritten in terms of the Jensen-Shannon divergence between measures P_EX and P_GZ as C(E, G) = 2 D_JS(P_EX ‖ P_GZ) − log 4.

Proof.
Using Proposition 1 along with (5), f_GE = 1 − f_EG, we rewrite the objective:

    C(E, G) = max_D V(D, E, G) = V(D*_EG, E, G)
            = E_{(x,z)∼P_EX}[log D*_EG(x, z)] + E_{(x,z)∼P_GZ}[log(1 − D*_EG(x, z))]
            = E_{(x,z)∼P_EX}[log f_EG(x, z)] + E_{(x,z)∼P_GZ}[log f_GE(x, z)]
            = E_{(x,z)∼P_EX}[log(2 f_EG(x, z))] + E_{(x,z)∼P_GZ}[log(2 f_GE(x, z))] − log 4
            = D_KL(P_EX ‖ P_EG) + D_KL(P_GZ ‖ P_EG) − log 4
            = D_KL(P_EX ‖ (P_EX + P_GZ)/2) + D_KL(P_GZ ‖ (P_EX + P_GZ)/2) − log 4
            = 2 D_JS(P_EX ‖ P_GZ) − log 4.

While Theorem 1 and Propositions 1 and 2 hold for any encoder p_E(z|x) and generator p_G(x|z), stochastic or deterministic, Theorems 2 and 3 assume the encoder E and generator G are deterministic functions; i.e., with conditionals p_E(z|x) = δ(z − E(x)) and p_G(x|z) = δ(x − G(z)) defined as functions.

For use in the proofs of those theorems, we simplify the definitions of measures P_EX and P_GZ given in Section 3 for the case of deterministic functions E and G below:

    P_EX(R) = ∫_{Ω_X} p_X(x) ∫_{Ω_Z} p_E(z|x) 1_{[(x,z)∈R]} dz dx = ∫_{Ω_X} p_X(x) ∫_{Ω_Z} δ(z − E(x)) 1_{[(x,z)∈R]} dz dx = ∫_{Ω_X} p_X(x) 1_{[(x,E(x))∈R]} dx
    P_GZ(R) = ∫_{Ω_Z} p_Z(z) ∫_{Ω_X} p_G(x|z) 1_{[(x,z)∈R]} dx dz = ∫_{Ω_Z} p_Z(z) ∫_{Ω_X} δ(x − G(z)) 1_{[(x,z)∈R]} dx dz = ∫_{Ω_Z} p_Z(z) 1_{[(G(z),z)∈R]} dz

A.4 PROOF OF THEOREM 2 (OPTIMAL GENERATOR AND ENCODER ARE INVERSES)

Theorem 2 If E and G are an optimal encoder and generator, then E = G⁻¹ almost everywhere; that is, G(E(x)) = x for P_X-almost every x ∈ Ω̂_X, and E(G(z)) = z for P_Z-almost every z ∈ Ω̂_Z.

Proof. Let R⁰_X := {x ∈ Ω̂_X : x ≠ G(E(x))} be the region of Ω̂_X in which the inversion property x = G(E(x)) does not hold, and let R⁰ := {(x, E(x)) : x ∈ R⁰_X} be the corresponding region of the joint space Ω. We will show that, for optimal E and G, R⁰_X has measure zero under P_X (i.e., P_X(R⁰_X) = 0) and therefore x = G(E(x)) holds P_X-almost everywhere. By Theorem 1, an optimal E and G give P_EX = P_GZ, so

    P_X(R⁰_X) = ∫_{Ω_X} p_X(x) 1_{[(x,E(x))∈R⁰]} dx = P_EX(R⁰) = P_GZ(R⁰)
              = ∫_{Ω_Z} p_Z(z) 1_{[(G(z),z)∈R⁰]} dz
              = ∫_{Ω_Z} p_Z(z) 1_{[z=E(G(z)) ∧ G(z)∈R⁰_X]} dz
              = ∫_{Ω_Z} p_Z(z) 1_{[z=E(G(z)) ∧ G(z)≠G(E(G(z)))]} dz
              = 0,

since the indicator is 0 for any z, as z = E(G(z)) ⟹ G(z) = G(E(G(z))). Hence region R⁰_X has measure zero (P_X(R⁰_X) = 0), and the inversion property x = G(E(x)) holds P_X-almost everywhere.

An analogous argument shows that R⁰_Z := {z ∈ Ω̂_Z : z ≠ E(G(z))} has measure zero under P_Z (i.e., P_Z(R⁰_Z) = 0) and therefore z = E(G(z)) holds P_Z-almost everywhere.

As shown in Proposition 2 (Section 3), the BiGAN objective is equivalent to the Jensen-Shannon divergence between P_EX and P_GZ. We now go a step further and show that this Jensen-Shannon divergence is closely related to a standard autoencoder loss. Omitting the scale factor, a KL divergence term of the Jensen-Shannon divergence is given as

    D_KL(P_EX ‖ (P_EX + P_GZ)/2) = log 2 + ∫_Ω log( dP_EX / d(P_EX + P_GZ) ) dP_EX = log 2 + ∫_Ω log f dP_EX,

where we abbreviate f := f_EG = dP_EX / d(P_EX + P_GZ) ∈ [0, 1], defined in Proposition 1, for most of this proof, and define

    F(R) := ∫_R log f dP_EX

for regions R ⊆ Ω. Next we will show that f > 0 holds P_EX-almost everywhere, and hence F is always well defined and finite. We then show that F is equivalent to an autoencoder-like reconstruction loss function.

Proposition 3 (a statement lost in extraction, reconstructed here) f > 0 P_EX-almost everywhere. Proof: for R_{f=0} := {(x, z) ∈ Ω : f(x, z) = 0}, the definition of the RN derivative gives P_EX(R_{f=0}) = ∫_{R_{f=0}} f d(P_EX + P_GZ) = 0.

Proposition 3 ensures that log f is defined P_EX-almost everywhere, and F(R) is well-defined.
Next we will show that F(R) mimics an autoencoder with l_0 loss, meaning F is zero for any region in which G(E(x)) ≠ x, and non-zero otherwise.

Proposition 4 F(R_S) = 0 for R_S := Ω \ supp(P_GZ). We'll first show that in region R_S, we have f = 1 P_EX-almost everywhere. Let R_{f<1} := {(x, z) ∈ R_S : f(x, z) < 1} be the region of R_S in which f < 1. Let's assume that P_EX(R_{f<1}) > 0 has non-zero measure. Then, using the definition of the Radon-Nikodym derivative,

    P_EX(R_{f<1}) = ∫_{R_{f<1}} f d(P_EX + P_GZ) = ∫_{R_{f<1}} f dP_EX + ∫_{R_{f<1}} f dP_GZ ≤ ε P_EX(R_{f<1}) + 0 < P_EX(R_{f<1}),

where ε is a constant smaller than 1 bounding f on the region, and the second integral vanishes since R_{f<1} lies outside supp(P_GZ). But P_EX(R_{f<1}) < P_EX(R_{f<1}) is a contradiction; hence P_EX(R_{f<1}) = 0 and f = 1 P_EX-almost everywhere in R_S, implying log f = 0 P_EX-almost everywhere in R_S. Hence F(R_S) = 0.

Proposition 5 f < 1 P_EX-almost everywhere in R_1 := supp(P_EX) ∩ supp(P_GZ). Let R_{f=1} := {(x, z) ∈ R_1 : f(x, z) = 1} be the region in which f = 1. Let's assume the set R_{f=1} ≠ ∅ is not empty. By definition of the support, P_EX(R_{f=1}) > 0 and P_GZ(R_{f=1}) > 0. The Radon-Nikodym derivative on R_{f=1} is then given by

    P_EX(R_{f=1}) = ∫_{R_{f=1}} 1 d(P_EX + P_GZ) = P_EX(R_{f=1}) + P_GZ(R_{f=1}),

which implies P_GZ(R_{f=1}) = 0, a contradiction.

Theorem 3 The encoder and generator objective given an optimal discriminator, C(E, G) := max_D V(D, E, G), can be rewritten as an l_0 autoencoder loss function

    C(E, G) = E_{x∼p_X}[ 1_{[E(x)∈Ω̂_Z ∧ G(E(x))=x]} log f_EG(x, E(x)) ] + E_{z∼p_Z}[ 1_{[G(z)∈Ω̂_X ∧ E(G(z))=z]} log(1 − f_EG(G(z), z)) ]

with log f_EG ∈ (−∞, 0) and log(1 − f_EG) ∈ (−∞, 0) P_EX-almost and P_GZ-almost everywhere.

Proof. Proposition 4 (F(Ω \ supp(P_GZ)) = 0) and F(Ω \ supp(P_EX)) = 0 imply that R_1 := supp(P_EX) ∩ supp(P_GZ) is the only region of Ω where F may be non-zero; hence F(Ω) = F(R_1). Then

    D_KL(P_EX ‖ (P_EX + P_GZ)/2) − log 2 = F(Ω) = F(R_1)
        = ∫_{R_1} log f(x, z) dP_EX
        = ∫_Ω 1_{[(x,z)∈R_1]} log f(x, z) dP_EX
        = E_{(x,z)∼P_EX}[1_{[(x,z)∈R_1]} log f(x, z)]
        = E_{x∼p_X}[1_{[(x,E(x))∈R_1]} log f(x, E(x))]
        = E_{x∼p_X}[1_{[E(x)∈Ω̂_Z ∧ G(E(x))=x]} log f(x, E(x))].

An analogous argument (along with the fact that f_EG + f_GE = 1) lets us rewrite the other KL divergence term:

    D_KL(P_GZ ‖ (P_EX + P_GZ)/2) − log 2 = E_{z∼p_Z}[1_{[G(z)∈Ω̂_X ∧ E(G(z))=z]} log f_GE(G(z), z)]
        = E_{z∼p_Z}[1_{[G(z)∈Ω̂_X ∧ E(G(z))=z]} log(1 − f_EG(G(z), z))].

The Jensen-Shannon divergence is the mean of these two KL divergences, giving C(E, G):

    C(E, G) = 2 D_JS(P_EX ‖ P_GZ) − log 4
            = D_KL(P_EX ‖ (P_EX + P_GZ)/2) + D_KL(P_GZ ‖ (P_EX + P_GZ)/2) − log 4
            = E_{x∼p_X}[1_{[E(x)∈Ω̂_Z ∧ G(E(x))=x]} log f_EG(x, E(x))] + E_{z∼p_Z}[1_{[G(z)∈Ω̂_X ∧ E(G(z))=z]} log(1 − f_EG(G(z), z))].

Finally, with Propositions 3 and 5, we have f ∈ (0, 1) P_EX-almost everywhere in R_1, and therefore log f ∈ (−∞, 0), taking a finite and strictly negative value P_EX-almost everywhere.

In this section we provide additional details on the BiGAN learning protocol summarized in Section 3.4. Goodfellow et al. (2014) found for GAN training that an objective in which the real and generated labels Y are swapped provides stronger gradient signal to G. We similarly observed in BiGAN training that an "inverse" objective Λ (with the same fixed point characteristics as V) provides stronger gradient signal to G and E, where

    Λ(D, G, E) = E_{x∼p_X}[ E_{z∼p_E(·|x)}[log(1 − D(x, z))] ] + E_{z∼p_Z}[ E_{x∼p_G(·|z)}[log D(x, z)] ]
               = E_{x∼p_X}[log(1 − D(x, E(x)))] + E_{z∼p_Z}[log D(G(z), z)].

In practice, θ_G and θ_E are updated by moving in the positive gradient direction of this inverse objective, ∇_{θ_E,θ_G} Λ, rather than the negative gradient direction of the original objective.

We also observed that learning behaved similarly when all parameters θ_D, θ_G, θ_E were updated simultaneously at each iteration rather than alternating between θ_D updates and θ_G, θ_E updates, so we took the simultaneous updating (non-alternating) approach for computational efficiency.
(For standard GAN training, simultaneous updates of θ_D, θ_G performed similarly well, so our standard GAN experiments also follow this protocol.)"}, {"section_index": "15", "section_name": "APPENDIX C MODEL AND TRAINING DETAILS", "section_text": "In the following sections we present additional details on the models and training protocols used in the permutation-invariant MNIST and ImageNet evaluations presented in Section 4.

Optimization For unsupervised training of BiGANs and baseline methods, we use the Adam optimizer (Kingma & Ba, 2015) to compute parameter updates, following the hyperparameters (initial step size α = 2 × 10⁻⁴, momentum β₁ = 0.5 and β₂ = 0.999) used by Radford et al. (2016). The step size is decayed exponentially to α = 2 × 10⁻⁶ starting halfway through training. The mini-batch size is 128. l2 weight decay of 2.5 × 10⁻⁵ is applied to all multiplicative weights in linear layers (but not to the learned bias β or scale γ parameters applied after batch normalization). Weights are initialized from a zero-mean normal distribution with a standard deviation of 0.02, with one notable exception: BiGAN discriminator weights that directly multiply z inputs to be added to spatial convolution outputs have initializations scaled by the convolution kernel size - e.g., for a 5 × 5 kernel, weights are initialized with a standard deviation of 0.5, 25 times the standard initialization.

Software & hardware We implement BiGANs and baseline feature learning methods using the Theano (Theano Development Team, 2016) framework, based on the convolutional GAN implementation provided by Radford et al. (2016). ImageNet transfer learning experiments (Section 4.3) use the Caffe (Jia et al., 2014) framework, per the Fast R-CNN (Girshick, 2015) and FCN (Long et al., 2015) reference implementations. Most computation is performed on an NVIDIA Titan X or Tesla K40 GPU.

In all permutation-invariant MNIST experiments (Section 4.2), D, G, and E each consist of two hidden layers with 1024 units. The first hidden layer is followed by a non-linearity; the second is followed by (parameter-free) batch normalization (Ioffe & Szegedy, 2015) and a non-linearity. The second hidden layer in each case is the input to a linear prediction layer of the appropriate size. In D and E, a leaky ReLU (Maas et al., 2013) non-linearity with a "leak" of 0.2 is used; in G, a standard ReLU non-linearity is used. All models are trained for 400 epochs."}, {"section_index": "16", "section_name": "C.2 IMAGENET", "section_text": "In all ImageNet experiments (Section 4.3), the encoder E architecture follows AlexNet (Krizhevsky et al., 2012) through the fifth and last convolution layer (conv5), with local response normalization (LRN) layers removed and batch normalization (Ioffe & Szegedy, 2015) (including the learned scaling and bias) with leaky ReLU non-linearity applied to the output of each convolution at unsupervised training time. (For supervised evaluation, batch normalization is not used, and the pre-trained scale and bias is merged into the preceding convolution's weights and bias.)

The sole exception is our discriminator baseline feature learning experiment, in which we let the discriminator D be the AlexNet variant described above. Generally, using AlexNet (or a similar convnet architecture) as the discriminator D is detrimental to the visual fidelity of the resulting generated images, likely due to the relatively large convolutional filter kernel size applied to the input image, as
well as the max-pooling layers, which explicitly discard information in the input. However, for fair comparison of the discriminator's feature learning abilities with those of BiGANs, we use the same architecture as used in the BiGAN encoder.

Preprocessing To produce a data sample x, we first sample an image from the database, and resize it proportionally such that its shorter edge has a length of 72 pixels. Then, a 64 × 64 crop is randomly selected from the resized image. The crop is flipped horizontally with probability ½. Finally, the crop is scaled to [−1, 1], giving the sample x.

In most experiments, both the discriminator D and generator G architecture are those used by Radford et al. (2016), consisting of a series of four 5 × 5 convolutions (or "deconvolutions" - fractionally strided convolutions - for the generator G) applied with 2 pixel stride, each followed by batch normalization and a rectified non-linearity.

Figure 5: For the query images used in Krähenbühl et al. (2016) (left), nearest neighbors (by minimum cosine distance) from the ImageNet LSVRC (Russakovsky et al., 2015) training set in the fc6 feature space of the ImageNet-trained BiGAN encoder E. (The fc6 weights are set randomly; this space is a random projection of the learned conv5 feature space.)

Timing A single epoch (one training pass over the 1.2 million images) of BiGAN training takes roughly 40 minutes on a Titan X GPU. Models are trained for 100 epochs, for a total training time of under 3 days.

Nearest neighbors In Figure 5 we present nearest neighbors in the feature space of the BiGAN encoder E learned in unsupervised ImageNet training."}]
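A minimal NumPy sketch of the retrieval protocol behind Figure 5 (not from the paper): rank training images by cosine distance to a query in the fc6 feature space. The feature arrays here are random stand-ins; in practice they would come from the trained encoder.

```python
import numpy as np

def nearest_neighbors(query_feat, train_feats, k=4):
    """Rank training images by cosine distance to a query in feature space."""
    q = query_feat / np.linalg.norm(query_feat)
    T = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    cosine_dist = 1.0 - T @ q
    return np.argsort(cosine_dist)[:k]      # indices of the k nearest images

# Illustrative stand-ins for fc6 features (4096-D in AlexNet).
rng = np.random.default_rng(0)
train_feats = rng.standard_normal((10000, 4096))
query_feat = rng.standard_normal(4096)
print(nearest_neighbors(query_feat, train_feats))
```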
ByldLrqlx
[{"section_index": "0", "section_name": "DEEPCODER: LEARNING TO WRITE PROGRAMS", "section_text": "Matej Balog*
Department of Engineering, University of Cambridge
Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, Daniel Tarlow
*Also affiliated with Max-Planck Institute for Intelligent Systems, Tübingen, Germany. Work done while the author was an intern at Microsoft Research."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A dream of artificial intelligence is to build systems that can write computer programs. Recently, there has been much interest in program-like neural network models (Graves et al., 2014; Weston et al., 2015; Kurach et al., 2015; Joulin & Mikolov, 2015; Grefenstette et al., 2015; Sukhbaatar et al., 2015; Neelakantan et al., 2016; Kaiser & Sutskever, 2016; Reed & de Freitas, 2016; Zaremba et al., 2016; Graves et al., 2016), but none of these can write programs; that is, they do not generate human-readable source code. Only very recently, Riedel et al. (2016), Bunel et al. (2016), and Gaunt et al. (2016) explored the use of gradient descent to induce source code from input-output examples via differentiable interpreters, and Ling et al. (2016) explored the generation of source code from unstructured text descriptions. However, Gaunt et al. (2016) showed that differentiable interpreter-based program induction is inferior to discrete search-based techniques used by the programming languages community. We are then left with the question of how to make progress on program induction using machine learning techniques.

In this work, we propose two main ideas: (1) learn to induce programs; that is, use a corpus of program induction problems to learn strategies that generalize across problems, and (2) integrate neural network architectures with search-based techniques rather than replace them.

In more detail, we can contrast our approach to existing work on differentiable interpreters. In differentiable interpreters, the idea is to define a differentiable mapping from source code and input to outputs. After observing inputs and outputs, gradient descent can be used to search for a program that matches the input-output examples. This approach leverages gradient-based optimization which has proven powerful for training neural networks, but each synthesis problem is still solved independently - solving many synthesis problems does not help to solve the next problem.

We argue that machine learning can provide significant value towards solving Inductive Program Synthesis (IPS) by re-casting the problem as a big data problem. We show that training a neural network on a large number of generated IPS problems to predict cues from the problem description can help a search-based technique. In this work, we focus on predicting an order on the program space and show how to use it to guide search-based techniques that are common in the programming languages community. This approach has three desirable properties: first, we transform a difficult search problem into a supervised learning problem; second, we soften the effect of failures of the neural network by searching over program space rather than relying on a single prediction; and third, the neural network's predictions are used to guide existing program synthesis systems, allowing us to use and improve on the best solvers from the programming languages community.
Daniel Tarlow."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning. The approach is to train a neural network to predict properties of the program that generated the outputs from the inputs. We use the neural network's predictions to augment search techniques from the programming languages community, including enumerative search and an SMT-based solver. Empirically, we show that our approach leads to an order of magnitude speedup over the strong non-augmented baselines and a Recurrent Neural Network approach, and that we are able to solve problems of difficulty comparable to the simplest problems on programming competition websites.

In summary, we define and instantiate a framework for using deep learning for program synthesis problems like ones appearing on programming competition websites. Our concrete contributions are:

1. defining a programming language that is expressive enough to include real-world programming problems while being high-level enough to be predictable from input-output examples;
2. models for mapping sets of input-output examples to program properties; and
3. experiments that show an order of magnitude speedup over standard program synthesis techniques, which makes this approach feasible for solving problems of similar difficulty as the simplest problems that appear on programming competition websites.

We begin by providing background on Inductive Program Synthesis, including a brief overview of how it is typically formulated and solved in the programming languages community.

The Inductive Program Synthesis (IPS) problem is the following: given input-output examples, produce a program that has behavior consistent with the examples.

Building an IPS system requires solving two problems. First, the search problem: to find consistent programs we need to search over a suitable set of possible programs. We need to define the set (i.e., the program space) and search procedure. Second, the ranking problem: if there are multiple programs consistent with the input-output examples, which one do we return? Both of these problems are dependent on the specifics of the problem formulation. Thus, the first important decision in formulating an approach to program synthesis is the choice of a Domain Specific Language.

Domain Specific Languages (DSLs). DSLs are programming languages that are suitable for a specialized domain but are more restrictive than full-featured programming languages. For example, one might disallow loops or other control flow, and only allow string data types and a small number of primitive operations like concatenation. Most of program synthesis research focuses on synthesizing programs in DSLs, because full-featured languages like C++ enlarge the search space and complicate synthesis. Restricted DSLs can also enable more efficient special-purpose search algorithms. For example, if a DSL only allows concatenations of substrings of an input string, a dynamic programming algorithm can efficiently search over all possible programs (Polozov & Gulwani, 2015). The choice of DSL also affects the difficulty of the ranking problem. For example, in a DSL without if statements, the same algorithm is applied to all inputs, reducing the number of programs consistent with any set of input-output examples, and thus the ranking problem becomes easier. Of course, the restrictiveness of the chosen DSL also determines which problems the system can solve at all.
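As a concrete illustration of how a restricted DSL enables special-purpose search, the following is a minimal sketch (ours, not from the paper) of the dynamic program alluded to above: it decides whether an output string can be produced by any program in a toy DSL that only concatenates substrings of the input.

```python
def expressible(inp: str, out: str) -> bool:
    """True iff `out` can be written as a concatenation of (non-empty)
    substrings of `inp`, i.e. iff some program in the toy DSL produces it."""
    substrings = {inp[i:j] for i in range(len(inp)) for j in range(i + 1, len(inp) + 1)}
    reachable = [False] * (len(out) + 1)
    reachable[0] = True  # the empty prefix is trivially producible
    for j in range(1, len(out) + 1):
        # out[:j] is producible if some producible prefix out[:i] can be
        # extended by a single substring of the input.
        reachable[j] = any(reachable[i] and out[i:j] in substrings
                           for i in range(j))
    return reachable[-1]

assert expressible("hello world", "world hello")
assert not expressible("abc", "abd")
```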
Search Techniques. There are many techniques for searching for programs consistent with input-output examples. Perhaps the simplest approach is to define a grammar and then enumerate all derivations of the grammar, checking each one for consistency with the examples. This approach can be combined with pruning based on types and other logical reasoning (Feser et al., 2015). While simple, these approaches can be implemented efficiently, and they can be surprisingly effective.

In restricted domains such as the concatenation example discussed above, special-purpose algorithms can be used. FlashMeta (Polozov & Gulwani, 2015) describes a framework for DSLs which allows decomposition of the search problem, e.g., where the production of an output string from an input string can be reduced to finding a program for producing the first part of the output and concatenating it with a program for producing the latter part of the output string.

Another class of systems is based on Satisfiability Modulo Theories (SMT) solving. SMT combines SAT-style search with theories like arithmetic and inequalities, with the benefit that theory-dependent subproblems can be handled by special-purpose solvers. For example, a special-purpose solver can easily find integers x, y such that x < y and y < 100 hold, whereas an enumeration strategy may need to consider many values before satisfying the constraints. Many program synthesis engines based on SMT solvers exist, e.g., Sketch (Solar-Lezama, 2008) and Brahma (Gulwani et al., 2011). They convert the semantics of a DSL into a set of constraints between variables representing the program and the input-output values, and then call an SMT solver to find a satisfying setting of the program variables. This approach shines when special-purpose reasoning can be leveraged, but complex DSLs can lead to very large constraint problems where constructing and manipulating the constraints can be a lot slower than an enumerative approach.
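To make the SMT idea concrete, here is a minimal sketch (ours, not from the paper) of the toy constraint x < y, y < 100 posed to the Z3 solver via its Python bindings:

```python
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.add(x < y, y < 100)          # the theory constraints
if s.check() == sat:           # a theory solver finds a model directly,
    m = s.model()              # without enumerating candidate values
    print(m[x], m[y])          # e.g. 0 99 (any satisfying pair is valid)
```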
Finally, stochastic local search can be employed to search over program space, and there is a long history of applying genetic algorithms to this problem. One of the most successful recent examples is the STOKE super-optimization system (Schkufza et al., 2016), which uses stochastic local search to find assembly programs that have the same semantics as an input program but execute faster.

Ranking. While we focus on the search problem in this work, we briefly mention the ranking problem here. A popular choice for ranking is to choose the shortest program consistent with input-output examples (Gulwani, 2016). A more sophisticated approach is employed by FlashFill (Singh & Gulwani, 2015). It works in a manner similar to max-margin structured prediction, where known ground truth programs are given, and the learning task is to assign scores to programs such that the ground truth programs score higher than other programs that satisfy the input-output specification.

In this section we outline the general approach that we follow in this work, which we call Learning Inductive Program Synthesis (LIPS). The details of our instantiation of LIPS appear in Sect. 4. The components of LIPS are (1) a DSL specification, (2) a data-generation procedure, (3) a machine learning model that maps from input-output examples to program attributes, and (4) a search procedure that searches program space in an order guided by the model from (3). The framework is related to the formulation of Menon et al. (2013); the relationship and key differences are discussed in Sect. 6.

(1) DSL and Attributes. The choice of DSL is important in LIPS, just as it is in any program synthesis system. It should be expressive enough to capture the problems that we wish to solve, but restricted as much as possible to limit the difficulty of the search. In LIPS we additionally specify an attribute function A that maps programs P of the DSL to finite attribute vectors a = A(P). (Attribute vectors of different programs need not have equal length.) Attributes serve as the link between the machine learning and the search component of LIPS: the machine learning model predicts a distribution q(a | E), where E is the set of input-output examples, and the search procedure aims to search over programs P as ordered by q(A(P) | E). Thus an attribute is useful if it is both predictable from input-output examples, and if conditioning on its value significantly reduces the effective size of the search space.

Possible attributes are the (perhaps position-dependent) presence or absence of high-level functions (e.g., does the program contain or end in a call to SORT). Other possible attributes include control flow templates (e.g., the number of loops and conditionals). In the extreme case, one may set A to the identity function, in which case the attribute is equivalent to the program; however, in our experiments we find that performance is improved by choosing a more abstract attribute function.

(2) Data Generation. Step 2 is to generate a dataset {(P(n), a(n), E(n))}, n = 1, ..., N, of programs P(n) in the chosen DSL, their attributes a(n), and accompanying input-output examples E(n). Different approaches are possible, ranging from enumerating valid programs in the DSL and pruning, to training a more sophisticated generative model of programs in the DSL. The key in the LIPS formulation is to ensure that it is feasible to generate a large dataset (ideally millions of programs).

(3) Machine Learning Model. The machine learning problem is to learn a distribution of attributes given input-output examples, q(a | E). There is freedom to explore a large space of models, so long as the input component can encode E, and the output is a proper distribution over attributes (e.g., if attributes are a fixed-size binary vector, then a neural network with independent sigmoid outputs is appropriate; if attributes are variable size, then a recurrent neural network output could be used). Attributes are observed at training time, so training can use a maximum likelihood objective.

(4) Search. The aim of the search component is to interface with an existing solver, using the predicted q(a | E) to guide the search. We describe specific approaches in the next section.
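For the attribute functions discussed in (1), a sketch of the simplest choice, a binary vector marking which DSL functions occur in a program, might look as follows (our own illustration; `DSL_FUNCTIONS` here is a stand-in subset of the full vocabulary of Sect. 4.1):

```python
DSL_FUNCTIONS = ["HEAD", "LAST", "TAKE", "DROP", "SORT", "REVERSE",
                 "SUM", "MAP", "FILTER", "COUNT", "ZIPWITH", "SCANL1"]

def attributes(program_lines):
    """A(P): binary vector marking which DSL functions appear in P."""
    used = {tok for line in program_lines for tok in line.split()}
    return [1 if f in used else 0 for f in DSL_FUNCTIONS]

# The program of Fig. 1 (next section), written one instruction per line:
P = ["b <- FILTER (<0) a", "c <- MAP (*4) b", "d <- SORT c", "e <- REVERSE d"]
print(attributes(P))  # 1s at SORT, REVERSE, MAP and FILTER
```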
"}, {"section_index": "3", "section_name": "4 DEEPCODER", "section_text": "Here we describe DeepCoder, our instantiation of LIPS, including a choice of DSL, a data generation strategy, models for encoding input-output sets, and algorithms for searching over program space.

We consider binary attributes indicating the presence or absence of high-level functions in the target program. To make this effective, the chosen DSL needs to contain constructs that are not so low-level that they all appear in the vast majority of programs, but at the same time should be common enough so that predicting their occurrence from input-output examples can be learned successfully.

Following this observation, our DSL is loosely inspired by query languages such as SQL or LINQ, where high-level functions are used in sequence to manipulate data. A program in our DSL is a sequence of function calls, where the result of each call initializes a fresh variable that is either a singleton integer or an integer array. Functions can be applied to any of the inputs or previously computed (intermediate) variables. The output of the program is the return value of the last function call, i.e., the last variable. See Fig. 1 for an example program of length T = 4 in our DSL.

Program (length T = 4):
  a <- [int]
  b <- FILTER (<0) a
  c <- MAP (*4) b
  d <- SORT c
  e <- REVERSE d
An input-output example:
  Input:  [-17, -3, 4, 11, 0, -5, -9, 13, 6, 6, -8, 11]
  Output: [-12, -20, -32, -36, -68]

Figure 1: An example program in our DSL that takes a single integer array as its input.

Overall, our DSL contains the first-order functions HEAD, LAST, TAKE, DROP, ACCESS, MINIMUM, MAXIMUM, REVERSE, SORT, SUM, and the higher-order functions MAP, FILTER, COUNT, ZIPWITH, SCANL1. Higher-order functions require suitable lambda functions for their behavior to be fully specified: for MAP our DSL provides lambdas (+1), (-1), (*2), (/2), (*(-1)), (**2), (*3), (/3), (*4), (/4); for FILTER and COUNT there are predicates (>0), (<0), (%2==0), (%2==1); and for ZIPWITH and SCANL1 the DSL provides lambdas (+), (-), (*), MIN, MAX. A description of the semantics of all functions is provided in Appendix F.

Note that while the language only allows linear control flow, many of its functions do perform branching and looping internally (e.g., SORT, COUNT, ...). Examples of more sophisticated programs expressible in our DSL, which were inspired by the simplest problems appearing on programming competition websites, are shown in Appendix A.

To generate a dataset, we enumerate programs in the DSL, heuristically pruning away those with easily detectable issues such as a redundant variable whose value does not affect the program output or, more generally, existence of a shorter equivalent program (equivalence can be overapproximated by identical behavior on randomly or carefully chosen inputs). To generate valid inputs for a program, we enforce a constraint on the output value bounding integers to some predetermined range, and then propagate these constraints backward through the program to obtain a range of valid values for each input. If one of these ranges is empty, we discard the program. Otherwise, input-output pairs can be generated by picking inputs from the pre-computed valid ranges and executing the program to obtain the output values. The binary attribute vectors are easily computed from the program source codes.
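To make the semantics concrete, here is a minimal Python sketch (ours; the paper's implementation is not shown) of an interpreter for the handful of DSL constructs used in Fig. 1, following the Appendix F definitions, together with a check on the example above:

```python
# Lambdas and functions, as defined in Appendix F.
LAMBDAS = {"(<0)": lambda x: x < 0, "(*4)": lambda x: x * 4}
FUNCS = {
    "FILTER":  lambda f, xs: [x for x in xs if f(x)],
    "MAP":     lambda f, xs: [f(x) for x in xs],
    "SORT":    lambda xs: sorted(xs),
    "REVERSE": lambda xs: list(reversed(xs)),
}

def run(program, inputs):
    """Execute a list of (name, func, lam, arg) instructions; each result
    initializes a fresh variable, and the last variable is the output."""
    env = dict(inputs)
    for name, func, lam, arg in program:
        f = FUNCS[func]
        env[name] = f(LAMBDAS[lam], env[arg]) if lam else f(env[arg])
    return env[name]

prog = [("b", "FILTER", "(<0)", "a"), ("c", "MAP", "(*4)", "b"),
        ("d", "SORT", None, "c"), ("e", "REVERSE", None, "d")]
assert run(prog, {"a": [-17, -3, 4, 11, 0, -5, -9, 13, 6, 6, -8, 11]}) \
    == [-12, -20, -32, -36, -68]
```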
"}, {"section_index": "4", "section_name": "4.3 MACHINE LEARNING MODEL", "section_text": "Observe how the input-output data in Fig. 1 is informative of the functions appearing in the program: the values in the output are all negative, divisible by 4, they are sorted in decreasing order, and they happen to be multiples of numbers appearing in the input. Our aim is to learn to recognize such patterns in the input-output examples, and to leverage them to predict the presence or absence of individual functions. We employ neural networks to model and learn the mapping from input-output examples to attributes. We can think of these networks as consisting of two parts:

1. an encoder: a differentiable mapping from a set of M input-output examples generated by a single program to a latent real-valued vector, and
2. a decoder: a differentiable mapping from the latent vector representing a set of M input-output examples to predictions of the ground truth program's attributes.

For the encoder we use a simple feed-forward architecture. First, we represent the input and output types (singleton or array) by a one-hot-encoding, and we pad the inputs and outputs to a maximum length L with a special NULL value. Second, each integer in the inputs and in the output is mapped to a learned embedding vector of size E = 20. (The range of integers is restricted to a finite range, and each embedding is parametrized individually.) Third, for each input-output example separately, we concatenate the embeddings of the input types, the inputs, the output type, and the output into a single (fixed-length) vector, and pass this vector through H = 3 hidden layers containing K = 256 sigmoid units each. The third hidden layer thus provides an encoding of each individual input-output example. Finally, for input-output examples in a set generated from the same program, we pool these representations together by simple arithmetic averaging. See Appendix C for more details.

The advantage of this encoder lies in its simplicity, and we found it reasonably easy to train. A disadvantage is that it requires an upper bound L on the length of arrays appearing in the input and output. We confirmed that the chosen encoder architecture is sensible in that it performs empirically at least as well as an RNN encoder, a natural baseline, which may however be more difficult to train.

DeepCoder learns to predict the presence or absence of individual functions of the DSL. We shall see that this can already be exploited by various search techniques to yield large computational gains. We use a decoder that pre-multiplies the encoding of input-output examples by a learned C × K matrix, where C = 34 is the number of functions in our DSL (higher-order functions and lambdas are predicted independently), and treats the resulting C numbers as log-unnormalized probabilities (logits) of each function appearing in the source code. Fig. 2 shows the predictions a trained neural network made from 5 input-output examples for the program shown in Fig. 1.

[Figure 2 is a heatmap of the predicted probability of each of the 34 DSL functions and lambdas for the Fig. 1 program; the numeric grid is omitted here.]

Figure 2: Neural network predicts the probability of each function appearing in the source code.
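A minimal NumPy sketch of this forward pass (ours; untrained random parameters, a single array input assumed, and dimensions as in the text) may help make the shapes concrete:

```python
import numpy as np

L, E, K, H, C = 20, 20, 256, 3, 34        # dimensions from Sect. 4.3
VOCAB = 512 + 1                           # integers in [-256, 255] plus NULL
rng = np.random.RandomState(0)
embed = rng.randn(VOCAB, E) * 0.1         # learned integer embeddings
Ws = [rng.randn(2 * 2 + 2 * L * E, K) * 0.01] + \
     [rng.randn(K, K) * 0.01 for _ in range(H - 1)]   # hidden layers
W_dec = rng.randn(C, K) * 0.01            # decoder matrix

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode_example(inp, out):
    """One input-output example -> K-dimensional encoding."""
    def emb(xs):  # pad to length L with NULL (last index), then embed
        idx = [x + 256 for x in xs] + [VOCAB - 1] * (L - len(xs))
        return embed[idx].ravel()
    types = np.array([0.0, 1.0, 0.0, 1.0])   # one-hot "array" types
    h = np.concatenate([types, emb(inp), emb(out)])
    for W in Ws:
        h = sigmoid(h @ W)
    return h

def predict(examples):
    h = np.mean([encode_example(i, o) for i, o in examples], axis=0)  # pool
    return sigmoid(W_dec @ h)              # per-function probabilities

print(predict([([-17, -3, 4], [-12, -68])]).shape)   # (34,)
```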
"}, {"section_index": "5", "section_name": "4.4 SEARCH", "section_text": "One of the central ideas of this work is to use a neural network to guide the search for a program consistent with a set of input-output examples instead of directly predicting the entire source code. This section briefly describes the search techniques and how they integrate the predicted attributes.

Depth-first search (DFS). We use an optimized version of DFS to search over programs with a given maximum length T (see Appendix D for details). When the search procedure extends a partial program by a new function, it has to try the functions in the DSL in some order. At this point DFS can opt to consider the functions as ordered by their predicted probabilities from the neural network.

"Sort and add" enumeration. A stronger way of utilizing the predicted probabilities of functions in an enumerative search procedure is to use a Sort and add scheme, which maintains a set of active functions and performs DFS with the active function set only. Whenever the search fails, the next most probable function (or several) are added to the active set and the search restarts with this larger active set. Note that this scheme has the deficiency of potentially re-exploring some parts of the search space several times, which could be avoided by a more sophisticated search procedure.

Sketch. Sketch (Solar-Lezama, 2008) is a successful SMT-based program synthesis tool from the programming languages research community. While its main use case is to synthesize programs by filling in "holes" in incomplete source code so as to match specified requirements, it is flexible enough for our use case as well. The function in each step and its arguments can be treated as the "holes", and the requirement to be satisfied is consistency with the provided set of input-output examples. Sketch can utilize the neural network predictions in a Sort and add scheme as described above, as the possibilities for each function hole can be restricted to the current active set.

λ2. λ2 (Feser et al., 2015) is a program synthesis tool from the programming languages community that combines enumerative search with deduction to prune the search space. It is designed to infer small functional programs for data structure manipulation from input-output examples, by combining functions from a provided library. λ2 can be used in our framework using a Sort and add scheme as described above, by choosing the library of functions according to the neural network predictions."}, {"section_index": "6", "section_name": "4.5 TRAINING LOSS FUNCTION", "section_text": "We use the negative cross entropy loss to train the neural network described in Sect. 4.3, so that its predictions about each function can be interpreted as marginal probabilities. The LIPS framework dictates learning q(a | E), the joint distribution of all attributes a given the input-output examples, and it is not clear a priori how much DeepCoder loses by ignoring correlations between functions. However, under the simplifying assumption that the runtime of searching for a program of length T with C functions made available to a search routine is proportional to C^T, the following result for Sort and add procedures shows that their runtime can be optimized using marginal probabilities.

Lemma 1. For any fixed program length T, the expected total runtime of a Sort and add search scheme can be upper bounded by a quantity that is minimized by adding the functions in the order of decreasing true marginal probabilities.

Proof. Predicting source code functions from input-output examples can be seen as a multi-label classification problem, where each set of input-output examples is associated with a set of relevant labels (functions appearing in the ground truth source code). Dembczynski et al. (2010) showed that in multi-label classification under a so-called Rank loss, it is Bayes optimal to rank the labels according to their marginal probabilities. If the runtime of search with C functions is proportional to C^T, the total runtime of a Sort and add procedure can be monotonically transformed so that it is upper bounded by this Rank loss. See Appendix E for more details.
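Combining the interpreter sketch above with the predicted probabilities, a minimal Sort and add enumeration (Sect. 4.4) could look like the following. This is our own simplification: candidate programs are plain function sequences of length exactly T, and argument choices are elided.

```python
from itertools import product

def sort_and_add(functions, probs, consistent, T):
    """Sort-and-add enumeration: search programs (here: length-T function
    sequences) over a growing active set, ordered by predicted probability.
    `consistent(seq)` checks a candidate against the I/O examples."""
    ranked = sorted(functions, key=lambda f: -probs[f])
    for k in range(1, len(ranked) + 1):
        active = ranked[:k]               # grow the active set by one
        for seq in product(active, repeat=T):
            if ranked[k - 1] not in seq:  # cheap pruning: skip sequences
                continue                  # already tried with a smaller set
            if consistent(seq):
                return seq
    return None
```

With `probs` taken from the decoder's output and `consistent` wrapping an interpreter like the one sketched in Sect. 4.1, this reproduces the scheme's behavior of restarting with one more active function whenever the search fails.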
"}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "In this section we report results from two categories of experiments. Our main experiments (Sect. 5.1) show that the LIPS framework can lead to significant performance gains in solving IPS, by demonstrating such gains with DeepCoder. In Sect. 5.2 we illustrate the robustness of the method by demonstrating a strong kind of generalization ability across programs of different lengths."}, {"section_index": "8", "section_name": "5.1 DEEPCODER COMPARED TO BASELINES", "section_text": "We trained a neural network as described in Sect. 4.3 to predict used functions from input-output examples and constructed a test set of P = 500 programs, guaranteed to be semantically disjoint from all programs on which the neural network was trained (similarly to the equivalence check described in Sect. 4.2, we have ensured that all test programs behave differently from all programs used during training on at least one input). For each test program we generated M = 5 input-output examples involving integers of magnitudes up to 256, passed the examples to the trained neural network, and fed the obtained predictions to the search procedures from Sect. 4.4. We also considered an RNN-based decoder generating programs using beam search (see Sect. 5.3 for details). To evaluate DeepCoder, we then recorded the time the search procedures needed to find a program consistent with the M input-output examples. As a baseline, we also ran all search procedures using a simple prior as function probabilities, computed from their global incidence in the program corpus.

In the first, smaller-scale experiment (program search space size ~ 2 × 10^6), we trained the neural network on programs of length T = 3, and the test programs were of the same length. Table 1 shows the per-task timeout required such that a solution could be found for given proportions of the test tasks (in time less than or equal to the timeout). For example, in a hypothetical test set with 4 tasks, the entry for 60% would be the smallest per-task timeout under which at least 3 of the 4 tasks are solved.

Table 1: Search speedups on programs of length T = 3 due to using neural network predictions.

Timeout needed |        DFS           |     Enumeration      |          λ2          |    Sketch     | Beam
to solve       | 20%    40%    60%    | 20%    40%    60%    | 20%    40%    60%    | 20%    40%    | 20%
Baseline       | 41ms   126ms  314ms  | 80ms   335ms  861ms  | 18.9s  49.6s  84.2s  | >10³s  >10³s  | >10³s
DeepCoder      | 2.7ms  33ms   110ms  | 1.3ms  6.1ms  27ms   | 0.23s  0.52s  13.5s  | 2.13s  455s   | 292s
Speedup        | 15.2×  3.9×   2.9×   | 62.2×  54.6×  31.5×  | 80.4×  94.6×  6.2×   | >467×  >2.2×  | >3.4×

In the main experiment, we tackled a large-scale problem of searching for programs consistent with input-output examples generated from programs of length T = 5 (search space size on the order of 10^10), supported by a neural network trained with programs of shorter length T = 4. Here, we only consider P = 100 programs for reasons of computational efficiency, after having verified that this does not significantly affect the results in Table 1. The table in Fig. 3a shows significant speedups for DFS, Sort and add enumeration, and λ2 with Sort and add enumeration, the search techniques capable of solving the search problem in reasonable time frames. Note that Sort and add enumeration without the neural network (using prior probabilities of functions) exceeded the 10^4 second timeout in two cases, so the relative speedups shown are crude lower bounds.

(a) Timeout needed |        DFS           |     Enumeration      |   λ2
    to solve       | 20%    40%    60%    | 20%    40%    60%    | 20%
    Baseline       | 163s   2887s  6832s  | 8181s  >10⁴s  >10⁴s  | 463s
    DeepCoder      | 24s    514s   2654s  | 9s     264s   4640s  | 48s
    Speedup        | 6.8×   5.6×   2.6×   | 907×   >37×   >2×    | 9.6×

(b) [Bar plot: relative speedup of Sort and add enumeration (log scale, 10^0 to 10^3) versus the length of test programs T_test ∈ {1, ..., 5}, for networks trained on programs of different lengths; the numeric data is not recoverable from the extraction.]

Figure 3: Search speedups on programs of length T = 5, and the influence of the length of training programs.
We hypothesize that the substantially larger performance gains on Sort and add schemes, as compared to gains on DFS, can be explained by the fact that the choice of attribute function (predicting the presence of functions anywhere in the program) and the learning objective of the neural network are better matched to the Sort and add schemes. Indeed, a more appropriate attribute function for DFS would be one that is more informative of the functions appearing early in the program, since exploring an incorrect first function is costly with DFS. On the other hand, the discussion in Sect. 4.5 provides theoretical indication that ignoring the correlations between functions is not cataclysmic for Sort and add enumeration, since a Rank loss that upper bounds the Sort and add runtime can still be minimized.

In Appendix G we analyse the performance of the neural networks used in these experiments, by investigating which attributes (program instructions) tend to be difficult to distinguish from each other.

To investigate the encoder's generalization ability across programs of different lengths, we trained a network to predict used functions from input-output examples that were generated from programs of length T_train ∈ {1, ..., 4}. We then used each of these networks to predict functions on 5 test sets containing input-output examples generated from programs of lengths T_test ∈ {1, ..., 5}, respectively. The test programs of a given length T were semantically disjoint from all training programs of the same length T and also from all training and test programs of shorter lengths T' < T.

For each of the combinations of T_train and T_test, Sort and add enumerative search was run both with and without using the neural network's predictions (in the latter case using prior probabilities) until it solved 20% of the test set tasks. Fig. 3b shows the relative speedup of the solver having access to predictions from the trained neural networks. These results indicate that the neural networks are able to generalize beyond programs of the same length that they were trained on. This is partly due to the search procedure on top of their predictions, which has the opportunity to correct for the presence of functions that the neural network failed to predict. Note that a sequence-to-sequence model trained on programs of a fixed length could not be expected to exhibit this kind of generalization ability."}, {"section_index": "9", "section_name": "5.3 ALTERNATIVE MODELS", "section_text": "Encoder. We evaluated replacing the feed-forward architecture encoder (Sect. 4.3) with an RNN, a natural baseline. Using a GRU-based RNN we were able to achieve results almost as good as using the feed-forward architecture, but found the RNN encoder more difficult to train.

Decoder. We also considered a purely neural network-based approach, where an RNN decoder is trained to predict the entire program token-by-token. We combined this with our feed-forward encoder by initializing the RNN using the pooled final layer of the encoder.
We found it substantially more difficult to train an RNN decoder as compared to the independent binary classifiers employed above. Beam search was used to explore likely programs predicted by the RNN, but it only led to a solution comparable with the other techniques when searching for programs of lengths T ≤ 2, where the search space size is very small (on the order of 10³). Note that using an RNN for both the encoder and decoder corresponds to a standard sequence-to-sequence model. However, we do not rule out that a more sophisticated RNN decoder or training procedure could be more successful.

Machine Learning for Inductive Program Synthesis. There is relatively little work on using machine learning for programming by example. The most closely related work is that of Menon et al. (2013), in which a hand-coded set of features of input-output examples are used as "clues". When a clue appears in the input-output examples (e.g., the output is a permutation of the input), it reweights the probabilities of productions in a probabilistic context free grammar by a learned amount. This work shares the idea of learning to guide the search over program space conditional on input-output examples. One difference is in the domains. Menon et al. (2013) operate on short string manipulation programs, where it is arguably easier to hand-code features to recognize patterns in the input-output examples (e.g., if the outputs are always permutations or substrings of the input). Our work shows that there are strong cues in patterns in input-output examples in the domain of numbers and lists. However, the main difference is the scale. Menon et al. (2013) learn from a small (280 examples), manually-constructed dataset, which limits the capacity of the machine learning model that can be trained. Thus, it forces the machine learning component to be relatively simple. Indeed, Menon et al. (2013) use a log-linear model and rely on hand-constructed features. LIPS automatically generates training data, which yields datasets with millions of programs and enables high-capacity deep learning models to be brought to bear on the problem.

Learning Representations of Program State. Piech et al. (2015) propose to learn joint embeddings of program states and programs to automatically extend teacher feedback to many similar programs in the MOOC setting. This work is similar in that it considers embedding program states, but the domain is different, and it otherwise specifically focuses on syntactic differences between semantically equivalent programs to provide stylistic feedback. Li et al. (2016) use graph neural networks (GNNs) to predict logical descriptions from program states, focusing on data structure shapes instead of numerical and list data. Such GNNs may be a suitable architecture to encode states appearing when extending our DSL to handle more complex data structures.

Learning to Infer. Very recently, Alemi et al. (2016) used neural sequence models in tandem with an automated theorem prover. Similar to our Sort and add strategy, a neural network component is trained to select premises that the theorem prover can use to prove a theorem. A recent extension (Loos et al., 2017) is similar to our DFS enumeration strategy and uses a neural network to guide the proof search at intermediate steps. The main differences are in the domains, and that they train on an existing corpus of theorems.
More broadly, if we view a DSL as defining a model and search as a form of inference algorithm, then there is a large body of work on using discriminatively-trained models to aid inference in generative models. Examples include Dayan et al. (1995); Kingma & Welling (2014); Shotton et al. (2013); Stuhlmüller et al. (2013); Heess et al. (2013); Jampani et al. (2015)."}, {"section_index": "10", "section_name": "DISCUSSION AND FUTURE WORK", "section_text": "We have presented a framework for improving IPS systems by using neural networks to translate cues in input-output examples to guidance over where to search in program space. Our empirical results show that for many programs, this technique improves the runtime of a wide range of IPS baselines by 1-3 orders of magnitude. We have found several problems in real online programming challenges that can be solved with a program in our language, which validates the relevance of the class of problems that we have studied in this work. In sum, this suggests that we have made significant progress towards being able to solve programming competition problems, and the machine learning component plays an important role in making it tractable.

There remain some limitations, however. First, the programs we can synthesize are only the simplest problems on programming competition websites and are simpler than most competition problems. Many problems require more complex algorithmic solutions like dynamic programming and search, which are currently beyond our reach. Our chosen DSL currently cannot express solutions to many problems. To do so, it would need to be extended by adding more primitives and allowing for more flexibility in program constructs (such as allowing loops). Second, we currently use five input-output examples with relatively large integer values (up to 256 in magnitude), which are probably more informative than typical (smaller) examples. While we remain optimistic about LIPS's applicability as the DSL becomes more complex and the input-output examples become less informative, it remains to be seen what the magnitude of these effects is as we move towards solving large subsets of programming competition problems.

We foresee many extensions of DeepCoder. We are most interested in better data generation procedures by using generative models of source code, and in incorporating natural language problem descriptions to lessen the information burden required from input-output examples. In sum, DeepCoder represents a promising direction forward, and we are optimistic about the future prospects of using machine learning to synthesize programs."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to express their gratitude to Rishabh Singh and Jack Feser for their valuable guidance and help on using the Sketch and λ2 program synthesis systems."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Rudy R. Bunel, Alban Desmaison, Pawan K. Mudigonda, Pushmeet Kohli, and Philip Torr. Adaptive neural compilation. In Proceedings of the 29th Conference on Advances in Neural Information Processing Systems (NIPS), 2016.

Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow. TerpreT: A probabilistic programming language for program induction. CoRR, abs/1608.04428, 2016. URL http://arxiv.org/abs/1608.04428.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou,
et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.

Sumit Gulwani. Programming by examples: Applications, algorithms, and ambiguity resolution. In Proceedings of the 8th International Joint Conference on Automated Reasoning (IJCAR), 2016.

Diederik P. Kingma and Max Welling. Stochastic gradient VB and the variational auto-encoder. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.

Sarah M. Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof search. CoRR, abs/1701.06972, 2017. URL http://arxiv.org/abs/1701.06972.

Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of the 4th International Conference on Learning Representations (ICLR), 2016.

Chris Piech, Jonathan Huang, Andy Nguyen, Mike Phulsuksombati, Mehran Sahami, and Leonidas J. Guibas. Learning program embeddings to propagate feedback on student code. In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.

Oleksandr Polozov and Sumit Gulwani. FlashMeta: a framework for inductive program synthesis. In Proceedings of the International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), 2015.

Sebastian Riedel, Matko Bosnjak, and Tim Rocktäschel. Programming with a differentiable forth interpreter. CoRR, abs/1605.06640, 2016. URL http://arxiv.org/abs/1605.06640.

Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic program optimization. Communications of the ACM, 59(2):114-122, 2016.

Jamie Shotton, Toby Sharp, Alex Kipman, Andrew Fitzgibbon, Mark Finocchio, Andrew Blake, Mat Cook, and Richard Moore. Real-time human pose recognition in parts from single depth images. Communications of the ACM, 56(1):116-124, 2013.

Rishabh Singh and Sumit Gulwani. Predicting a correct program in programming by example. In Proceedings of the 27th Conference on Computer Aided Verification (CAV), 2015.

Armando Solar-Lezama. Program Synthesis By Sketching. PhD thesis, EECS Dept., UC Berkeley, 2008.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In Proceedings of the 28th Conference on Advances in Neural Information Processing Systems (NIPS), 2015.

Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.

This section shows example programs in our Domain Specific Language (DSL), together with input-output examples and short descriptions. These programs have been inspired by simple tasks appearing on real programming competition websites, and are meant to illustrate the expressive power of our DSL.

Program 6:
  t <- [int]
  p <- [int]
  c <- MAP (-1) t
  d <- MAP (-1) p
  e <- ZIPWITH (+) c d
  f <- MINIMUM e
I/O example: Input: [4, 8, 11, 2], [2, 3, 4, 1]; Output: 1.
Description: Umberto has a large collection of ties and matching pocket squares (too large, his wife says) and needs to sell one pair. Given their values as arrays t and p, assuming that he sells the cheapest pair, and that selling costs 2, how much will he lose from the sale?

Program 7:
  s <- [int]
  p <- [int]
  c <- SCANL1 (+) p
  d <- ZIPWITH (*) s c
  e <- SUM d
I/O example: Input: [4, 7, 2, 3], [2, 1, 3, 1]; Output: 62.
Description: Zack always promised his n friends to buy them candy, but never did. Now he won the lottery and counts how often and how much candy he promised to his friends, obtaining arrays p (number of promises) and s (number of promised sweets). He announces that to repay them, he will buy s[1]+s[2]+...+s[n] pieces of candy for the first p[1] days, then s[2]+s[3]+...+s[n] for p[2] days, and so on, until he has fulfilled all promises. How much candy will he buy in total?

Program 8:
  s <- [int]
  b <- REVERSE s
  c <- ZIPWITH (-) b s
  d <- FILTER (>0) c
  e <- SUM d
I/O example: Input: [1, 2, 4, 5, 7]; Output: 9.
Description: Vivian loves rearranging things. Most of all, when she sees a row of heaps, she wants to make sure that each heap has more items than the one to its left. She is also obsessed with efficiency, so she always moves the least possible number of items. Her dad really dislikes it if she changes the size of heaps, so she only moves single items between them, making sure that the set of sizes of the heaps is the same as at the start; they are only in a different order. When you come in, you see heaps of sizes (of course, sizes strictly monotonically increasing) s[0], s[1], ..., s[n]. What is the maximal number of items that Vivian could have moved?
Fig. 4 shows the predictions made by a neural network trained on programs of length T = 4 that were ensured to be semantically disjoint from all 9 example programs shown in this section. For each task, the neural network was provided with 5 input-output examples.

[Figure 4 shows, for each of the nine example programs listed below, the predicted probability of every DSL function; the numeric grid is omitted here.
0: SORT b | TAKE a c | SUM d
1: MAP (*3) a | ZIPWITH (+) b c | MAXIMUM d
2: ZIPWITH (-) b a | COUNT (>0) c
3: SCANL1 MIN a | ZIPWITH (-) a b | FILTER (>0) c | SUM d
4: SORT a | SORT b | REVERSE d | ZIPWITH (*) d e | SUM f
5: REVERSE a | ZIPWITH MIN a b
6: MAP (-1) a | MAP (-1) b | ZIPWITH (+) c d | MINIMUM e
7: SCANL1 (+) b | ZIPWITH (*) a c | SUM d
8: REVERSE a | ZIPWITH (-) b a | FILTER (>0) c | SUM d]

Figure 4: Predictions of a neural network on the 9 example programs described in this section. Numbers in squares would ideally be close to 1 (function is present in the ground truth source code), whereas all other numbers should ideally be close to 0 (function is not needed).

EXPERIMENTAL RESULTS

Results presented in Sect. 5.1 showcased the computational speedups obtained from the LIPS framework (using DeepCoder), as opposed to solving each program synthesis problem with only the information about the global incidence of functions in source code available. For completeness, here we show plots of raw computation times of each search procedure to solve a given number of problems.
Fig. 5 shows the computation times of DFS, of Enumerative search with a Sort and add scheme, of the λ2 and Sketch solvers with a Sort and add scheme, and of Beam search, when searching for a program consistent with input-output examples generated from P = 500 different test programs of length T = 3. As discussed in Sect. 5.1, these test programs were ensured to be semantically disjoint from all programs used to train the neural networks, as well as from all programs of shorter length (as discussed in Sect. 4.2).

[Figure 5 plots, for each solver, the number of test problems solved (0 to 500) against per-problem computation time on a log scale from 10^-4 to 10^3 seconds; the curves compare DFS, Enumeration, λ2 and Sketch (each with neural network guidance and with the prior order) and Beam search.]

Figure 5: Number of test problems solved versus computation time.

The "steps" in the results for Beam search are due to our search strategy, which doubles the size of the considered beam until reaching the timeout (of 1000 seconds), and thus steps occur whenever the search for a beam of size 2^k is finished. For λ2, we observed that no solution for a given set of allowed functions was ever found after about 5 seconds (on the benchmark machines), but that λ2 continued to search. Hence, we introduced a hard timeout after 6 seconds for all but the last iterations of our Sort and add scheme.

Fig. 6 shows the computation times of DFS, Enumerative search with a Sort and add scheme, and λ2 with a Sort and add scheme when searching for programs consistent with input-output examples generated from P = 100 different test programs of length T = 5. The neural network was trained on programs of length T = 4.
[Figure 6 plots the number of test problems solved (0 to 100) against per-problem computation time on a log scale from 10^-4 to 10^4 seconds, comparing DFS, Enumeration and λ2, each with neural network guidance and with the prior order.]

Figure 6: Number of test problems solved versus computation time.

As briefly described in Sect. 4.3, we used the following simple feed-forward architecture encoder.

For each input-output example in the set generated from a single ground truth program:
- Pad arrays appearing in the inputs and in the output to a maximum length L = 20 with a special NULL value.
- Represent the type (singleton integer or integer array) of each input and of the output using a one-hot-encoding vector. Embed each integer in the valid integer range (-256 to 255) using a learned embedding into E = 20 dimensional space. Also learn an embedding for the padding NULL value.
- Concatenate the representations of the input types, the embeddings of integers in the
inputs, the representation of the output type, and the embeddings of integers in the output into a single (fixed-length) vector.
- Pass this vector through H = 3 hidden layers containing K = 256 sigmoid units each.

Pool the last hidden layer encodings of each input-output example together by simple arithmetic averaging.

Fig. 7 shows a schematic drawing of this encoder architecture, together with the decoder that performs independent binary classification for each function in the DSL, indicating whether or not it appears in the ground truth source code.

[Figure 7 is a schematic of the network: program state inputs and outputs (for examples 1 through 5) pass through state embeddings and hidden layers 1 to 3, are pooled, and feed sigmoid outputs producing the attribute predictions.]

Figure 7: Schematic representation of our feed-forward encoder, and the decoder.

While DeepCoder learns to embed integers into a E = 20 dimensional space, we built the system up gradually, starting with a E = 2 dimensional space and only training on programs of length T = 1. Such a small scale setting allowed easier investigation of the workings of the neural network, and indeed Fig. 8 below shows a learned embedding of integers in R². The figure demonstrates that the network has learnt the concepts of number magnitude, sign (positive or negative) and evenness, presumably due to FILTER (>0), FILTER (<0), FILTER (%2==0) and FILTER (%2==1) all being among the programs on which the network was trained.

[Figure 8 is a scatter plot of the learned 2-dimensional embeddings, with separate markers for even positive numbers, even negative numbers, odd positive numbers, odd negative numbers, zero, and the NULL padding value.]

Figure 8: A learned embedding of integers {-256, -255, ..., -1, 0, 1, ..., 255} in R². The color intensity corresponds to the magnitude of the embedded integer.

D DEPTH-FIRST SEARCH

We use an optimized C++ implementation of depth-first search (DFS) to search over programs with a given maximum length T. In depth-first search, we start by choosing the first function (and its arguments) of a potential solution program, and then recursively consider all ways of filling in the rest of the program (up to length T), before moving on to a next choice of first instruction (if a solution has not yet been found).

A program is considered a solution if it is consistent with all M = 5 provided input-output examples. Note that this requires evaluating all candidate programs on the M inputs and checking the results for equality with the provided M respective outputs. Our implementation of DFS exploits the sequential structure of programs in our DSL by caching the results of evaluating all prefixes of the currently considered program on the example inputs, thus allowing efficient reuse of computation between candidate programs with common prefixes. This allows us to explore the search space at roughly the speed of ~ 3 × 10^6 programs per second.

When the search procedure extends a partial program by a new function, it has to try the functions in the DSL in some order. At this point DFS can opt to consider the functions as ordered by their predicted probabilities from the neural network. The probability of a function consisting of a higher-order function and a lambda is taken to be the minimum of the probabilities of the two constituent functions.
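The prefix-caching idea above can be sketched as follows (ours; a simplified Python version of what the paper implements in C++, where each step consumes only the previous value rather than any earlier variable):

```python
def dfs(functions, M_inputs, M_outputs, T):
    """Depth-first search over function sequences of length <= T.
    The recursion stack itself acts as the prefix cache: `values` holds the
    current prefix evaluated on all M example inputs, so sibling extensions
    reuse it instead of re-running the whole candidate from scratch."""
    def recurse(prefix, values, depth):
        if values == M_outputs:                # consistent on all examples
            return prefix
        if depth == T:
            return None
        for f in functions:                    # ordered by predicted prob.
            extended = [f(v) for v in values]  # evaluate only the new step
            found = recurse(prefix + [f], extended, depth + 1)
            if found is not None:
                return found
        return None
    return recurse([], M_inputs, 0)

# Toy usage: find a composition mapping each input list to its output.
funcs = [sorted, lambda xs: list(reversed(xs)), lambda xs: [x * 2 for x in xs]]
sol = dfs(funcs, [[3, 1, 2], [5, 4]], [[3, 2, 1], [5, 4]], T=2)
# sol == [sorted, reversed-lambda]: sort then reverse matches both examples.
```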
E TRAINING LOSS FUNCTION

In Sect. 4.5 we outlined a justification for using marginal probabilities of individual functions as a sensible intermediate representation to provide to a solver employing a Sort and add scheme (we considered Enumerative search and the Sketch solver with this scheme). Here we provide a more detailed discussion.

Predicting program components from input-output examples can be cast as a multilabel classification problem, where each instance (set of input-output examples) is associated with a set of relevant labels (functions appearing in the code that generated the examples). We denote the number of labels (functions) by C, and note that throughout this work C = 34.

When the task is to predict a subset of labels y ∈ {0, 1}^C, different loss functions can be employed to measure the prediction error of a classifier h(x) or ranking function f(x). Dembczynski et al. (2010) discuss the following three loss functions:

- Hamming loss counts the number of labels that are predicted incorrectly by a classifier h:
  L_H(y, h(x)) = Σ_{c=1}^{C} 1[y_c ≠ h_c(x)]

- Rank loss counts the number of label pairs ranked incorrectly by a ranking function f, i.e., pairs where an irrelevant label is not ranked strictly below a relevant one:
  L_r(y, f(x)) = Σ_{(i,j): y_i = 1, y_j = 0} ( 1[f_i(x) < f_j(x)] + (1/2) · 1[f_i(x) = f_j(x)] )

- Subset 0/1 loss indicates whether all labels are predicted correctly by a classifier h:
  L_s(y, h(x)) = 1[y ≠ h(x)]

Dembczynski et al. (2010) proved that Bayes optimal decisions under the Hamming and Rank loss functions, i.e., decisions minimizing the expected loss under these loss functions, can be computed from marginal probabilities p_c(y_c | x). This suggests that multilabel classification under these two loss functions may not benefit from considering dependencies between the labels: "Instead of minimizing the Rank loss directly, one can simply use any approach for single label prediction that properly estimates the marginal probabilities." (Dembczynski et al., 2012)

Training the neural network with the negative cross entropy loss function as the training objective is precisely a method for properly estimating the marginal probabilities of labels (functions appearing in source code). It is thus a sensible step in preparation for making predictions under a Rank loss.

It remains to discuss the relationship between the Rank loss and the actual quantity we care about, which is the total runtime of a Sort and add search procedure. Recall the simplifying assumption that the runtime of searching for a program of length T with C functions made available to the search is proportional to C^T, and consider a Sort and add search for a program of length T, where the size of the active set is increased by 1 whenever the search fails. Starting with an active set of size 1, the total time until a solution is found can be upper bounded by

1^T + 2^T + ... + C_A^T,

where C_A is the size of the active set when the search finally succeeds (i.e., when the active set finally contains all necessary functions for a solution to exist). Hence the total runtime of a Sort and add search can be upper bounded by a quantity that is proportional to C_A^T.

Now fix a valid program solution P that requires C_P functions, and let y_P ∈ {0, 1}^C be the indicator vector of functions used by P. Let D := C_A − C_P be the number of redundant operations added into the active set until all operations from P have been added.

Example 1. Suppose that, when the labels are sorted by decreasing predicted marginal probabilities f(x), the C_P = 6 labels used by the solution P appear in positions 1, 2, 3, 4, 7 and 11. Then the active set needs to grow to size C_A = 11 to include all of them, adding D = 5 redundant functions along the way. Note that the Rank loss of the predictions f(x) is L_r(y_P, f(x)) = 2 + 5 = 7, as it double counts the two redundant functions (in positions 5 and 6) which are scored higher than two relevant labels.

Noting that in general L_r(y_P, f(x)) ≥ D, the previous upper bound on the runtime of Sort and add can be further upper bounded as follows:

C_A^T = (C_P + D)^T ≤ const + const · D^T ≤ const + const · L_r(y_P, f(x))^T.

Hence we see that for a constant value of T, this upper bound can be minimized by optimizing the Rank loss of the predictions f(x). Note also that L_r(y_P, f(x)) = 0 would imply D = 0, in which case C_A = C_P.
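A quick numeric check of Example 1 (ours; the label ordering is the one assumed above):

```python
def rank_loss(y):
    """Rank loss of a strictly-ordered ranking: pairs (relevant, irrelevant)
    where the irrelevant label is ranked higher. `y` lists labels in
    decreasing predicted probability (1 = used by P, 0 = redundant)."""
    return sum(1 for i, yi in enumerate(y) for j, yj in enumerate(y)
               if yi == 0 and yj == 1 and i < j)

# Relevant labels (C_P = 6 of them) in positions 1, 2, 3, 4, 7 and 11:
y = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1]
C_A = max(i for i, yi in enumerate(y) if yi == 1) + 1   # = 11
D = C_A - sum(y)                                        # = 5
assert (rank_loss(y), C_A, D) == (7, 11, 5)
```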
Here we provide a description of the semantics of our DSL from Sect. 4.1, both in English and as a Python implementation. Throughout, NULL is a special value that can be set e.g. to an integer outside the working integer range.

First-order functions:

HEAD :: [int] -> int
lambda xs: xs[0] if len(xs) > 0 else NULL
Given an array, returns its first element (or NULL if the array is empty).

LAST :: [int] -> int
lambda xs: xs[-1] if len(xs) > 0 else NULL
Given an array, returns its last element (or NULL if the array is empty).

TAKE :: int -> [int] -> [int]
lambda n, xs: xs[:n]
Given an integer n and array xs, returns the array truncated after the n-th element. (If the length of xs was no larger than n in the first place, it is returned without modification.)

DROP :: int -> [int] -> [int]
lambda n, xs: xs[n:]
Given an integer n and array xs, returns the array with the first n elements dropped. (If the length of xs was no larger than n in the first place, an empty array is returned.)

ACCESS :: int -> [int] -> int
lambda n, xs: xs[n] if n >= 0 and len(xs) > n else NULL
Given an integer n and array xs, returns the (n+1)-st element of xs. (If the length of xs was less than or equal to n, the value NULL is returned instead.)

MINIMUM :: [int] -> int
lambda xs: min(xs) if len(xs) > 0 else NULL
Given an array, returns its minimum (or NULL if the array is empty).

MAXIMUM :: [int] -> int
lambda xs: max(xs) if len(xs) > 0 else NULL
Given an array, returns its maximum (or NULL if the array is empty).

REVERSE :: [int] -> [int]
lambda xs: list(reversed(xs))
Given an array, returns its elements in reversed order.

SORT :: [int] -> [int]
lambda xs: sorted(xs)
Given an array, returns its elements in non-decreasing order.

SUM :: [int] -> int
lambda xs: sum(xs)
Given an array, returns the sum of its elements. (The sum of an empty array is 0.)

Higher-order functions:

MAP :: (int -> int) -> [int] -> [int]
lambda f, xs: [f(x) for x in xs]
Given a lambda function f mapping from integers to integers, and an array xs, returns the array resulting from applying f to each element of xs.

FILTER :: (int -> bool) -> [int] -> [int]
lambda f, xs: [x for x in xs if f(x)]
Given a predicate f mapping from integers to truth values, and an array xs, returns the elements of xs satisfying the predicate in their original order.

COUNT :: (int -> bool) -> [int] -> int
lambda f, xs: len([x for x in xs if f(x)])
Given a predicate f mapping from integers to truth values, and an array xs, returns the number of elements in xs satisfying the predicate.

ZIPWITH :: (int -> int -> int) -> [int] -> [int] -> [int]
lambda f, xs, ys: [f(x, y) for (x, y) in zip(xs, ys)]
Given a lambda function f mapping integer pairs to integers, and two arrays xs and ys, returns the array resulting from applying f to corresponding elements of xs and ys. The length of the returned array is the minimum of the lengths of xs and ys.

SCANL1 :: (int -> int -> int) -> [int] -> [int]
Given a lambda function f mapping integer pairs to integers, and an array xs, returns an array ys of the same length as xs and with its content defined by the recurrence ys[0] = xs[0], ys[n] = f(ys[n-1], xs[n]) for n ≥ 1.

The INT->INT lambdas (+1), (-1), (*2), (/2), (*(-1)), (**2), (*3), (/3), (*4), (/4) provided by our DSL map integers to integers in a self-explanatory manner. The INT->BOOL lambdas (>0), (<0), (%2==0), (%2==1) respectively test positivity, negativity, evenness and oddness of the input integer value. Finally, the INT->INT->INT lambdas (+), (-), (*), MIN, MAX apply a function to a pair of integers and produce a single integer.

As an example, consider the function SCANL1 MAX, consisting of the higher-order function SCANL1 and the INT->INT->INT lambda MAX. Given an integer array a of length L, this function computes the running maximum of the array a. Specifically, it returns an array b of the same length L whose i-th element is the maximum of the first i elements in a.
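Since SCANL1 is the one function above whose Python one-liner is not shown, here is a direct implementation of the recurrence (ours), checked on the SCANL1 MAX running-maximum example:

```python
def scanl1(f, xs):
    """ys[0] = xs[0]; ys[n] = f(ys[n-1], xs[n]) for n >= 1."""
    ys = []
    for x in xs:
        ys.append(x if not ys else f(ys[-1], x))
    return ys

assert scanl1(max, [2, -1, 5, 3, 7]) == [2, 2, 5, 5, 7]          # running max
assert scanl1(lambda a, b: a + b, [2, 1, 3, 1]) == [2, 3, 6, 7]  # SCANL1 (+)
```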
[The plot for Figure 9 appeared here in the original; it did not survive extraction.]

Figure 9: Conditional confusion matrix for the neural network and test set of P = 500 programs of length T = 3 that were used to obtain the results presented in Table 1. Each cell contains the average false positive probability (in larger font) and the number of test programs from which this average was computed (smaller font, in brackets). The color intensity of each cell's shading corresponds to the magnitude of the average false positive probability.
We analyzed the performance of trained neural networks by investigating which program instructions tend to get confused by the networks. To this end, we looked at a generalization of confusion matrices to the multilabel classification setting: for each attribute in a ground truth program (rows) we measure how likely each other attribute (columns) is predicted as a false positive. More formally, in this matrix the (i, j)-entry is the average predicted probability of attribute j among test programs that do possess attribute i and do not possess attribute j. Intuitively, the i-th row of this matrix shows how the presence of attribute i confuses the network into incorrectly predicting each other attribute j.
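As a concrete reading of this definition, the short NumPy sketch below computes such a conditional confusion matrix from a binary ground-truth attribute matrix and predicted attribute probabilities; the array names are our own and the snippet is only an illustration of the formula above, not the paper's analysis code.

    import numpy as np

    def conditional_confusion(y_true, y_prob):
        """y_true: (P, C) binary attribute indicators; y_prob: (P, C) predicted probabilities.
        Returns a (C, C) matrix M where M[i, j] is the average predicted probability of
        attribute j over test programs that possess attribute i but not attribute j."""
        P, C = y_true.shape
        M = np.full((C, C), np.nan)   # NaN marks (i, j) pairs with no qualifying programs
        for i in range(C):
            for j in range(C):
                mask = (y_true[:, i] == 1) & (y_true[:, j] == 0)
                if mask.any():
                    M[i, j] = y_prob[mask, j].mean()
        return M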
[The plot for Figure 10 appeared here in the original; it did not survive extraction.]

Figure 10: Conditional confusion matrix for the neural network and test set of P = 500 programs of length T = 5. The presentation is the same as in Figure 9.

Figure 9 shows this conditional confusion matrix for the neural network and P = 500 program test set configuration used to obtain Table 1. We re-ordered the confusion matrix to try to expose block structure in the false positive probabilities, revealing groups of instructions that tend to be difficult to distinguish. Figure 10 shows the conditional confusion matrix for the neural network used to obtain the table in Fig. 3a. While the results are somewhat noisy, we observe a few general tendencies:
- There is increased confusion amongst instructions that select out a single element from an array: HEAD, LAST, ACCESS, MINIMUM, MAXIMUM.
- Some common attributes get predicted more often regardless of the ground truth program: FILTER, (>0), (<0), (%2==1), (%2==0), MIN, MAX, (+), (-), ZIPWITH.
- There are some groups of lambdas that are more difficult for the network to distinguish within: (+) vs (-); (+1) vs (-1); (/2) vs (/3) vs (/4).
- When a program uses (**2), the network often thinks it is using (*), presumably because both can lead to large values in the output."}]
SyQq185lg | [{"section_index": "0", "section_name": "LATENT SEQUENCE DECOMPOSITIONS", "section_text": "William Chan
Carnegie Mellon University
williamchan@cmu.edu

Yu Zhang
Massachusetts Institute of Technology
yzhang87@mit.edu

Quoc V. Le & Navdeep Jaitly
Google Brain
{qvl,ndjaitly}@google.com

*Work done at Google Brain."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Sequence-to-sequence models rely on a fixed decomposition of the target sequences into a sequence of tokens that may be words, word-pieces or characters. The choice of these tokens and the decomposition of the target sequences into a sequence of tokens is often static, and independent of the input, output data domains. This can potentially lead to a sub-optimal choice of token dictionaries, as the decomposition is not informed by the particular problem being solved. In this paper we present Latent Sequence Decompositions (LSD), a framework in which the decomposition of sequences into constituent tokens is learnt during the training of the model. The decomposition depends both on the input sequence and on the output sequence. In LSD, during training, the model samples decompositions incrementally, from left to right by locally sampling between valid extensions. We experiment with the Wall Street Journal speech recognition task. Our LSD model achieves 12.9% WER compared to a character baseline of 14.8% WER. When combined with a convolutional network on the encoder, we achieve a WER of 9.6%."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Previous work has assumed a fixed deterministic decomposition for each output sequence. The output representation is usually a fixed sequence of words (Sutskever et al., 2014; Cho et al., 2014), phonemes (Chorowski et al., 2015), characters (Chan et al., 2016; Bahdanau et al., 2016a) or even a mixture of characters and words (Luong & Manning, 2016). However, in all these cases, the models are trained towards one fixed decomposition for each output sequence.

We argue against using fixed deterministic decompositions of a sequence that has been defined a priori. Word segmented models (Luong et al., 2015; Jean et al., 2015) often have to deal with large softmax sizes, rare words and Out-of-Vocabulary (OOV) words. Character models (Chan et al., 2016; Bahdanau et al., 2016a) overcome the OOV problem by modelling the smallest output unit, however this typically results in long decoder lengths and computationally expensive inference. And even with mixed (but fixed) character-word models (Luong & Manning, 2016), it is unclear whether such a predefined segmentation is optimal. In all these examples, the output decomposition is only a function of the output sequence. This may be acceptable for problems such as translations, but inappropriate for tasks such as speech recognition, where segmentation should also be informed by the characteristics of the inputs, such as audio.

We want our model to have the capacity and flexibility to learn a distribution of sequence decompositions. Additionally, the decomposition should be a sequence of variable length tokens as deemed most probable. For example, language may be more naturally represented as word pieces (Schuster & Nakajima, 2012) rather than individual characters. In many speech and language tasks, it is probably more efficient to model "qu" as one output unit rather than "q" + "u" as separate output units (since in English, "q" is almost always followed by "u"). Word piece models also naturally solve
rare word and OOV problems similar to character models.

We present the Latent Sequence Decompositions (LSD) framework. LSD does not assume a fixed decomposition for an output sequence, but rather learns to decompose sequences as a function of both the input and the output sequence. Each output sequence can be decomposed to a set of latent sequence decompositions using a dictionary of variable length output tokens. The LSD framework produces a distribution over the latent sequence decompositions and marginalizes over them during training. During test inference, we find the best decomposition and output sequence, by using beam search to find the most likely output sequence from the model."}, {"section_index": "3", "section_name": "2 LATENT SEQUENCE DECOMPOSITIONS", "section_text": "In this section, we describe LSD more formally. Let x be our input sequence, y be our output sequence and z be a latent sequence decomposition of y. The latent sequence decomposition z consists of a sequence of z_i ∈ Z where Z is the constructed token space. Each token z_i need not be the same length, but rather in our framework, we expect the tokens to have different lengths. Specifically, Z ⊆ ∪_{i=1}^{n} C^i where C is the set of singleton tokens and n is the length of the largest output token. In ASR, C would typically be the set of English characters, while Z would be word pieces (i.e., n-grams of characters).

To give a concrete example, consider a set of tokens {"a", "b", "c", "at", "ca", "cat"}. With this set of tokens, the word "cat" may be represented as the sequence "c", "a", "t", or the sequence "ca", "t", or alternatively as the single token "cat". Since the appropriate decomposition of the word "cat" is not known a priori, the decomposition itself is latent.

Note that the length |z_a| of a decomposition z_a need not be the same as the length of the output sequence, y (for example "ca", "t" has a length of 2, whereas the sequence is 3 characters long). Similarly, a different decomposition z_b (for example the 3-gram token "cat") of the same sequence may be of a different length (in this case 1).

Each decomposition z collapses to the target output sequence using a trivial collapsing function y = collapse(z).
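To make the collapse relation concrete, here is a small, self-contained Python sketch (our own illustration) that enumerates every decomposition of a target sequence over a given token dictionary. We add the singleton "t" to the example token set from the text so that all three decompositions of "cat" mentioned above are expressible.

    TOKENS = {"a", "b", "c", "t", "at", "ca", "cat"}  # example token set, plus "t" (our assumption)

    def collapse(z):
        # The trivial collapsing function: concatenate the tokens of a decomposition.
        return "".join(z)

    def decompositions(y, tokens=TOKENS):
        # Enumerate all z with collapse(z) == y by extending valid prefixes left to right.
        if y == "":
            return [[]]
        out = []
        for t in tokens:
            if y.startswith(t):
                out.extend([t] + rest for rest in decompositions(y[len(t):], tokens))
        return out

    print(decompositions("cat"))
    # Up to enumeration order: [['c','a','t'], ['c','at'], ['ca','t'], ['cat']]

Even this tiny dictionary admits four valid decompositions of a three-character word, which previews why the set of decompositions is combinatorially large in general.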
Clearly, the set of decompositions, {z : collapse(z) = y}, of a sequence, y, using a non-trivial token set, Z, can be combinatorially large.

If there was a known, unique, correct segmentation z* for a given pair, (x, y), one could simply train the model to output the fixed deterministic decomposition z*. However, in most problems, we do not know the best possible decomposition z*; indeed it may be possible that the output can be correctly decomposed into multiple alternative but valid segmentations. For example, in end-to-end ASR we typically use characters as the output unit of choice (Chan et al., 2016; Bahdanau et al., 2016a) but word pieces may be better units as they more closely align with the acoustic entities such as syllables. However, the most appropriate decomposition z* for a given (x, y) pair is often unknown. Given a particular y, the best z* could even change depending on the input sequence x (i.e., speaking style).

The output sequence decomposition should be a function of both the input sequence and the output sequence (rather than output sequence alone). For example, in speech, the choice of emitting "ing" as one word piece or as separate tokens of "i" + "n" + "g" should be a function of the current output word as well as the audio signal (i.e., speaking style).

In LSD, we want to learn a probabilistic segmentation mapping from x → z → y. The model produces a distribution of decompositions, z, given an input sequence x, and the objective is to maximize the log-likelihood of the ground truth sequence y. We can accomplish this by factorizing and marginalizing over all possible latent sequence decompositions z under our model p(z|x; θ) with parameters θ:

log p(y|x; θ) = log Σ_z p(y, z|x; θ)  (1)
             = log Σ_z p(y|z, x) p(z|x; θ)  (2)
             = log Σ_z p(y|z) p(z|x; θ)  (3)

where p(y|z) = 1(collapse(z) = y) captures path decompositions z that collapse to y. Due to the exponential number of decompositions of y, exact inference and search is intractable for any non-trivial token set Z and sequence length |y|. We describe a beam search algorithm to do approximate inference decoding in Section 4.

Similarly, computing the exact gradient is intractable. However, we can derive a gradient estimator by differentiating w.r.t. θ and taking its expectation:

∇_θ log p(y|x; θ) = (1 / p(y|x; θ)) ∇_θ Σ_z p(y|x, z) p(z|x; θ)  (4)
                 = (1 / p(y|x; θ)) Σ_z p(y|x, z) ∇_θ p(z|x; θ)  (5)
                 = (1 / p(y|x; θ)) Σ_z p(y|x, z) p(z|x; θ) ∇_θ log p(z|x; θ)  (6)
                 = E_{z ~ p(z|x, y; θ)} [∇_θ log p(z|x; θ)]  (7)

Equation 6 uses the identity ∇_θ f_θ(x) = f_θ(x) ∇_θ log f_θ(x), assuming f_θ(x) > 0 for all x. Equation 7 gives us an unbiased estimator of our gradient. It tells us to sample some latent sequence decomposition z ~ p(z|y, x; θ) under our model's posterior, where z is constrained to be a valid sequence that collapses to y, i.e. z ∈ {z' : collapse(z') = y}. To train the model, we sample z ~ p(z|y, x; θ) and compute the gradient of ∇_θ log p(z|x; θ) using backpropagation. However, sampling z ~ p(z|y, x; θ) is difficult. Doing this exactly is computationally expensive, because it would require sampling correctly from the posterior; it would be possible to do this using a particle filtering like algorithm, but it would require a full forward pass through the output sequence to do this.

Instead, in our implementation we use a heuristic to sample z ~ p(z|y, x; θ). At each output time step t, when producing tokens z_1, z_2, ..., z_{t-1}, we sample from z_t ~ p(z_t|x, y, z_{<t}; θ) in a left-to-right fashion. In other words, we sample valid extensions at each time step t. At the start of the training, this left-to-right sampling procedure is not a good approximation to the posterior, since the next step probabilities at a time step include probabilities of all future paths from that point.

For example, consider the case when the target word is "cat", and the vocabulary includes all possible characters and the tokens "ca" and "cat". At time step 1, when the valid next step options are "c", "ca", "cat", their relative probabilities reflect all possible sequences "c*", "ca*", "cat*" respectively that start from the first time step of the model. These sets of sequences include sequences other than the target sequence "cat". Thus sampling from the distribution at step 1 is a biased procedure.
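A minimal sketch of this left-to-right sampling heuristic is given below. It is our own illustration (the helper names and the probability callable are assumptions, not the paper's code), showing only how valid extensions of the remaining target are enumerated and sampled at each step, with an optional uniform-mixture parameter anticipating the ε-greedy strategy described next.

    import random

    def valid_extensions(remaining, vocab):
        # Tokens consistent with (i.e., a prefix of) the remaining target sequence.
        return [t for t in vocab if remaining.startswith(t)]

    def sample_decomposition(y, vocab, next_token_probs, epsilon=0.0):
        """Sample z with collapse(z) == y, left to right.
        next_token_probs(y, z) is an assumed callable returning a dict that maps each
        candidate token to the model's probability p(z_t | x, y, z_<t)."""
        z, remaining = [], y
        while remaining:
            cands = valid_extensions(remaining, vocab)
            if random.random() < epsilon:            # uniform over valid extensions
                t = random.choice(cands)
            else:                                    # renormalized model probabilities
                p = next_token_probs(y, z)
                weights = [p.get(c, 1e-12) for c in cands]
                t = random.choices(cands, weights=weights, k=1)[0]
            z.append(t)
            remaining = remaining[len(t):]
        return z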
However, as training proceeds the model places more and more mass only on the correct hypotheses, and the relative probabilities that the model produces between valid extensions get closer to the posterior. In practice, we find that when the model is trained with this method, it quickly collapses to using single character targets, and never escapes from this local minimum.¹ Thus, we follow an ε-greedy exploration strategy commonly found in the reinforcement learning literature (Sutton & Barto, 1998): we sample z_t from a mixture of a uniform distribution over valid next tokens and p(z_t|x, y, z_{<t}; θ). The relative probability of using a uniform distribution vs. p(z_t|x, y, z_{<t}; θ) is varied over training. With this modification the model learns to use the longer n-grams of characters appropriately, as shown in later sections.

¹One notable exception was the word piece "qu" ("q" is almost always followed by "u" in English). The model does learn to consistently emit "qu" as one token and never produce "q" + "u" as separate tokens."}, {"section_index": "4", "section_name": "3 MODEL", "section_text": "In this work, we model the latent sequence decompositions p(z|x) with an attention-based seq2seq model (Bahdanau et al., 2015). Each output token z_i is modelled as a conditional distribution over all previously emitted tokens z_{<i} and the input sequence x using the chain rule:

p(z|x; θ) = Π_i p(z_i|x, z_{<i})

The input sequence x is processed through an EncodeRNN network. The EncodeRNN function transforms the features x into some higher level representation h. In our experimental implementation EncodeRNN is a stacked Bidirectional LSTM (BLSTM) (Schuster & Paliwal, 1997; Graves et al., 2013) with hierarchical subsampling (Hihi & Bengio, 1996; Koutnik et al., 2014):

h = EncodeRNN(x)

The output sequence z is generated with an attention-based transducer (Bahdanau et al., 2015) one z_i token at a time:

s_i = DecodeRNN([z_{i-1}, c_{i-1}], s_{i-1})
c_i = AttentionContext(s_i, h)
p(z_i|x, z_{<i}) = TokenDistribution(s_i, c_i)

The DecodeRNN produces a transducer state s_i as a function of the previously emitted token z_{i-1}, previous attention context c_{i-1} and previous transducer state s_{i-1}. In our implementation, DecodeRNN is a LSTM (Hochreiter & Schmidhuber, 1997) function without peephole connections.

The AttentionContext function generates c_i with a content-based MLP attention network (Bahdanau et al., 2015). Energies e_i are computed as a function of the encoder features h and current transducer state s_i. The energies are normalized into an attention distribution α_i. The attention context c_i is created as an α_i weighted linear sum over h:

e_{i,j} = ⟨v, tanh(φ(s_i, h_j))⟩
α_{i,j} = exp(e_{i,j}) / Σ_{j'} exp(e_{i,j'})
c_i = Σ_j α_{i,j} h_j

where φ is a linear transform function. TokenDistribution is a MLP function with softmax outputs modelling the distribution p(z_i|x, z_{<i})."}, {"section_index": "5", "section_name": "4 DECODING", "section_text": "During inference we want to find the most likely word sequence given the input acoustics:

ŷ = argmax_y log Σ_z p(y|z) p(z|x)

however this is obviously intractable for any non-trivial token space and sequence lengths. We simply approximate this by decoding for the best word piece sequence ẑ and then collapsing it to its corresponding word sequence ŷ:

ẑ = argmax_z log p(z|x)
ŷ = collapse(ẑ)
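As an illustration of this approximate decoding step, the sketch below implements a generic n-best beam search over word-piece tokens followed by the collapse step. The scoring callable, vocabulary and end-of-sequence symbol are placeholders we introduce for the example, not the paper's implementation.

    import math

    def beam_search_decode(step_logprobs, vocab, eos, beam_size=8, max_len=200):
        """step_logprobs(z) -> dict token -> log p(z_t | x, z_<t); an assumed model callable.
        Returns collapse(z_hat) for the highest-scoring token sequence z_hat found."""
        beams = [([], 0.0)]                      # (token sequence, cumulative log prob)
        finished = []
        for _ in range(max_len):
            candidates = []
            for z, score in beams:
                logps = step_logprobs(z)
                for tok in vocab:
                    candidates.append((z + [tok], score + logps.get(tok, -math.inf)))
            candidates.sort(key=lambda c: c[1], reverse=True)
            beams = []
            for z, score in candidates[:beam_size]:
                (finished if z[-1] == eos else beams).append((z, score))
            if not beams:
                break
        z_hat, _ = max(finished + beams, key=lambda c: c[1])
        return "".join(t for t in z_hat if t != eos)   # y_hat = collapse(z_hat)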
Table 1: Wall Street Journal test eval92 Word Error Rate (WER) varying the n sized word piece vocabulary without any dictionary or language model. We compare Latent Sequence Decompositions (LSD) versus the Maximum Extension (MaxExt) decomposition. The LSD models all learn better decompositions compared to the baseline character model, while the MaxExt decomposition appears to be sub-optimal."}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "We experimented with the Wall Street Journal (WSJ) ASR task. We used the standard configuration of train si284 dataset for training, dev93 for validation and eval92 for test evaluation. Our input features were 80 dimensional filterbanks computed every 10ms with delta and delta-delta acceleration, normalized with per speaker mean and variance as generated by Kaldi (Povey et al., 2011). The EncodeRNN function is a 3 layer BLSTM with 256 LSTM units per-direction (or 512 total) and 4 = 2^2 time factor reduction. The DecodeRNN is a 1 layer LSTM with 256 LSTM units. All the weight matrices were initialized with a uniform distribution U(−0.075, 0.075) and bias vectors to 0. Gradient norm clipping of 1 was used, gaussian weight noise N(0, 0.075) and L2 weight decay 1e−5 (Graves, 2011). We used ADAM with the default hyperparameters described in (Kingma & Ba, 2015), however we decayed the learning rate from 1e−3 to 1e−4. We used 8 GPU workers for asynchronous SGD under the TensorFlow framework (Abadi et al., 2015). We monitor the dev93 Word Error Rate (WER) until convergence and report the corresponding eval92 WER. The models took around 5 days to converge.

We created our token vocabulary Z by looking at the n-gram character counts of the training dataset. We explored n ∈ {2, 3, 4, 5} and took the top {256, 512, 1024} tokens based on their count frequencies (since taking the full n-cartesian exponent of the unigrams would result in an intractable number of tokens for n > 2). We found very minor differences in WER based on the vocabulary size; for our n = {2, 3} word piece experiments we used a vocabulary size of 256, while our n = {4, 5} word piece experiments used a vocabulary size of 512. Additionally, we restrict <space> to be a unigram token and not included in any other word pieces; this forces the decompositions to break on word boundaries.

Table 1 compares the effect of varying the n sized word piece vocabulary. The Latent Sequence Decompositions (LSD) models were trained with the framework described in Section 2 and the Maximum Extension (MaxExt) decomposition is a fixed decomposition. MaxExt is generated in a left-to-right fashion, where at each step the longest word piece extension is selected from the vocabulary. The MaxExt decomposition is not the shortest |z| possible sequence, however it is a deterministic decomposition that can be easily generated in linear time on-the-fly. We decoded these models with simple n-best list beam search without any external dictionary or Language Model (LM).

The baseline model is simply the unigram or character model and achieves 14.76% WER. We find the LSD n = 4 word piece vocabulary model to perform the best at 12.88% WER, yielding a 12.7% relative improvement over the baseline character model. None of our MaxExt models beat our character model baseline, suggesting the maximum extension decomposition to be a poor decomposition choice. However, all our LSD models perform better than the baseline, suggesting the LSD framework is able to learn a decomposition better than the baseline character decomposition.

n           LSD WER   MaxExt WER
Baseline    14.76
2           13.15     15.56
3           13.08     15.61
4           12.88     14.96
5           13.52     15.03
We also look at the distribution of the characters covered based on the word piece lengths during inference across different n sized word piece vocabularies used in training. We define the distribution of the characters covered as the percentage of characters covered by the set of word pieces with the same length across the test set, and we exclude <space> in this statistic. Figure 1 plots the distribution of the {1, 2, 3, 4, 5}-gram word pieces the model decides to use to decompose the sequences.

[The plot for Figure 1 appeared here in the original; it did not survive extraction.]

Figure 1: Distribution of the characters covered by the n-grams of the word piece models. We train Latent Sequence Decompositions (LSD) and Maximum Extension (MaxExt) models with n ∈ {2, 3, 4, 5} sized word piece vocabulary and measure the distribution of the characters covered by the word pieces. The bars with the solid fill represent the LSD models, and the bars with the star hatch fill represent the MaxExt models. Both the LSD and MaxExt models prefer to use n ≥ 2 sized word pieces to cover the majority of the characters. The MaxExt models prefer longer word pieces to cover characters compared to the LSD models.

When the model is trained to use the bigram word piece vocabulary, we found the model to prefer bigrams (55% of the characters emitted) over characters (45% of the characters emitted) in the LSD decomposition. This suggests that a character only vocabulary may not be the best vocabulary to learn from. Our best model, LSD with n = 4 word piece vocabulary, covered the word characters 42.16%, 39.35%, 14.83% and 3.66% of the time using 1, 2, 3, 4 sized word pieces respectively. In the n = 5 word piece vocabulary model, the LSD model uses the n = 5 sized word pieces to cover approximately 2% of the characters. We suspect if we used a larger dataset, we could extend the vocabulary to cover even larger n > 5.

The MaxExt models were trained to greedily emit the longest possible word piece, consequently this prior meant the model will prefer to emit long word pieces over characters. While this decomposition results in the shorter |z| length, the WER is slightly worse than the character baseline. This suggests the much shorter decompositions generated by the MaxExt prior may not be the best decomposition. This falls onto the principle that the best z* decomposition is not only a function of y* but a function of (x, y*). In the case of ASR, the segmentation is a function of the acoustics as well as the text.
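The coverage statistic just defined is easy to state in code. The following short sketch (our own illustration, not the paper's evaluation script) computes, for a list of decoded decompositions, the fraction of non-space characters covered by word pieces of each length:

    from collections import Counter

    def coverage_by_piece_length(decompositions):
        # decompositions: list of token sequences, e.g. [["ca", "t"], ["c", "at"], ...]
        counts = Counter()
        for z in decompositions:
            for token in z:
                chars = token.replace(" ", "")   # exclude <space> from the statistic
                counts[len(token)] += len(chars)
        total = sum(counts.values())
        return {n: c / total for n, c in sorted(counts.items())}

    print(coverage_by_piece_length([["ca", "t"], ["c", "at"], ["cat"]]))
    # {1: 0.22..., 2: 0.44..., 3: 0.33...} -> length-2 pieces cover ~44% of the characters here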
Table 2 compares our WSJ results with other published end-to-end models. The best CTC model achieved 27.3% WER with REINFORCE optimization on WER (Graves & Jaitly, 2014). The previously best reported basic seq2seq model on WSJ achieved 18.0% WER (Bahdanau et al., 2016b) with Task Loss Estimation (TLE). Our baseline, also a seq2seq model, achieved 14.8% WER. Main differences between our models is that we did not use convolutional locational-based priors and we used weight noise during training. The deep CNN model with residual connections, batch normalization and convolutions achieved a WER of 11.8% (Zhang et al., 2017).

Table 2: Wall Street Journal test eval92 Word Error Rate (WER) results across Connectionist Temporal Classification (CTC) and Sequence-to-sequence (seq2seq) models. The Latent Sequence Decomposition (LSD) models use a n = 4 word piece vocabulary (LSD4). The Convolutional Neural Network (CNN) model is with deep residual connections, batch normalization and convolutions. The best end-to-end model is seq2seq + LSD + CNN at 9.6% WER.

                            Model                   WER
Graves & Jaitly (2014)      CTC                     30.1
                            CTC + WER               27.3
Hannun et al. (2014)        CTC                     35.8
Bahdanau et al. (2016a)     seq2seq                 18.6
Bahdanau et al. (2016b)     seq2seq + TLE           18.0
Zhang et al. (2017)         seq2seq + CNN²          11.8
Our Work                    seq2seq                 14.8
                            seq2seq + LSD4          12.9
                            seq2seq + LSD4 + CNN     9.6

²For our CNN architectures, we use and compare to the "(C (3 × 3) / 2) × 2 + NiN" architecture from Table 2 line 4.

Our LSD model using a n = 4 word piece vocabulary achieves a WER of 12.9%, or 12.7% relatively better than the baseline seq2seq model. If we combine our LSD model with the CNN (Zhang et al., 2017) model, we achieve a combined WER of 9.6%, or 35.1% relatively better than the baseline seq2seq model. These numbers are all reported without the use of any language model.

Please see Appendix A for the decompositions generated by our model. The LSD model learns multiple word piece decompositions for the same word sequence.

Connectionist Temporal Classification (CTC) (Graves et al., 2006; Graves & Jaitly, 2014) based models assume conditional independence, and can rely on dynamic programming for exact inference. Similarly, Ling et al. (2016) use latent codes to generate text, and also assume conditional independence and leverage on dynamic programming for exact maximum likelihood gradients. Such models can not learn the output language if the language distribution is multimodal. Our seq2seq models make no such Markovian assumptions and can learn multimodal output distributions. Collobert et al. (2016) and Zweig et al. (2016) developed extensions of CTC where they used some word pieces. However, the word pieces are only used in repeated characters and the decompositions are fixed.

Word piece models with seq2seq have also been recently used in machine translation. Sennrich et al. (2016) used word pieces in rare words, while Wu et al. (2016) used word pieces for all the words, however the decomposition is fixed and defined by heuristics or another model. The decompositions in these models are also only a function of the output sequence, while in LSD the decomposition is a function of both the input and output sequence. The LSD framework allows us to learn a distribution of decompositions rather than learning just one decomposition defined a priori.

Vinyals et al. (2016) used seq2seq to output sets; the output sequence is unordered and uses fixed length output units, whereas in our decompositions we maintain ordering and use variable length output units. Reinforcement learning (i.e., REINFORCE and other task loss estimators) (Sutton & Barto, 1998; Graves & Jaitly, 2014; Ranzato et al., 2016) learns that different output sequences can yield different task losses. However, these methods don't directly learn different decompositions of the same sequence. Future work should incorporate LSD with task loss optimization methods."}, {"section_index": "7", "section_name": "7 CONCLUSION", "section_text": "We presented the Latent Sequence Decompositions (LSD) framework. LSD allows us to learn decompositions of sequences that are a function of both the input and output sequence. We presented a biased training algorithm based on sampling valid extensions with an ε-greedy strategy, and an approximate decoding algorithm. On the Wall Street Journal speech recognition task, the sequence-to-sequence character model baseline achieves 14.8% WER while the LSD model achieves 12.9%. Using a deep convolutional neural network on the encoder with LSD, we achieve 9.6% WER."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Ashish Agarwal, Philip Bachman, Dzmitry Bahdanau, Eugene Brevdo, Jan Chorowski, Jeff Dean, Chris Dyer, Gilbert Leung, Mohammad Norouzi, Noam Shazeer, Xin Pan, Luke Vilnis, Oriol Vinyals and the Google Brain team for many insightful discussions and technical assistance."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate.
In International Conference on Learning Representations, 2015.

Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-Based Models for Speech Recognition. In Neural Information Processing Systems, 2015.

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Awni Hannun, Andrew Maas, Daniel Jurafsky, and Andrew Ng. First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs. In arXiv:1408.2873, 2014.

Salah Hihi and Yoshua Bengio. Hierarchical Recurrent Neural Networks for Long-Term Dependencies. In Neural Information Processing Systems, 1996.

Sepp Hochreiter and Jurgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, November 1997.

Jan Koutnik, Klaus Greff, Faustino Gomez, and Jurgen Schmidhuber. A Clockwork RNN. In International Conference on Machine Learning, 2014.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent Predictor Networks for Code Generation. In Association for Computational Linguistics, 2016.

Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On Using Very Large Target Vocabulary for Neural Machine Translation. In Association for Computational Linguistics, 2015.

Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc Le. Sequence to Sequence Learning with Neural Networks. In Neural Information Processing Systems, 2014.

Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural Machine Translation of Rare Words with Subword Units. In Association for Computational Linguistics, 2016.

Rita Singh, Bhiksha Raj, and Richard Stern. Automatic generation of subword units for speech recognition.

Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. Grammar as a foreign language.
In Neural Information Processing Systems, 2015a."}, {"section_index": "10", "section_name": "LEARNING THE DECOMPOSITIONS", "section_text": "We give the top 8 hypotheses generated by a baseline seq2seq character model, a Latent Sequence Decompositions (LSD) word piece model and a Maximum Extension (MaxExt) word piece model. We note that "shamrock's" is an out-of-vocabulary word while "shamrock" is in-vocabulary. The ground truth is "shamrock's pretax profit from the sale was one hundred twenty five million dollars a spokeswoman said". Note how the LSD model generates multiple decompositions for the same word sequence; this does not happen with the MaxExt model.

[The table of top-8 hypotheses and their log probabilities appeared here in the original; it did not survive extraction.]"}]
Bygq-H9eg | [{"section_index": "0", "section_name": "AN ANALYSIS OF DEEP NEURAL NETWORK MODELS FOR PRACTICAL APPLICATIONS", "section_text": "Alfredo Canziani & Eugenio Culurciello
Weldon School of Biomedical Engineering, Purdue University
{canziani,euge}@purdue.edu

Adam Paszke
Faculty of Mathematics, Informatics and Mechanics, University of Warsaw
a.paszke@students.mimuw.edu.pl"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilisation of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraints set an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "In the ImageNet classification challenge, the ultimate goal is to obtain the highest accuracy in a multi-class classification problem framework, regardless of the actual inference time. We believe that this has given rise to several problems. Firstly, it is now normal practice to run several trained instances of a given model over multiple similar instances of each validation image. This practice, also known as model averaging or ensemble of DNNs, dramatically increases the amount of computation required at inference time to achieve the published accuracy. Secondly, model selection is hindered by the fact that different submissions are evaluating their (ensemble of) models a different number of times on the validation images, and therefore the reported accuracy is biased on the specific sampling technique (and ensemble size). Thirdly, there is currently no incentive in speeding up inference time, which is a key element in practical applications of these models, and affects resource utilisation, power consumption, and latency.

This article aims to compare state-of-the-art DNN architectures, submitted for the ImageNet challenge over the last 4 years, in terms of computational requirements and accuracy. We compare these architectures on multiple metrics related to resource utilisation in actual deployments: accuracy, memory footprint, parameters, operations count, inference time and power consumption. The purpose of this paper is to stress the importance of these figures, which are essential hard constraints for the optimisation of these networks in practical deployments and applications."}, {"section_index": "3", "section_name": "2 METHODS", "section_text": "In order to compare the quality of different models, we collected and analysed the accuracy values reported in the literature. We immediately found that different sampling techniques do not allow for a direct comparison of resource utilisation.
For example, central-crop (top-5 validation) errors of a single run of VGG-16 (Simonyan & Zisserman, 2014) and GoogLeNet (Szegedy et al., 2014) are 8.70% and 10.07% respectively, revealing that VGG-16 performs better than GoogLeNet. When models are run with 10-crop sampling, then the errors become 9.33% and 9.15% respectively, and therefore VGG-16 will perform worse than GoogLeNet, using a single central-crop. For this reason, we decided to base our analysis on re-evaluations of top-1 accuracies for all networks with a single central-crop sampling technique (Zagoruyko, 2016).

[The plot for Figure 1 appeared here in the original; it did not survive extraction.]

Figure 1: Top1 vs. network. Single-crop top-1 validation accuracies for top scoring single-model architectures. We introduce with this chart our choice of colour scheme, which will be used throughout this publication to distinguish effectively different architectures and their correspondent authors. Notice that networks of the same group share the same hue, for example ResNet are all variations of pink.

For inference time and memory usage measurements we have used Torch7 (Collobert et al., 2011) with cuDNN-v5 (Chetlur et al., 2014) and CUDA-v8 back-end. All experiments were conducted on a JetPack-2.3 NVIDIA Jetson TX1 board (nVIDIA): an embedded visual computing system with a 64-bit ARM A57 CPU, a 1 T-Flop/s 256-core NVIDIA Maxwell GPU and 4 GB LPDDR4 of shared RAM. We use this resource-limited device to better underline the differences between network architectures, but similar results can be obtained on most recent GPUs, such as the NVIDIA K40 or Titan X, to name a few. Operation counts were obtained using an open-source tool that we developed (Paszke, 2016). For measuring the power consumption, a Keysight 1146B Hall effect current probe has been used with a Keysight MSO-X 2024A 200 MHz digital oscilloscope with a sampling period of 2 µs and 50 kSa/s sample rate. The system was powered by a Keysight E3645A GPIB controlled DC power supply.

In this section we report our results and comparisons. We analysed the following DNNs: AlexNet (Krizhevsky et al., 2012), batch normalised AlexNet (Zagoruyko, 2016), batch normalised Network In Network (NIN) (Lin et al., 2013), ENet (Paszke et al., 2016) for ImageNet (Culurciello, 2016), GoogLeNet (Szegedy et al., 2014), VGG-16 and -19 (Simonyan & Zisserman, 2014), ResNet-18, -34, -50, -101 and -152 (He et al., 2015), Inception-v3 (Szegedy et al., 2015) and Inception-v4 (Szegedy et al., 2016), since they obtained the highest performance, in these four years, on the ImageNet (Russakovsky et al., 2015) challenge.

[The plot for Figure 2 appeared here in the original; it did not survive extraction.]

Figure 2: Top1 vs. operations, size ∝ parameters. Top-1 one-crop accuracy versus amount of operations required for a single forward pass. The size of the blobs is proportional to the number of network parameters; a legend is reported in the bottom right corner, spanning from 5 × 10⁶ to 155 × 10⁶ params.
Both these figures share the same y-axis, and the grey dots highlight the centre of the blobs.

[The plot for Figure 3 appeared here in the original; it did not survive extraction.]

Figure 3: Inference time vs. batch size. This chart shows inference time across different batch sizes with a logarithmic ordinate and logarithmic abscissa. Missing data points are due to lack of enough system memory required to process larger batches. A speed up of 3× is achieved by AlexNet due to better optimisation of its fully connected layers for larger batches.

Figure 2 provides a different, but more informative view of the accuracy values, because it also visualises computational cost and number of network parameters. The first thing that is very apparent is that VGG, even though it is widely used in many applications, is by far the most expensive architecture - both in terms of computational requirements and number of parameters. Its 16- and 19-layer implementations are in fact isolated from all other networks. The other architectures form a steep straight line, that seems to start to flatten with the latest incarnations of Inception and ResNet. This might suggest that models are reaching an inflection point on this data set. At this inflection point, the costs - in terms of complexity - start to outweigh gains in accuracy. We will later show that this trend is hyperbolic.

Figure 3 reports inference time per image on each architecture, as a function of image batch size (from 1 to 64). We notice that VGG processes one image in a fifth of a second, making it a less likely contender in real-time applications on an NVIDIA TX1. AlexNet shows a speed up of roughly 3× going from batch of 1 to 64 images, due to weak optimisation of its fully connected layers. It is a very surprising finding, that will be further discussed in the next subsection."}, {"section_index": "4", "section_name": "3.3 POWER", "section_text": "In Figure 4 we see that the power consumption is mostly independent of the batch size. Low power values for AlexNet (batch of 1) and VGG (batch of 2) are associated with slower forward times per image, as shown in Figure 3.

[The plot for Figure 4 appeared here in the original; it did not survive extraction.]

Figure 4: Power vs. batch size. Net power consumption (due only to the forward processing of several DNNs) for different batch sizes. The idle power of the TX1 board, with no HDMI screen connected, was 1.30 W on average. The max frequency component of power supply current was 1.4 kHz, corresponding to a Nyquist sampling frequency of 2.8 kHz.

Power measurements are complicated by the high frequency swings in current consumption, which required high sampling current read-out to avoid aliasing.
In this work, we used a 200 MHz digital oscilloscope with a current probe, as reported in Section 2. Other measuring instruments, such as an AC power strip with 2 Hz sampling rate, or a GPIB controlled DC power supply with 12 Hz sampling rate, did not provide enough bandwidth to properly conduct power measurements.

[The plot for Figure 5 appeared here in the original; it did not survive extraction.]

Figure 5: Memory vs. batch size. Maximum system memory utilisation for batches of different sizes. Memory usage shows a knee graph, due to the network model memory static allocation and the variable memory used by batch size.

[The plot for Figure 7 appeared here in the original; it did not survive extraction.]

Figure 7: Operations vs. inference time, size ∝ parameters. Relationship between operations and inference time, for batches of size 1 and 16 (biggest size for which all architectures can still run). Not surprisingly, we notice a linear trend, and therefore operations count represents a good estimation of inference time. Furthermore, we can notice an increase in the slope of the trend for larger batches, which corresponds to shorter inference time due to batch processing optimisation."}, {"section_index": "5", "section_name": "3.5 OPERATIONS", "section_text": "Operations count is essential for establishing a rough estimate of inference time and hardware circuit size, in case of custom implementation of neural network accelerators. In Figure 7, for a batch of 16 images, there is a linear relationship between operations count and inference time per image. Therefore, at design time, we can pose a constraint on the number of operations to keep processing speed in a usable range for real-time applications or resource-limited deployments.

[The plot for Figure 6 appeared here in the original; it did not survive extraction.]

Figure 6: Memory vs. parameters count. Detailed view on static parameters allocation and corresponding memory utilisation. Minimum memory of 200 MB; linear afterwards with slope 1.30.

We analysed system memory consumption of the TX1 device, which uses shared memory for both CPU and GPU. Figure 5 shows that the maximum system memory usage is initially constant and then rises with the batch size. This is due to the initial memory allocation of the network model - which is the large static component - and the contribution of the memory required while processing the batch, proportionally increasing with the number of images. In Figure 6 we can also notice that the initial allocation never drops below 200 MB, for networks sized below 100 MB, and it is linear afterwards, with respect to the parameters and a slope of 1.30.
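As a worked reading of these two observations (the ~200 MB memory floor with slope 1.30, and the linear operations-to-time trend), the following sketch turns them into back-of-the-envelope estimators. The piecewise form of the memory fit is our own interpretation of Figure 6, and the ms-per-G-op slope is a free parameter to be fitted per device and batch size, not a number from the paper.

    def estimate_static_memory_mb(params_mb):
        # Reading of Fig. 6: a ~200 MB floor for small models, then linear growth
        # in the parameter size with slope 1.30 (our piecewise interpretation).
        return max(200.0, 200.0 + 1.30 * (params_mb - 100.0))

    def estimate_inference_time_ms(giga_ops, ms_per_gop):
        # Reading of Fig. 7: inference time grows linearly with the operation count,
        # so a single fitted slope (ms per G-op, device- and batch-specific) suffices.
        return giga_ops * ms_per_gop

    print(estimate_static_memory_mb(50))        # -> 200.0
    print(estimate_static_memory_mb(500))       # -> 720.0
    print(estimate_inference_time_ms(30, 4.0))  # 30 G-ops at an assumed 4 ms/G-op -> 120.0 ms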
Figure 8: Operations vs. power consumption, size ∝ parameters. Independency of power and operations is shown by a lack of directionality of the distributions shown in these scatter charts. Full resources utilisation and lower inference time for the AlexNet architecture is reached with larger batches.

[The plot for Figure 9 appeared here in the original; it did not survive extraction.]

Figure 9: Accuracy vs. inferences per second, size ∝ operations. A non-trivial linear upper bound is shown in these scatter plots, illustrating the relationship between prediction accuracy and throughput of all examined architectures. These are the first charts in which the area of the blobs is proportional to the amount of operations, instead of the parameters count. We can notice that larger blobs are concentrated on the left side of the charts, in correspondence of low throughput, i.e. longer inference times. Most of the architectures lay on the linear interface between the grey and white areas. If a network falls in the shaded area, it means it achieves exceptional accuracy or inference speed. The white area indicates a suboptimal region. E.g. both AlexNet architectures improve processing speed as larger batches are adopted, gaining 80 Hz."}, {"section_index": "6", "section_name": "3.6 OPERATIONS AND POWER", "section_text": "In this section we analyse the relationship between power consumption and number of operations required by a given model. Figure 8 reports that there is no specific power footprint for different architectures. When full resources utilisation is reached, generally with larger batch sizes, all networks consume roughly an additional 11.8 W, with a standard deviation of 0.7 W. Idle power is 1.30 W. This corresponds to the maximum system power at full utilisation. Therefore, if energy consumption is one of our concerns, for example for battery-powered devices, one can simply choose the slowest architecture which satisfies the application minimum requirements."}, {"section_index": "7", "section_name": "3.7 ACCURACY AND THROUGHPUT", "section_text": "We note that there is a non-trivial linear upper bound between accuracy and number of inferences per unit time. Figure 9 illustrates that for a given frame rate, the maximum accuracy that can be achieved is linearly proportional to the frame rate itself. All networks analysed here come from several publications, and have been independently trained by other research groups. A linear fit of the accuracy shows all architectures trade accuracy vs. speed. Moreover, given a specific inference time, one can now come up with the theoretical accuracy upper bound when resources are fully utilised, as seen in Section 3.6. Since the power consumption is constant, we can even go one step further, and obtain an upper bound in accuracy even for an energetic constraint, which could possibly be an essential designing factor for a network that needs to run on an embedded system.

Figure 10: Accuracy per parameter vs. network. Information density (accuracy per parameters) is an efficiency metric that highlights the capacity of a specific architecture to better utilise its parametric space. Models like VGG and AlexNet are clearly oversized, and do not take full advantage of their potential learning ability. On the far right, ResNet-18, BN-NIN, GoogLeNet and ENet (marked by grey arrows) do a better job at "squeezing" all their neurons to learn the given task, and are the winners of this section.
As the spoiler in Section 3.1 already gave away, the linear nature of the accuracy vs. throughput relationship translates into a hyperbolic one when the forward inference time is considered instead. Then, given that the operations count is linear with the inference time, we get that the accuracy has a hyperbolic dependency on the amount of computations that a network requires."}, {"section_index": "8", "section_name": "3.8 PARAMETERS UTILISATION", "section_text": "DNNs are known to be highly inefficient in utilising their full learning power (number of parameters / degrees of freedom). Prominent work (Han et al., 2015) exploits this flaw to reduce network file size up to 50×, using weights pruning, quantisation and variable-length symbol encoding. It is worth noticing that, using more efficient architectures to begin with may produce even more compact representations. In Figure 10 we clearly see that, although VGG has a better accuracy than AlexNet (as shown by Figure 1), its information density is worse. This means that the amount of degrees of freedom introduced in the VGG architecture brings a lesser improvement in terms of accuracy. Moreover, ENet (Paszke et al., 2016) - which we have specifically designed to be highly efficient and which has been adapted and retrained on ImageNet (Culurciello, 2016) for this work - achieves the highest score, showing that 24× fewer parameters are sufficient to provide state-of-the-art results."}, {"section_index": "9", "section_name": "4 CONCLUSIONS", "section_text": "In this paper we analysed multiple state-of-the-art deep neural networks submitted to the ImageNet challenge, in terms of accuracy, memory footprint, parameters, operations count, inference time and power consumption. Our goal is to provide insights into the design choices that can lead to efficient neural networks for practical application, and optimisation of the often-limited resources in actual deployments, which led us to the creation of ENet - or Efficient-Network - for ImageNet. We show that accuracy and inference time are in a hyperbolic relationship: a little increment in accuracy costs a lot of computational time. We show that the number of operations in a network model can effectively estimate inference time. We show that an energy constraint will set a specific upper bound on the maximum achievable accuracy and model complexity, in terms of operations counts. Finally, we show that ENet is the best architecture in terms of parameter space utilisation, squeezing up to 13× more information per parameter used with respect to the reference model AlexNet, and 24× with respect to VGG-19."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient Primitives for Deep Learning. arXiv preprint arXiv:1410.0759, 2014.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
This paper would not have looked so pretty without the Python Software Foundation, the matplotlib library and the communities of stackoverflow and TeX of StackExchange, which I ought to thank. This work is partly supported by the Office of Naval Research (ONR) grants N00014-12-1-0167, N00014-15-1-2791 and MURI N00014-10-1-0278. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the TX1, Titan X, and K40 GPUs used for this research.
Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
"}]
HJ0UKP9ge | [{"section_index": "0", "section_name": "BI-DIRECTIONAL ATTENTION FLOW FOR MACHINE COMPREHENSION", "section_text": "Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC. Typically these methods use attention to focus on a small portion of the context and summarize it with a fixed-size vector, couple attentions temporally, and/or often form a uni-directional attention. In this paper we introduce the Bi-Directional Attention Flow (BiDAF) network, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization. Our experimental evaluations show that our model achieves the state-of-the-art results on the Stanford Question Answering Dataset (SQuAD) and the CNN/DailyMail cloze test."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The tasks of machine comprehension (MC) and question answering (QA) have gained significant popularity over the past few years within the natural language processing and computer vision communities. Systems trained end-to-end now achieve promising results on a variety of tasks in the text and image domains. One of the key factors to the advancement has been the use of neural attention mechanisms, which enable the system to focus on a targeted area within a context paragraph (for MC) or within an image (for Visual QA) that is most relevant to answer the question (Weston et al., 2015; Antol et al., 2015; Xiong et al., 2016a). Attention mechanisms in previous works typically have one or more of the following characteristics. First, the computed attention weights are often used to extract the most relevant information from the context for answering the question by summarizing the context into a fixed-size vector. Second, in the text domain, they are often temporally dynamic, whereby the attention weights at the current time step are a function of the attended vector at the previous time step. Third, they are usually uni-directional, wherein the query attends on the context paragraph or the image.
In this paper, we introduce the Bi-Directional Attention Flow (BiDAF) network, a hierarchical multi-stage architecture for modeling the representations of the context paragraph at different levels of granularity (Figure 1). BiDAF includes character-level, word-level, and contextual embeddings, and uses bi-directional attention flow to obtain a query-aware context representation. Our attention mechanism offers the following improvements over the previously popular attention paradigms. First, our attention layer is not used to summarize the context paragraph into a fixed-size vector. Instead, the attention is computed for every time step, and the attended vector at each time step, along with the representations from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization. Second, we use a memory-less attention mechanism. That is, while we iteratively compute attention through time as in Bahdanau et al. (2015), the attention at each time step is a function of only the query and the context paragraph at the current time step and does not directly depend on the attention at the previous time step.
We hypothesize that this simplification leads to the division of labor between the attention layer and the modeling layer. It forces the attention layer to focus on learning the attention between the query and the context, and enables the modeling layer to focus on learning the interaction within the query-aware context representation (the output of the attention layer). It also allows the attention at each time step to be unaffected by incorrect attendances at previous time steps. Our experiments show that memory-less attention gives a clear advantage over dynamic attention. Third, we use attention mechanisms in both directions, query-to-context and context-to-query, which provide complementary information to each other.
* The majority of the work was done while the author was interning at the Allen Institute for AI.
"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "
[Figure 1 diagram: the six BiDAF layers - Character Embed (Char-CNN), Word Embed (GloVe), Contextual Embed, Attention Flow (Context2Query and Query2Context), Modeling, and Output (start/end prediction) - applied to context words x1 ... xT and query words q1 ... qJ.]
Figure 1: BiDirectional Attention Flow Model (best viewed in color).
Our BiDAF model outperforms all previous approaches on the highly-competitive Stanford Question Answering Dataset (SQuAD) test set leaderboard at the time of submission. With a modification to only the output layer, BiDAF achieves the state-of-the-art results on the CNN/DailyMail cloze test. We also provide an in-depth ablation study of our model on the SQuAD development set, visualize the intermediate feature spaces in our model, and analyse its performance as compared to a more traditional language model for machine comprehension (Rajpurkar et al., 2016)."}, {"section_index": "3", "section_name": "2 MODEL", "section_text": "Our machine comprehension model is a hierarchical multi-stage process and consists of six layers (Figure 1):
1. Character Embedding Layer maps each word to a vector space using character-level CNNs.
2. Word Embedding Layer maps each word to a vector space using a pre-trained word embedding model.
3. Contextual Embedding Layer utilizes contextual cues from surrounding words to refine the embedding of the words. These first three layers are applied to both the query and context.
4. Attention Flow Layer couples the query and context vectors and produces a set of query-aware feature vectors for each word in the context.
5. Modeling Layer employs a Recurrent Neural Network to scan the context.
6. Output Layer provides an answer to the query.
1. Character Embedding Layer. The character embedding layer is responsible for mapping each word to a high-dimensional vector space. Let {x1, ..., xT} and {q1, ..., qJ} represent the words in the input context paragraph and query, respectively. Following Kim (2014), we obtain the character-level embedding of each word using Convolutional Neural Networks (CNN). Characters are embedded into vectors, which can be considered as 1D inputs to the CNN, and whose size is the input channel size of the CNN. The outputs of the CNN are max-pooled over the entire width to obtain a fixed-size vector for each word.
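As an illustration of the character embedding just described, here is a minimal PyTorch sketch of ours (not the paper's code); the character vocabulary size and character dimension are hypothetical stand-ins, while the 100 filters of width 5 match the settings quoted later in the model details.
```python
import torch
import torch.nn as nn

class CharCNNEmbedding(nn.Module):
    """Character-level word embedding in the style of Kim (2014)."""

    def __init__(self, num_chars=100, char_dim=8, num_filters=100, width=5):
        super().__init__()
        self.embed = nn.Embedding(num_chars, char_dim)
        # 1D convolution over the character sequence of each word.
        self.conv = nn.Conv1d(char_dim, num_filters, kernel_size=width, padding=width // 2)

    def forward(self, char_ids):
        # char_ids: (num_words, max_word_len) integer character indices.
        x = self.embed(char_ids)        # (num_words, max_word_len, char_dim)
        x = x.transpose(1, 2)           # Conv1d expects (batch, channels, length)
        x = torch.relu(self.conv(x))    # (num_words, num_filters, max_word_len)
        return x.max(dim=2).values      # max-pool over width -> one vector per word

words = torch.randint(0, 100, (7, 16))  # 7 words, 16 characters each (toy input)
print(CharCNNEmbedding()(words).shape)  # torch.Size([7, 100])
```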
The concatenation of the character and word embedding vectors is passed to a two-layer Highway Network (Srivastava et al., 2015). The outputs of the Highway Network are two sequences of d-dimensional vectors, or more conveniently, two matrices: X ∈ R^{d×T} for the context and Q ∈ R^{d×J} for the query.
It is worth noting that the first three layers of the model are computing features from the query and context at different levels of granularity, akin to the multi-stage feature computation of convolutional neural networks in the computer vision field.
3. Contextual Embedding Layer. We use a Long Short-Term Memory Network (LSTM) (Hochreiter & Schmidhuber, 1997) on top of the embeddings provided by the previous layers to model the temporal interactions between words. We place an LSTM in both directions, and concatenate the outputs of the two LSTMs. Hence we obtain H ∈ R^{2d×T} from the context word vectors X, and U ∈ R^{2d×J} from the query word vectors Q. Note that each column vector of H and U is 2d-dimensional because of the concatenation of the outputs of the forward and backward LSTMs, each with d-dimensional output.
4. Attention Flow Layer. The attention flow layer is responsible for linking and fusing information from the context and the query words. Unlike previously popular attention mechanisms (Weston et al., 2015; Hill et al., 2016; Sordoni et al., 2016; Shen et al., 2016), the attention flow layer is not used to summarize the query and context into single feature vectors. Instead, the attention vector at each time step, along with the embeddings from previous layers, is allowed to flow through to the subsequent modeling layer. This reduces the information loss caused by early summarization.
The inputs to the layer are contextual vector representations of the context H and the query U. The outputs of the layer are the query-aware vector representations of the context words, G, along with the contextual embeddings from the previous layer.
In this layer, we compute attentions in two directions: from context to query as well as from query to context. Both of these attentions, which will be discussed below, are derived from a shared similarity matrix, S ∈ R^{T×J}, between the contextual embeddings of the context (H) and the query (U), where S_tj indicates the similarity between the t-th context word and the j-th query word. The similarity matrix is computed by
S_tj = α(H_{:t}, U_{:j}) ∈ R (1)
where α is a trainable scalar function that encodes the similarity between its two input vectors; we choose α(h, u) = w_{(S)}⊤[h; u; h ∘ u], where w_{(S)} ∈ R^{6d} is a trainable weight vector, ∘ is elementwise multiplication and [;] is vector concatenation (this is the definition whose simpler variants are compared in Appendix B).
Context-to-query Attention. Context-to-query (C2Q) attention signifies which query words are most relevant to each context word. Let a_t ∈ R^J represent the attention weights on the query words by the t-th context word, with Σ_j a_tj = 1 for all t. The attention weight is computed by a_t = softmax(S_{t:}) ∈ R^J, and subsequently each attended query vector is Ũ_{:t} = Σ_j a_tj U_{:j}. Hence Ũ is a 2d-by-T matrix containing the attended query vectors for the entire context.
Query-to-context Attention. Query-to-context (Q2C) attention signifies which context words have the closest similarity to one of the query words and are hence critical for answering the query. We obtain the attention weights on the context words by b = softmax(max_col(S)) ∈ R^T, where the maximum function (max_col) is performed across the column. Then the attended context vector is h̃ = Σ_t b_t H_{:t} ∈ R^{2d}. This vector indicates the weighted sum of the most important words in the context with respect to the query; h̃ is tiled T times across the columns, giving H̃ ∈ R^{2d×T}.
Finally, the contextual embeddings and the attention vectors are combined together to yield G, where each column vector can be considered as the query-aware representation of each context word. We define G by
G_{:t} = β(H_{:t}, Ũ_{:t}, H̃_{:t}) ∈ R^{d_G} (2)
where G_{:t} is the t-th column vector (corresponding to the t-th context word), β is a trainable vector function that fuses its (three) input vectors, and d_G is the output dimension of the β function. While the β function can be an arbitrary trainable neural network, such as a multi-layer perceptron, a simple concatenation as follows still shows good performance in our experiments: β(h, u, h̃) = [h; u; h ∘ u; h ∘ h̃] ∈ R^{8d} (i.e., d_G = 8d).
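The two attention directions and the fusion into G can be sketched as below (our illustration, not the authors' implementation; d, T and J are hypothetical sizes, and the weight vector is randomly initialized rather than trained).
```python
import torch

d, T, J = 100, 30, 10
H = torch.randn(2 * d, T)                       # contextual context embeddings
U = torch.randn(2 * d, J)                       # contextual query embeddings
w_s = torch.randn(6 * d)                        # weight vector of alpha (trainable in practice)

# Similarity matrix S[t, j] = w_s . [h; u; h*u]
h = H.t().unsqueeze(1).expand(T, J, 2 * d)
u = U.t().unsqueeze(0).expand(T, J, 2 * d)
S = torch.cat([h, u, h * u], dim=2) @ w_s       # (T, J)

A = torch.softmax(S, dim=1)                     # C2Q: attention over query words
U_tilde = U @ A.t()                             # (2d, T) attended query vectors

b = torch.softmax(S.max(dim=1).values, dim=0)   # Q2C: one weight per context word
h_tilde = (H * b).sum(dim=1, keepdim=True)      # (2d, 1) attended context vector
H_tilde = h_tilde.expand(2 * d, T)              # tiled T times

G = torch.cat([H, U_tilde, H * U_tilde, H * H_tilde], dim=0)  # (8d, T)
print(G.shape)
```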
5. Modeling Layer. The input to the modeling layer is G, which encodes the query-aware representations of context words. The output of the modeling layer captures the interaction among the context words conditioned on the query. This is different from the contextual embedding layer, which captures the interaction among context words independent of the query. We use two layers of bi-directional LSTM, with the output size of d for each direction. Hence we obtain a matrix M ∈ R^{2d×T}, which is passed onto the output layer to predict the answer. Each column vector of M is expected to contain contextual information about the word with respect to the entire context paragraph and the query.
6. Output Layer. The output layer is application-specific. The modular nature of BiDAF allows us to easily swap out the output layer based on the task, with the rest of the architecture remaining exactly the same. Here, we describe the output layer for the QA task. In Section 5, we use a slight modification of this output layer for cloze-style comprehension.
The QA task requires the model to find a sub-phrase of the paragraph to answer the query. The phrase is derived by predicting the start and the end indices of the phrase in the paragraph. We obtain the probability distribution of the start index over the entire paragraph by
p1 = softmax(w_{(p1)}⊤[G; M])
where w_{(p1)} ∈ R^{10d} is a trainable weight vector. For the end index of the answer phrase, we pass M to another bidirectional LSTM layer and obtain M2 ∈ R^{2d×T}. Then we use M2 to obtain the probability distribution of the end index in a similar manner:
p2 = softmax(w_{(p2)}⊤[G; M2])
Training. We define the training loss (to be minimized) as the sum of the negative log probabilities of the true start and end indices by the predicted distributions, averaged over all examples:
L(θ) = −(1/N) Σ_{i=1}^{N} [ log(p1_{y1_i}) + log(p2_{y2_i}) ]
where θ is the set of all trainable weights in the model (the weights and biases of CNN filters and LSTM cells, w_{(S)}, w_{(p1)} and w_{(p2)}), N is the number of examples in the dataset, y1_i and y2_i are the true start and end indices of the i-th example, respectively, and p_k indicates the k-th value of the vector p.
Test. The answer span (k, l) where k ≤ l with the maximum value of p1_k p2_l is chosen, which can be computed in linear time with dynamic programming.
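As a sketch of the span selection at test time (our illustration; p1 and p2 are toy probability vectors), the best span can be found in one pass by tracking the best start index seen so far:
```python
import numpy as np

def best_span(p1, p2):
    """Return (k, l), k <= l, maximizing p1[k] * p2[l] in O(T) time."""
    best_start, best, span = 0, -1.0, (0, 0)
    for l in range(len(p1)):
        if p1[l] > p1[best_start]:          # best start index among 0..l
            best_start = l
        if p1[best_start] * p2[l] > best:
            best = p1[best_start] * p2[l]
            span = (best_start, l)
    return span

p1 = np.array([0.1, 0.6, 0.2, 0.1])         # start-index distribution
p2 = np.array([0.1, 0.1, 0.7, 0.1])         # end-index distribution
print(best_span(p1, p2))                    # (1, 2)
```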
Machine comprehension. A significant contributor to the advancement of MC models has been the availability of large datasets. Early datasets such as MCTest (Richardson et al., 2013) were too small to train end-to-end neural models. Massive cloze test datasets (CNN/DailyMail by Hermann et al. (2015) and Children's Book Test by Hill et al. (2016)) enabled the application of deep neural architectures to this task. More recently, Rajpurkar et al. (2016) released the Stanford Question Answering (SQuAD) dataset with over 100,000 questions. We evaluate the performance of our comprehension system on both SQuAD and CNN/DailyMail datasets.
Previous works in end-to-end machine comprehension use attention mechanisms in three distinct ways. The first group (largely inspired by Bahdanau et al. (2015)) uses a dynamic attention mechanism, in which the attention weights are updated dynamically given the query and the context as well as the previous attention. Hermann et al. (2015) argue that the dynamic attention model performs better than using a single fixed query vector to attend on context words on the CNN & DailyMail datasets. Chen et al. (2016) show that simply using a bilinear term for computing the attention weights in the same model drastically improves the accuracy. Wang & Jiang (2016) reverse the direction of the attention (attending on query words as the context RNN progresses) for SQuAD. In contrast to these models, BiDAF uses a memory-less attention mechanism.
The second group computes the attention weights once, which are then fed into an output layer for final prediction (e.g., Kadlec et al. (2016)). The attention-over-attention model (Cui et al., 2016) uses a 2D similarity matrix between the query and context words (similar to Equation 1) to compute the weighted average of query-to-context attention. In contrast to these models, BiDAF does not summarize the two modalities in the attention layer and instead lets the attention vectors flow into the modeling (RNN) layer.
The third group (considered as variants of Memory Network (Weston et al., 2015)) repeats computing an attention vector between the query and the context through multiple layers, typically referred to as multi-hop (Sordoni et al., 2016; Dhingra et al., 2016). Shen et al. (2016) combine Memory Networks with Reinforcement Learning in order to dynamically control the number of hops. One can also extend our BiDAF model to incorporate multiple hops.
Visual question answering. The task of question answering has also gained a lot of interest in the computer vision community. Early works on visual question answering (VQA) involved encoding the question using an RNN, encoding the image using a CNN, and combining them to answer the question (Antol et al., 2015; Malinowski et al., 2015). Attention mechanisms have also been successfully employed for the VQA task and can be broadly clustered based on the granularity of their attention and the approach to construct the attention matrix. At the coarse level of granularity, the question attends to different patches in the image (Zhu et al., 2016; Xiong et al., 2016a). At a fine level, each question word attends to each image patch and the highest attention value for each spatial location (Xu & Saenko, 2016) is adopted. A hybrid approach is to combine question representations at multiple levels of granularity (unigrams, bigrams, trigrams) (Yang et al., 2015). Several approaches to constructing the attention matrix have been used, including element-wise product, element-wise sum, concatenation and Multimodal Compact Bilinear Pooling (Fukui et al., 2016).
Lu et al. (2016) have recently shown that, in addition to attending from the question to image patches, attending from the image back to the question words provides an improvement on the VQA task. This finding in the visual domain is consistent with our finding in the language domain, where our bi-directional attention between the query and context provides improved results. Their model, however, uses the attention weights directly in the output layer and does not take advantage of the attention flow to the modeling layer.
"}, {"section_index": "4", "section_name": "4 QUESTION ANSWERING EXPERIMENTS", "section_text": "In this section, we evaluate our model on the task of question answering using the recently released SQuAD (Rajpurkar et al., 2016), which has gained considerable attention over a few months. In the next section, we evaluate our model on the task of cloze-style reading comprehension.
Dataset. SQuAD is a machine comprehension dataset on a large set of Wikipedia articles, with more than 100,000 questions. The answer to each question is always a span in the context. The model is given credit if its answer matches one of the human-written answers.
Two metrics are used to evaluate models: Exact Match (EM) and a softer metric, F1 score, which measures the weighted average of the precision and recall rate at character level. The dataset consists of 90k/10k train/dev question-context tuples with a large hidden test set. It is one of the largest available MC datasets with human-written questions and serves as a great test bed for our model.
[Table 1a rows - SQuAD test set, Single Model EM / F1; Ensemble EM / F1:]
Logistic Regression Baseline^a: 40.4 / 51.0
Dynamic Chunk Reader^b: 62.5 / 71.0
Fine-Grained Gating^c: 62.5 / 73.3
Match-LSTM^d: 64.7 / 73.7; 67.9 / 77.0
Multi-Perspective Matching^e: 65.5 / 75.1; 68.2 / 77.2
Dynamic Coattention Networks^f: 66.2 / 75.9; 71.6 / 80.4
R-Net^g: 68.4 / 77.5; 72.1 / 79.7
BiDAF (Ours): 68.0 / 77.3; 73.3 / 81.1
[Table 1b rows - ablations on the SQuAD dev set, EM / F1:]
No char embedding: 65.0 / 75.4
No word embedding: 55.5 / 66.8
No C2Q attention: 57.2 / 67.7
No Q2C attention: 63.6 / 73.7
Dynamic attention: 63.5 / 73.6
BiDAF (single): 67.7 / 77.3
BiDAF (ensemble): 72.6 / 80.7
Table 1: (1a) The performance of our model BiDAF and competing approaches by Rajpurkar et al. (2016)^a, Yu et al. (2016)^b, Yang et al. (2016)^c, Wang & Jiang (2016)^d, IBM Watson^e (unpublished), Xiong et al. (2016b)^f, and Microsoft Research Asia^g (unpublished) on the SQuAD test set. A concurrent work by Lee et al. (2016) does not report the test scores. All results shown here reflect the SQuAD leaderboard (stanford-qa.com) as of 6 Dec 2016, 12pm PST. (1b) The performance of our model and its ablations on the SQuAD dev set. Ablation results are presented only for single runs.
Model Details. The model architecture used for this task is depicted in Figure 1. Each paragraph and question are tokenized by a regular-expression-based word tokenizer (PTB Tokenizer) and fed into the model. We use 100 1D filters for CNN char embedding, each with a width of 5. The hidden state size (d) of the model is 100. The model has about 2.6 million parameters. We use the AdaDelta (Zeiler, 2012) optimizer, with a minibatch size of 60 and an initial learning rate of 0.5, for 12 epochs. A dropout (Srivastava et al., 2014) rate of 0.2 is used for the CNN, all LSTM layers, and the linear transformation before the softmax for the answers. During training, the moving averages of all weights of the model are maintained with the exponential decay rate of 0.999. At test time, the moving averages instead of the raw weights are used. The training process takes roughly 20 hours on a single Titan X GPU. We also train an ensemble model consisting of 12 training runs with the identical architecture and hyper-parameters. At test time, we choose the answer with the highest sum of confidence scores amongst the 12 runs for each question.
Results. The results of our model and competing approaches on the hidden test set are summarized in Table 1a. BiDAF (ensemble) achieves an EM score of 73.3 and an F1 score of 81.1, outperforming all previous approaches.
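The exponential moving average of the weights mentioned in the model details above can be sketched as follows (our illustration with a toy parameter; the 0.999 decay is the value quoted there):
```python
def update_ema(ema_weights, weights, decay=0.999):
    """Maintain shadow weights w_ema = decay * w_ema + (1 - decay) * w.

    At test time the shadow (averaged) weights replace the raw ones.
    """
    for name, w in weights.items():
        ema_weights[name] = decay * ema_weights[name] + (1.0 - decay) * w
    return ema_weights

weights = {"w1": 1.0}    # toy parameter value after a gradient step
ema = {"w1": 0.0}        # shadow copy
for _ in range(3):
    ema = update_ema(ema, weights)
print(ema["w1"])         # slowly tracks the raw weight
```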
Ablations. Table 1b shows the performance of our model and its ablations on the SQuAD dev set. Both char-level and word-level embeddings contribute towards the model's performance. We conjecture that word-level embedding is better at representing the semantics of each word as a whole, while char-level embedding can better handle out-of-vocab (OOV) or rare words. To evaluate bi-directional attention, we remove the C2Q and Q2C attentions. For ablating C2Q attention, we replace the attended question vector Ũ with the average of the output vectors of the question's contextual embedding layer (LSTM). C2Q attention proves to be critical, with a drop of more than 10 points on both metrics. For ablating Q2C attention, the output of the attention layer, G, does not include terms that have the attended Q2C vectors, H̃. To evaluate the attention flow, we study a dynamic attention model, where the attention is dynamically computed within the modeling layer's LSTM, following previous work (Bahdanau et al., 2015; Wang & Jiang, 2016). This is in contrast with our approach, where the attention is pre-computed before flowing to the modeling layer. Despite being a simpler attention mechanism, our proposed static attention outperforms the dynamically computed attention by more than 3 points. We conjecture that separating out the attention layer results in a richer set of features computed in the first 4 layers, which are then incorporated by the modeling layer. We also show the performance of BiDAF with several different definitions of the α and β functions (Equations 1 and 2) in Appendix B.
[Table 2 rows - Layer / Query / Closest words in the Context using cosine similarity:]
Word / When / when, When, After, after, He, he, But, but, before, Before
Contextual / When / When, when, 1945, 1991, 1971, 1967, 1990, 1972, 1965, 1953
Word / Where / Where, where, It, IT, it, they, They, that, That, city
Contextual / Where / where, Where, Rotterdam, area, Nearby, location, outside, Area, across, locations
Word / Who / Who, who, He, he, had, have, she, She, They, they
Contextual / Who / who, whose, whom, Guiscard, person, John, Thomas, families, Elway, Louis
Word / city / City, city, town, Town, Capital, capital, district, cities, province, Downtown
Contextual / city / city, City, Angeles, Paris, Prague, Chicago, Port, Pittsburgh, London, Manhattan
Word / January / July, December, June, October, January, September, February, April, November, March
Contextual / January / January, March, December, August, December, July, July, July, March, December
Word / Seahawks / Seahawks, Broncos, 49ers, Ravens, Chargers, Steelers, quarterback, Vikings, Colts, NFL
Contextual / Seahawks / Seahawks, Broncos, Panthers, Vikings, Packers, Ravens, Patriots, Falcons, Steelers, Chargers
Word / date / date, dates, until, Until, June, July, Year, year, December, deadline
Contextual / date / date, dates, December, July, January, October, June, November, March, February
Table 2: Closest context words to a given query word, using a cosine similarity metric computed in the Word Embedding feature space and the Phrase Embedding feature space.
Visualizations. We now provide a qualitative analysis of our model on the SQuAD dev set. First, we visualize the feature spaces after the word and contextual embedding layers. These two layers are responsible for aligning the embeddings between the query and context words, which are the inputs to the subsequent attention layer. To visualize the embeddings, we choose a few frequent query words in the dev data and look at the context words that have the highest cosine similarity to the query words (Table 2). At the word embedding layer, query words such as When, Where and Who are not well aligned to possible answers in the context, but this dramatically changes in the contextual embedding layer, which has access to context from surrounding words and is just 1 layer below the attention layer. When begins to match years, Where matches locations, and Who matches names.
We also visualize these two feature spaces using t-SNE in Figure 2. t-SNE is performed on a large fraction of dev data, but we only plot data points corresponding to the months of the year. An interesting pattern emerges in the Word space, where May is separated from the rest of the months because May has multiple meanings in the English language. The contextual embedding layer uses contextual cues from surrounding words and is able to separate the usages of the word May. Finally, we visualize the attention matrices for some question-context tuples in the dev data in Figure 3. In the first example, Where matches locations, and in the second example, many matches quantities and numerical symbols. Also, entities in the question typically attend to the same entities in the context, thus providing a feature for the model to localize possible answers.
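The nearest-neighbor probe behind Table 2 can be sketched as below (our illustration with toy vectors; the real query and context embeddings come from the word and contextual embedding layers):
```python
import numpy as np

def closest_words(query_vec, context_vecs, context_words, k=3):
    """Rank context words by cosine similarity to a query word's embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    C = context_vecs / np.linalg.norm(context_vecs, axis=1, keepdims=True)
    sims = C @ q
    order = np.argsort(-sims)[:k]
    return [(context_words[i], float(sims[i])) for i in order]

rng = np.random.default_rng(0)
words = ["when", "1945", "Rotterdam", "city"]
vecs = rng.normal(size=(4, 8))           # toy 8-d embeddings
query = vecs[0] + 0.1 * rng.normal(size=8)
print(closest_words(query, vecs, words))
```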
Discussions. We analyse the performance of our model against a traditional language-feature-based baseline (Rajpurkar et al., 2016). Figure 2b shows a Venn diagram of the dev set questions correctly answered by the models. Our model is able to answer more than 86% of the questions correctly answered by the baseline. The 14% that are incorrectly answered do not have a clear pattern. This suggests that neural architectures are able to exploit much of the information captured by the language features. We also break this comparison down by the first words in the questions (Figure 2c). Our model outperforms the traditional baseline comfortably in every category.
[Figure 2 panels: (a) t-SNE scatter plots (t-SNE Dimensions 1 and 2) of month names in the Word Embed Space and Phrase Embed Space; (b) Venn diagram with 509 questions answered only by the Baseline, 3734 by both models, and 3585 only by BiDAF; (c) % of questions with correct answers per first question word - what (4753), how (1090), who (1061), when (696), which (454), in (444), where (433), why (151), on (44), to (43).]
Figure 2: (a) t-SNE visualizations of the month names embedded in the two feature spaces. The contextual embedding layer is able to distinguish the two usages of the word May using context from the surrounding text. (b) Venn diagram of the questions answered correctly by our model and the more traditional baseline (Rajpurkar et al., 2016). (c) Correctly answered questions broken down by the 10 most frequent first words in the question.
[Figure 3 examples: (top) a Super Bowl 50 context paragraph, where the question word "Where" attends to "at, the, Stadium, Levi, in, Santa, Ana" and the words "Super", "Bowl" and "50" attend to their matches in the context; (bottom) a Warsaw natural-reserves paragraph, where "How many" attends to "hundreds, few, among, 15, several, only, 13, 9" and entity words such as "Warsaw" attend to the same entities.]
Figure 3: Attention matrices for question-context tuples. The left palette shows the context paragraph (correct answer in red and underlined), the middle palette shows the attention matrix (each row is a question word, each column is a context word), and the right palette shows the top attention points for each question word, above a threshold.
Error Analysis. We randomly select 50 incorrect questions (based on EM) and categorize them into 6 classes. 50% of errors are due to the imprecise boundaries of the answers, 28% involve syntactic complications and ambiguities, 14% are paraphrase problems, 4% require external knowledge, 2% need multiple sentences to answer, and 2% are due to mistakes during tokenization. See Appendix A for examples of the error modes.
"}, {"section_index": "5", "section_name": "CLOZE TEST EXPERIMENTS", "section_text": "Dataset. In a cloze test, the reader is asked to fill in words that have been removed from a passage, for measuring one's ability to comprehend text. Hermann et al. (2015) have recently compiled a massive cloze-style comprehension dataset, consisting of 300k/4k/3k and 879k/65k/53k (train/dev/test) examples from CNN and DailyMail news articles, respectively. Each example has a news article and an incomplete sentence extracted from the human-written summary of the article. To distinguish this task from language modeling and force one to refer to the article to predict the correct missing word, the missing word is always a named entity, anonymized with a random ID. Also, the IDs must be shuffled constantly during test, which is also critical for full anonymization.
Model Details. The model architecture used for this task is very similar to that for SQuAD (Section 4), with only a few small changes to adapt it to the cloze test. Since each answer in the CNN/DailyMail datasets is always a single word (entity), we only need to predict the start index (p1); the prediction for the end index (p2) is omitted from the loss function. Also, we mask out all non-entity words in the final classification layer so that they are forced to be excluded from possible answers. Another important difference from SQuAD is that the answer entity might appear more than once in the context paragraph. To address this, we follow a similar strategy from Kadlec et al. (2016). During training, after we obtain p1, we sum all probability values of the entity instances in the context that correspond to the correct answer. Then the loss function is computed from the summed probability.
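The summed-probability objective just described can be sketched as follows (our illustration; the logits and entity positions are toy values standing in for the model's start-index scores):
```python
import torch

logits = torch.randn(1, 20, requires_grad=True)  # scores over 20 context positions
entity_mask = torch.zeros(20, dtype=torch.bool)
entity_mask[[3, 11, 17]] = True                  # positions of the answer entity

p1 = torch.softmax(logits, dim=1).squeeze(0)     # start-index distribution
p_answer = p1[entity_mask].sum()                 # sum over all instances of the entity
loss = -torch.log(p_answer)                      # negative log of summed probability
loss.backward()
```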
We use a minibatch size of 48 and train for 8 epochs, with early stopping when the accuracy on validation data starts to drop. Inspired by the window-based method (Hill et al., 2016), we split each article into short sentences, where each sentence is a 19-word window around each entity (hence the same word might appear in multiple sentences). The RNNs in BiDAF are not feed-forwarded or back-propagated across sentences, which speeds up the training process by parallelization. The entire training process takes roughly 60 hours on eight Titan X GPUs. The other hyper-parameters are identical to the model described in Section 4.
Results. The results of our single-run models and competing approaches on the CNN/DailyMail datasets are summarized in Table 3. * indicates ensemble methods. BiDAF outperforms previous single-run models on both datasets for both val and test data. On the DailyMail test, our single-run model even outperforms the best ensemble method.
[Table 3 rows - CNN (val / test); DailyMail (val / test):]
Attentive Reader (Hermann et al., 2015): 61.6 / 63.0; 70.5 / 69.0
MemNN (Hill et al., 2016): 63.4 / 66.8; — / —
AS Reader (Kadlec et al., 2016): 68.6 / 69.5; 75.0 / 73.9
DER Network (Kobayashi et al., 2016): 71.3 / 72.9; — / —
Iterative Attention (Sordoni et al., 2016): 72.6 / 73.3; — / —
EpiReader (Trischler et al., 2016): 73.4 / 74.0; — / —
Stanford AR (Chen et al., 2016): 73.8 / 73.6; 77.6 / 76.6
GA Reader (Dhingra et al., 2016): 73.0 / 73.8; 76.7 / 75.7
AoA Reader (Cui et al., 2016): 73.1 / 74.4; — / —
ReasoNet (Shen et al., 2016): 72.9 / 74.7; 77.6 / 76.6
BiDAF (Ours): 76.3 / 76.9; 80.3 / 79.6
MemNN* (Hill et al., 2016): 66.2 / 69.4; — / —
AS Reader* (Kadlec et al., 2016): 73.9 / 75.4; 78.7 / 77.7
Iterative Attention* (Sordoni et al., 2016): 74.5 / 75.7; — / —
GA Reader* (Dhingra et al., 2016): 76.4 / 77.4; 79.1 / 78.1
Stanford AR* (Chen et al., 2016): 77.2 / 77.6; 80.2 / 79.2
Table 3: Results on CNN/DailyMail datasets. We also include the results of previous ensemble methods (marked with *) for completeness.
"}, {"section_index": "6", "section_name": "6 CONCLUSION", "section_text": "In this paper, we introduce BiDAF, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization. The experimental evaluations show that our model achieves the state-of-the-art results on the Stanford Question Answering Dataset (SQuAD) and the CNN/DailyMail cloze test. The ablation analyses demonstrate the importance of each component in our model. The visualizations and discussions show that our model is learning a suitable representation for MC and is capable of answering complex questions by attending to correct locations in the given paragraph. Future work involves extending our approach to incorporate multiple hops of the attention layer."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "This research was supported by the NSF (IIS 1616112), NSF (III 1703166), Allen Institute for AI (66-9175), Allen Distinguished Investigator Award, Google Research Faculty Award, and Samsung GRO Award. We thank the anonymous reviewers for their helpful comments."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In ICCV, 2015.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the cnn/daily mail reading comprehension task. In ACL, 2016.
Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.
Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016.
Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. In EMNLP, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. In ICLR, 2016.
Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In ACL, 2016.
Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representation with max-pooling improves machine reading. In NAACL-HLT, 2016.
Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Dipanjan Das. Learning recurrent span representations for extractive question answering. arXiv preprint arXiv:1611.01436, 2016.
Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In NIPS, 2016.
Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A neural-based approach to answering questions about images. In ICCV, 2015.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In EMNLP, 2016.
Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, 2013.
Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR, 2015.
Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, 2016.
Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering.
arXiv preprint arXiv:1511.02274, 2015.
Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-end reading comprehension with dynamic answer chunk ranking. arXiv preprint arXiv:1610.09996, 2016.
Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
Yuke Zhu, Oliver Groth, Michael S. Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in images. In CVPR, 2016.
Shuohang Wang and Jing Jiang. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905, 2016.
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016a.
Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W Cohen, and Ruslan Salakhutdinov. Words or characters? fine-grained gating for reading comprehension. arXiv preprint arXiv:1611.01724, 2016.
"}, {"section_index": "9", "section_name": "A ERROR ANALYSIS", "section_text": "Table 4 summarizes the modes of errors by BiDAF and shows examples for each category of error in SQuAD.
Table 4: Error analysis on SQuAD. We randomly selected EM-incorrect answers and classified them into 6 different categories. Only relevant sentence(s) from the context are shown for brevity.
Imprecise answer boundaries (50%). Context: "The Free Movement of Workers Regulation articles 1 to 7 set out the main provisions on equal treatment of workers." Question: "Which articles of the Free Movement of Workers Regulation set out the primary provisions on equal treatment of workers?" Prediction: "1 to 7". Answer: "articles 1 to 7".
Syntactic complications and ambiguities (28%). Context: "A piece of paper was later found on which Luther had written his last statement." Question: "What was later discovered written by Luther?" Prediction: "A piece of paper". Answer: "his last statement".
Paraphrase problems (14%). Context: "Generally, education in Australia follows the three-tier model which includes primary education (primary schools), followed by secondary education (secondary schools/high schools) and tertiary education (universities and/or TAFE colleges)." Question: "What is the first model of education, in the Australian system?" Prediction: "three-tier". Answer: "primary education".
External knowledge (4%). Context: "On June 4, 2014, the NFL announced that the practice of branding Super Bowl games with Roman numerals, a practice established at Super Bowl V, would be temporarily suspended, and that the game would be named using Arabic numerals as Super Bowl 50 as opposed to Super Bowl L." Question: "If Roman numerals were used in the naming of the 50th Super Bowl, which one would have been used?" Prediction: "Super Bowl 50". Answer: "L".
Multi-sentence (2%). Context: "Over the next several years in addition to host to host interactive connections the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the network. All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s." Question: "What set the stage for Merits role in NSFNET" Prediction: "All of this set the stage for Merit 's role in the NSFNET project starting in the mid-1980s". Answer: "Ethernet attached hosts, and eventually TCP/IP and additional public universities in Michigan join the network".
Incorrect preprocessing (2%). Context: "English chemist John Mayow (1641-1679) refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus or just nitroaereus." Question: "John Mayow died in what year?" Prediction: "1641-1679". Answer: "1679".
[Table 5 rows - EM / F1 on the SQuAD dev set:]
Eqn. 1, dot product: 65.5 / 75.5
Eqn. 1, linear: 59.5 / 69.7
Eqn. 1, bilinear: 61.6 / 71.8
Eqn. 1, linear after MLP: 66.2 / 76.4
Eqn. 2, MLP after concat: 67.1 / 77.0
BiDAF (single): 68.0 / 77.3
Table 5: Variations of similarity function α (Equation 1) and fusion function β (Equation 2) and their performance on the dev data of SQuAD. See Appendix B for the details of each variation.
In this appendix section, we experimentally demonstrate how different choices of the similarity function α (Equation 1) and the fusion function β (Equation 2) impact the performance of our model. Each variation is defined as follows:
Eqn. 1, dot product. Dot product α is defined as α(h, u) = h⊤u, where ⊤ indicates matrix transpose. The dot product has been used for the measurement of similarity between two vectors by Hill et al. (2016).
Eqn. 1, linear. Linear α is defined as α(h, u) = w_lin⊤[h; u], where w_lin ∈ R^{4d} is a trainable weight vector. This can be considered as a simplification of Equation 1 obtained by dropping the term h ∘ u in the concatenation.
Eqn. 1, bilinear. Bilinear α is defined as α(h, u) = h⊤ W_bi u, where W_bi is a trainable weight matrix.
Eqn. 1, linear after MLP. We can also perform a linear mapping after a single layer of perceptron: α(h, u) = w_lin⊤ tanh(W_mlp[h; u] + b_mlp).
Eqn. 2, MLP after concat. The fusion function β can instead be defined as β(h, u, h̃) = max(0, W_mlp[h; u; h ∘ u; h ∘ h̃] + b_mlp), where W_mlp ∈ R^{2d×8d} and b_mlp ∈ R^{2d} are a trainable weight matrix and bias. This is equivalent to adding ReLU after linearly transforming the original definition of β. Since the output dimension of β changes, the input dimension of the first LSTM of the modeling layer will change as well.
The results of these variations on the dev data of SQuAD are shown in Table 5. It is important to note that there are non-trivial gaps between our definition of α and the other definitions employed by previous work. Adding an MLP in β does not seem to help, yielding a slightly worse result than β without the MLP.
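A minimal sketch of ours of the α variants compared in Table 5 (shapes are illustrative; the square shape chosen for W_mlp in the "linear after MLP" variant is an assumption, as the appendix does not state it):
```python
import torch

d = 100
h, u = torch.randn(2 * d), torch.randn(2 * d)
w6, w4, w_lin = torch.randn(6 * d), torch.randn(4 * d), torch.randn(4 * d)
W_bi = torch.randn(2 * d, 2 * d)
W_mlp, b_mlp = torch.randn(4 * d, 4 * d), torch.randn(4 * d)  # hypothetical shape

alpha_bidaf = w6 @ torch.cat([h, u, h * u])          # Eqn. 1 (ours)
alpha_dot = h @ u                                    # dot product
alpha_linear = w4 @ torch.cat([h, u])                # linear (drops h * u)
alpha_bilinear = h @ W_bi @ u                        # bilinear
alpha_mlp = w_lin @ torch.tanh(W_mlp @ torch.cat([h, u]) + b_mlp)  # linear after MLP
```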
"}]
Sywh5KYex | [{"section_index": "0", "section_name": "LEARNING IDENTITY MAPPINGS WITH RESIDUAL GATES", "section_text": "Pedro H. P. Savarese
Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
"}, {"section_index": "1", "section_name": "Leonardo O. Mazza", "section_text": "Leonardo O. Mazza
Poli, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
leonardomazza@poli.ufrj.br"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We propose a layer augmentation technique that adds shortcut connections with a linear gating mechanism, and can be applied to almost any network model. By using a scalar parameter to control each gate, we provide a way to learn identity mappings by optimizing only one parameter. We build upon the motivation behind Highway Neural Networks and Residual Networks, where a layer is reformulated in order to make learning identity mappings less problematic to the optimizer. The augmentation introduces only one extra parameter per layer, and provides easier optimization by making degeneration into identity mappings simpler. Experimental results show that augmenting layers provides better optimization, increased performance, and more layer independence. We evaluate our method on MNIST using fully-connected networks, showing empirical indications that our augmentation facilitates the optimization of deep models, and that it provides high tolerance to full layer removal: the model retains over 90% of its performance even after half of its layers have been randomly removed. In our experiments, augmented plain networks - which can be interpreted as simplified Highway Neural Networks - perform similarly to ResNets, raising new questions on how shortcut connections should be designed. We also evaluate our model on CIFAR-10 and CIFAR-100 using augmented Wide ResNets, achieving 3.65% and 18.27% test error, respectively."}, {"section_index": "3", "section_name": "1 INTRODUCTION", "section_text": "As the number of layers of neural networks increases, effectively training their parameters becomes a fundamental problem (Larochelle et al., 2009). Many obstacles challenge the training of neural networks, including vanishing/exploding gradients (Bengio et al., 1994), saturating activation functions (Xu et al., 2016) and poor weight initialization (Glorot & Bengio, 2010). Techniques such as unsupervised pre-training (Bengio et al., 2007), non-saturating activation functions (Nair & Hinton, 2010) and normalization (Ioffe & Szegedy, 2015) target these issues and enable the training of deeper networks. However, stacking more than a dozen layers still leads to a hard-to-train model.
Increasing the depth of networks significantly increases their representational capacity and consequently their performance, an observation supported by theory (Eldan & Shamir, 2015; Telgarsky, 2016; Bianchini & Scarselli, 2014; Montufar et al., 2014) and practice (He et al., 2015b; Szegedy et al., 2014).
Moreover, He et al. (2015b) showed that, by construction, one can increase a network's depth while preserving its performance. These two observations suggest that it suffices to stack more layers to a network in order to increase its performance. However, this behavior is not observed in practice even with recently proposed models, in part due to the challenge of training ever deeper networks.
In this work we aim to improve the training of deep networks by proposing a layer augmentation that builds on the idea of using shortcut connections, such as in Residual Networks and Highway Neural Networks. The key idea is to facilitate the learning of identity mappings by introducing a shortcut connection with a linear gating mechanism, as illustrated in Figure 1. Note that the shortcut connection is controlled by a gate that is parameterized with a scalar, k. This is a key difference from Highway Networks, where a tensor is used to regulate the shortcut connection, along with the incoming data. The idea of using a scalar is simple: it is easier to learn k = 0 than to learn W_g = 0 for a weight tensor W_g controlling the gate. Indeed, this single scalar allows for stronger supervision on lower layers, by making gradients flow more smoothly in the optimization.
[Figure 1 diagram: a layer with branch f(x, W) and shortcut x, combined as u = g(k) f(x, W) + (1 − g(k)) x.]
Figure 1: Gating mechanism applied to the shortcut connection of a layer. The key difference with Highway Networks is that only a scalar k is used to regulate the gates instead of a tensor.
Note that layers that degenerated into identity mappings have no impact on the signal propagating through the network, and thus can be removed without affecting performance. The removal of such layers can be seen as a transposed application of sparse encoding (Glorot et al., 2011): transposing the sparsity from neurons to layers provides a form to prune them entirely from the network.
We apply our proposed layer re-design to plain and residual layers, with the latter illustrated in Figure 2. Note that when augmenting a residual layer it becomes simply u = g(k) fr(x, W) + x, where fr denotes the layer's residual function. Thus, the shortcut connection allows the input to flow freely, without any interference of g(k), through the layer. In the next sections we will call augmented plain networks (illustrated in Figure 1) Gated Plain Networks, and augmented residual networks (illustrated in Figure 2) Gated Residual Networks, or GResNets. Again, note that in both cases learning identity mappings is much easier in comparison to the original models.
[Figure 2 diagram: a residual layer with branch fr(x, W) scaled by g(k) and an unmodified shortcut, i.e. u = g(k) fr(x, W) + x.]
Figure 2: Proposed network design applied to Residual Networks. Note that the joint network design results in a shortcut path where the input remains unchanged. In this case, g(k) can be interpreted as an amplifier or suppressor for the residual fr(x, W).
We evaluate the performance of the proposed design in two experiments. First, we evaluate fully-connected Gated PlainNets and Gated ResNets on MNIST and compare them with their non-augmented counterparts, showing superior performance and robustness to layer removal. Second, we apply our layer re-design to Wide ResNets (Zagoruyko & Komodakis, 2016) and test its performance on CIFAR, obtaining results that are superior to all previously published results (to the best of our knowledge). These findings indicate that learning identity mappings is a fundamental aspect of learning in deep networks, and designing models where this is easier seems highly effective.
Recall that a network's depth can always be increased without affecting its performance - it suffices to add layers that perform identity mappings. Consider a plain fully-connected ReLU network with layers defined as u = ReLU((x, W)). When adding a new layer, if we initialize W to the identity matrix I, we have
u = ReLU((x, I)) = ReLU(x) = x
The last step holds since x is an output of a previous ReLU layer, and ReLU(ReLU(x)) = ReLU(x). Thus, adding more layers should only improve performance. However, how can a network with more layers learn to yield performance superior to that of a network with fewer layers? A key observation is that if learning identity mappings is easy, then the network with more layers is more likely to yield superior performance, as it can more easily recover the performance of a smaller network through identity mappings.
[Figure 3 diagram: a network with m ReLU layers ReLU((x, W1)), ..., ReLU((x, Wm)), and the same network with an extra layer ReLU((x, I)) appended.]
Figure 3: A network can have layers added to it without losing performance. Initially, a network has m ReLU layers with parameters {W1, ..., Wm}. A new, (m+1)-th layer is added with Wm+1 = I. This new layer will perform an identity mapping, therefore the two models are equivalent.
The layer design of Highway Neural Networks and Residual Networks allows for deeper models to be trained due to their shortcut connections. Note that in ResNets the identity mapping is learned when W = 0 instead of W = I. Similarly, a Highway layer can degenerate into an identity mapping when the gating term T(x, WT) equals zero for all data points. Since learning identity mappings in Highway Neural Networks strongly depends on the choice of the transform function T (and is non-trivial when T is the sigmoid function, since T^{-1}(0) is not defined), we will focus our analysis on ResNets due to their simplicity. Considering a residual layer u = ReLU((x, W)) + x, we have:
u = ReLU((x, 0)) + x = ReLU(0) + x = x
Intuitively, residual layers can degenerate into identity mappings more effectively since learning an all-zero matrix is easier than learning the identity matrix. To support this argument, consider weight parameters randomly initialized with zero mean. Hence, the point W = 0 is located exactly in the center of the probability mass distribution used to initialize the weights.
Recent work (Zhang et al., 2016) suggests that the L2 norm of a critical point is an important factor regarding how easily the optimizer will reach it. More specifically, residual layers can be interpreted as a translation of the parameter set W = I to W = 0, which is more accessible in the optimization process due to its inferior L2 norm.
However, assuming that residual layers can trivially learn the parameter set W = 0 implies ignoring the randomness when initializing the weights. We demonstrate this by calculating the expected component-wise distance between W_i and the origin. Here, W_i denotes the weight tensor after initialization and prior to any optimization. Note that the distance between W_i and the origin captures the effort for a network to learn identity mappings. Since the weights have zero mean:
E[((W_i)_{jk} − 0)^2] = Var((W_i)_{jk})
Note that the distance is given by the distribution's variance, and there is no reason to assume it to be negligible. Additionally, the fact that Residual Networks still suffer from optimization issues caused by depth (Huang et al., 2016a) further supports this claim.
Some initialization schemes propose a variance in the order of O(1/n) (Glorot & Bengio, 2010; He et al., 2015a), however this represents the distance for each individual parameter in W. For tensors with O(n^2) parameters, the total distance - either absolute or Euclidean - between W_i and the origin will be in the order of O(n).
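The identity-mapping argument of this section can be checked numerically. Below is a small demonstration of ours (not from the paper) that a ReLU layer initialized at W = I leaves a previous ReLU output unchanged, while W = 0 does the same once a shortcut is added:
```python
import numpy as np

rng = np.random.default_rng(0)
x = np.maximum(rng.normal(size=(4, 5)), 0)   # output of a previous ReLU layer

relu = lambda z: np.maximum(z, 0)

# Plain layer: identity is learned at W = I.
print(np.allclose(relu(x @ np.eye(5)), x))               # True

# Residual layer: identity is learned at W = 0.
print(np.allclose(relu(x @ np.zeros((5, 5))) + x, x))    # True
```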
"}, {"section_index": "4", "section_name": "2.2 RESIDUAL GATES", "section_text": "As previously mentioned, the key contribution in this work is the proposal of a layer augmentation technique where learning a single scalar parameter suffices in order for the layer to degenerate into an identity mapping, thus making optimization easier for increased depths. As in Highway Networks, we propose the addition of gated shortcut connections. Our gates, however, are parameterized by a single scalar value, being easier to analyze and learn. For layers augmented with our technique, the effort required to learn identity mappings does not depend on any parameter, such as the layer width, in sharp contrast to prior models.
Our design is as follows: a layer u = f(x, W) becomes u = g(k) f(x, W) + (1 − g(k)) x, where k is a scalar parameter. This design is illustrated in Figure 1. Note that such a layer can quickly degenerate by setting g(k) to 0. Using the ReLU activation function as g, it suffices that k < 0 for g(k) = 0.
By adding an extra parameter, the dimensionality of the cost surface also grows by one. This new dimension, however, can be easily understood due to the specific nature of the layer reformulation. The original surface is maintained on the k = 1 slice, since the gated model becomes equivalent to the original one. On the k = 0 slice we have an identity mapping, and the associated cost for all points in such a slice is the same cost associated with the point {k = 1, W = I}: this follows since both parameter configurations correspond to identity mappings, therefore being equivalent. Lastly, due to the linear nature of g(k) and consequently of the gates, all other slices k ≠ 0, k ≠ 1 will be a linear combination between the slices k = 0 and k = 1.
In addition to augmenting plain layers, we also apply our technique to residual layers. Although it might sound counterintuitive to add residual gates to a residual layer, we can see in Figure 2 that our augmentation provides ResNets with means to regulate the residuals; therefore a linear gating mechanism might not only allow deeper models, but could also improve performance. Having the original design of a residual layer as:
u = f(x, W) = fr(x, W) + x
where fr(x, W) is the layer's residual function - in our case, BN-ReLU-Conv-BN-ReLU-Conv - our approach changes this layer by adding a linear gate, yielding:
u = g(k) f(x, W) + (1 − g(k)) x = g(k) (fr(x, W) + x) + (1 − g(k)) x = g(k) fr(x, W) + x
The resulting layer maintains the shortcut connection unaltered, which according to He et al. (2016) is a desired property when designing residual blocks. As (1 − g(k)) vanishes from the formulation, g(k) stops acting as a dual gating mechanism and can be interpreted as a flow regulator. Note that this model introduces a single scalar parameter per layer block. This new dimension can be interpreted as discussed above, except that the slice k = 0 is equivalent to {k = 1, W = 0}, since an identity mapping is learned when W = 0 in ResNets.
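A minimal PyTorch sketch of the gated residual layer above (ours, not the authors' Keras/Torch code; the residual function here is a toy fully-connected stand-in for the BN-ReLU-Conv blocks):
```python
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """u = g(k) * f_r(x, W) + x, with a single scalar gate parameter k."""

    def __init__(self, width):
        super().__init__()
        self.residual = nn.Sequential(       # toy f_r; the paper uses BN-ReLU-Conv x2
            nn.Linear(width, width), nn.ReLU(), nn.Linear(width, width)
        )
        self.k = nn.Parameter(torch.ones(1)) # k = 1: block starts as the original layer

    def forward(self, x):
        g = torch.relu(self.k)               # g(k) = ReLU(k); k <= 0 closes the gate
        return g * self.residual(x) + x

block = GatedResidualBlock(50)
x = torch.randn(8, 50)
with torch.no_grad():
    block.k.fill_(-1.0)                      # gate closed -> identity mapping
print(torch.allclose(block(x), x))           # True
```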
Only the d middle layers differ between the four architectures - the first linear layer and the softmax layer are the same in all experiments.

For plain networks, each layer performs a dot product, followed by Batch Normalization and a ReLU activation function. Initial tests with pre-activations (He et al. (2016)) resulted in poor performance on the validation set; therefore we opted for the traditional Dot-BN-ReLU layer when designing Residual Networks. Each residual block consists of two layers, as is conventional.

For preprocessing, we divided each pixel value by 255, normalizing their values to [0, 1].

All models were implemented on Keras (Chollet (2015)) or on Torch (Collobert et al. (2011)), and were executed on a GeForce GTX 1070. Larger models or more complex datasets, such as ImageNet (Russakovsky et al. (2015)), were not explored due to hardware limitations.

All networks were trained using Adam (Kingma & Ba (2014)) with Nesterov momentum (Dozat) for a total of 100 epochs using mini-batches of size 128. No learning rate decay was used: we kept the learning rate and momentum fixed to 0.002 and 0.9 during the whole training.

The training curves for plain networks, Gated PlainNets, ResNets and Gated ResNets with varying depth are shown in Figure 4. The distance between the curves increases with the depth, showing that the augmentation helps the training of deeper models.

[Figure: five log-scale panels of train loss vs. epoch (0-100), one per d in {2, 10, 20, 50, 100}; legend: Plain, Residual, Gated Residual, Gated Plain.]

Figure 4: Train loss for plain and residual networks, along with their augmented counterparts, with d = {2, 10, 20, 50, 100}. As the models get deeper, the error reduction due to the augmentation increases.

Table 1 shows the test error for each depth and architecture. Augmented models perform better in all settings when compared to the original ones, and the performance boost is more noticeable at increased depths. Interestingly, Gated PlainNets performed better than ResNets, suggesting that the reason for Highway Neural Networks to underperform ResNets might be an overly complex gating mechanism.

Depth = d + 2 | Plain | ResNet | Gated PlainNet | Gated ResNet
d = 2         | 2.29  | 2.20   | 2.04           | 2.17
d = 10        | 2.22  | 1.64   | 1.78           | 1.60
d = 20        | 2.21  | 1.61   | 1.59           | 1.57
d = 50        | 60.37 | 1.62   | 1.36           | 1.48
d = 100       | 90.20 | 1.50   | 1.29           | 1.26

Table 1: Test error (%) on the MNIST dataset for fully-connected networks. Augmented models outperform their original counterparts in all experiments.
Non-augmented plain networks perform worse and fail to converge for d = 50 and d = 100.

Depth = d + 2 | Gated PlainNet | Gated ResNet
d = 2         | 10.57          | 5.58
d = 10        | 1.19           | 2.54
d = 20        | 0.64           | 1.73
d = 50        | 0.46           | 1.04
d = 100       | 0.41           | 0.67

Table 2: Mean k for increasingly deep Gated PlainNets and Gated ResNets.

As observed in Table 2, the mean values of k decrease as the model gets deeper, showing that shortcut connections have less impact on shallow networks. This agrees with empirical results that ResNets perform better than classical plain networks as the depth increases. Note that the significant difference between mean values of k in Gated PlainNets and Gated ResNets has an intuitive explanation: in order to suppress the residual signal against the shortcut connection, Gated PlainNets require that k < 0.5 (otherwise the residual signal will be enhanced). Conversely, Gated ResNets suppress the residual signal when k < 1.0, and enhance it otherwise.

We also analyzed how layer removal affects ResNets and Gated ResNets. We compared how the deepest networks (d = 100) behave as residual blocks composed of 2 layers are completely removed from the models. The final value of each k parameter, according to its corresponding residual block, is shown in Figure 5. We can observe that layers close to the middle of the network have a smaller k than those at the beginning or the end. Therefore, the middle layers have less importance due to being closer to identity mappings.

Results are shown in Figure 5. For Gated Residual Networks, we prune pairs of layers following two strategies. One consists of pruning layers in a greedy fashion, where blocks with the smallest k are removed first; in the other we remove blocks randomly. We present results using both strategies for Gated ResNets, and only random pruning for ResNets, since they lack the k parameter.

[Figure: left panel, final k value (about 0.4 to 1.4) per residual block index (1-50); right panel, test accuracy (%) vs. number of removed units (0-50) for GResNet - Greedy, GResNet - Random, and ResNet - Random.]

Figure 5: Left: Values of k in ascending order of residual blocks. The first block, consisting of the first two layers of the network, has index 1, while the last block - right before the softmax layer - has index 50. Right: Test accuracy (%) according to the number of removed layers. Gated Residual Networks are more robust to layer removal, and maintain decent results even after half of the layers have been removed.

The greedy strategy is slightly better for Gated Residual Networks, showing that the k parameter is indeed a good indicator of a layer's importance for the model, but that layers tend to assume the same level of significance. In a fair comparison, where both models are pruned randomly, Gated ResNets retain a satisfactory performance even after half of their layers have been removed, while ResNets suffer a performance decrease after just a few layers.

Therefore augmented models are not only more robust to layer removal, but can have a fair share of their layers pruned and still perform well. Faster predictions can be generated by using a pruned version of the original model.
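The greedy strategy above only needs the learned gate values. A small, self-contained sketch (plain Python; the toy gate values stand in for a trained model's k parameters):

```python
def greedy_prune_order(k_values):
    """Indices of residual blocks sorted by gate value g(k) = max(k, 0),
    smallest first: blocks closest to identity mappings are removed first."""
    return sorted(range(len(k_values)), key=lambda i: max(k_values[i], 0.0))

# Toy k values for a 6-block network; middle blocks sit closest to identity.
k = [1.1, 0.6, 0.4, 0.5, 0.7, 1.3]
print(greedy_prune_order(k))  # [2, 3, 1, 4, 0, 5]
```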
The CIFAR datasets (Krizhevsky (2009)) consist of 60,000 color images of 32 x 32 pixels each. CIFAR-10 has a total of 10 classes, including pictures of cats, birds and airplanes. The CIFAR-100 dataset is composed of the same number of images, however with a total of 100 classes.

Residual Networks have surpassed state-of-the-art results on CIFAR. We test Gated ResNets and Wide Gated ResNets (Zagoruyko & Komodakis (2016)) and compare them with their original, non-augmented models.

For pre-activation ResNets, as described in He et al. (2016), we follow the original implementation details. We set an initial learning rate of 0.1, and decrease it by a factor of 10 after 50% and 75% of the epochs. SGD with Nesterov momentum of 0.9 is used for optimization, and the only preprocessing consists of mean subtraction. Weight decay of 0.0001 is used for regularization, and Batch Normalization's momentum is set to 0.9.

We follow the implementation from Zagoruyko & Komodakis (2016) for Wide ResNets. The learning rate is initialized as 0.1, and decreases by a factor of 5 after 30%, 60% and 80% of the epochs. Images are mean/std normalized, and a weight decay of 0.0005 is used for regularization. We also apply 0.3 dropout (Srivastava et al. (2014)) between convolutions, whenever specified. All other details are the same as for ResNets.

For both architectures we use moderate data augmentation: images are padded with 4 pixels, and we take random crops of size 32 x 32 during training. Additionally, each image is horizontally flipped with 50% probability. We use batch size 128 for all experiments.

Table 3: Test error (%) on the CIFAR-10 dataset, for ResNets, Wide ResNets and their augmented counterparts. "k decay" is when weight decay is also applied to the k parameters in an augmented network. Results for the original models are as reported in He et al. (2015b) and Zagoruyko & Komodakis (2016).

Table 3 shows the test error for two architectures: a ResNet with n = 5, and a Wide ResNet (4,10). Augmenting each model adds 15 and 12 parameters, respectively. We observe that k decay hurts performance in both cases, indicating that the k parameters should either remain unregularized or suffer a more subtle regularization compared to the weight parameters. Due to its direct connection to layer degeneration, regularizing k amounts to enforcing identity mappings, which might harm the model.

As in the previous experiment, in Figure 6 we present the final k values for each block, in this case for the Wide ResNet (4,10) on CIFAR-10. We can observe that the k values follow an intriguing pattern: the lowest values are for the blocks of index 1, 5 and 9, which are exactly the ones that increase the feature map dimension. This indicates that, in such residual blocks, the convolution performed in the shortcut connection to increase dimension is more important than the residual block itself. Additionally, the peak value for the last residual block suggests that its shortcut connection is of little importance, and could as well be fully removed without greatly impacting the model.

Figure 7 shows the loss curves for Gated Wide ResNet (4,10) + Dropout, both on CIFAR-10 and CIFAR-100. The optimization behaves similarly to the original model, suggesting that the gates do not have any side effects on the network. The performance gains presented in Table 4 indicate that, however predictable and extremely simple, our augmentation technique is powerful enough to aid the optimization of state-of-the-art models.

For all gated networks, we initialize k with a constant value of 1. One crucial question is whether weight decay should be applied to the k parameters.
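One way to run this comparison is to place the k scalars in their own optimizer parameter group, so weight decay can be switched on or off for them independently. The following is a hypothetical PyTorch sketch, assuming (as in the modules sketched earlier) that the gate parameters are registered under the name k:

```python
import torch

def make_sgd(model, lr=0.1, weight_decay=5e-4, decay_k=False):
    """SGD with Nesterov momentum and two parameter groups:
    the gate scalars k, and all remaining weights."""
    gates   = [p for n, p in model.named_parameters() if n.endswith('.k')]
    weights = [p for n, p in model.named_parameters() if not n.endswith('.k')]
    return torch.optim.SGD(
        [{'params': weights, 'weight_decay': weight_decay},
         {'params': gates, 'weight_decay': weight_decay if decay_k else 0.0}],
        lr=lr, momentum=0.9, nesterov=True)
```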
We call this "k decay", and also compare Gated ResNets and Wide Gated ResNets when it is applied with the same magnitude as the weight decay: 0.0001 for Gated ResNet and 0.0005 for Wide Gated ResNet.

Due to the indications that regularizing the k parameter has a negative impact on the model's performance, we proceed to test other models - with different depths and widening factors - with the goal of evaluating the effectiveness of our proposed augmentation. Tables 4 and 5 show that augmented Wide ResNets outperform the original models without changing any hyperparameter, both on CIFAR-10 and CIFAR-100.

Table 4: Test error (%) on the CIFAR-10 dataset, for Wide ResNets and their augmented counterparts. Results for non-gated Wide ResNets are from Zagoruyko & Komodakis (2016).

Model                        | Original | Gated
Wide ResNet (2,4)            | 24.03    | 23.29
Wide ResNet (4,10)           | 19.25    | 18.89
Wide ResNet (4,10) + Dropout | 18.85    | 18.27
Wide ResNet (8,1)            | 29.89    | 28.20

[Figure: final k value (0 to 6) per residual block index (1-12).]

Figure 6: Values of k in ascending order of residual blocks. The first block, consisting of the first two layers of the network, has index 1, while the last block - right before the softmax layer - has index 12.

[Figure: error (%) vs. epoch (0-200) on CIFAR-10 (left) and CIFAR-100 (right).]

Figure 7: Training and test curves for the Wide ResNet (4,10) with 0.3 dropout, showing error (%) on training and test sets. Dashed lines represent training error, whereas solid lines represent test error.

Results of different models on the CIFAR datasets are shown in Table 6. The training and test errors are presented in Figure 7. To the authors' knowledge, these are the best results on CIFAR-10 and CIFAR-100 with moderate data augmentation - only random flips and translations.

Method                                              | Params | C10+ | C100+
Network in Network (Lin et al. (2013))              | -      | 8.81 | -
FitNet (Romero et al. (2014))                       | -      | 8.39 | 35.04
Highway Neural Network (Srivastava et al. (2015))   | 2.3M   | 7.76 | 32.39
All-CNN (Springenberg et al. (2014))                | -      | 7.25 | 33.71
ResNet-110 (He et al. (2015b))                      | 1.7M   | 6.61 | -
ResNet in ResNet (Targ et al. (2016))               | 1.7M   | 5.01 | 22.90
Stochastic Depth (Huang et al. (2016a))             | 10.2M  | 4.91 | -
ResNet-1001 (He et al. (2016))                      | 10.2M  | 4.62 | 22.71
FractalNet (Larsson et al. (2016))                  | 38.6M  | 4.60 | 23.73
Wide ResNet (4,10) (Zagoruyko & Komodakis (2016))   | 36.5M  | 3.89 | 18.85
DenseNet (Huang et al. (2016b))                     | 27.2M  | 3.74 | 19.25
Wide GatedResNet (4,10) + Dropout                   | 36.5M  | 3.65 | 18.27

Table 6: Test error (%) on the CIFAR-10 and CIFAR-100 datasets. All results are with standard data augmentation (crops and flips).

Greff et al. (2016) showed how Residual and Highway layers can be interpreted as performing iterative refinements on learned representations. In this view, there is a connection between a layer's learned parameters and the level of refinement applied to its input: for Highway Neural Networks, T(x) having components close to 1 results in a layer that generates completely new representations. As seen before, components close to 0 result in an identity mapping, meaning that the representations are not refined at all.

However, the dependency of T(x) on the incoming data makes it difficult to analyze the level of refinement performed by a layer given its parameters. This is more clearly observed once we consider how each component of T(x) is a function not only of the parameter set WT, but also of x.

In particular, given the mapping performed by a layer, we can estimate how much more abstract its representations are compared to the inputs.
For our technique, this estimation can be done by observing the k parameter of the corresponding layer: in Gated PlainNets, k = 0 corresponds to an identity mapping, and therefore there is no modification of the learned representations. For k = 1, the shortcut connection is ignored and therefore a jump in the representation's complexity is observed.

For Gated ResNets, the shortcut connection is never completely ignored in the generation of the output. However, we can see that as k grows to infinity the shortcut connection's contribution goes to zero, and the learned representation becomes more abstract compared to the layer's inputs.

Figure 6 shows how the layers that change the data dimensionality learn more abstract representations compared to dimensionality-preserving layers, which agrees with Greff et al. (2016). The last layer's k value, which is the biggest among the whole model, indicates a severe jump in the abstraction of its representation, and is intuitive once we see the model as being composed of two main stages: a convolutional one and a fully-connected one, specific to classification.

Finally, Table 2 shows that the abstraction jumps decrease as the model grows deeper and the performance increases. This agrees with the idea that depth allows for more refined representations to be learned. We believe that an extensive analysis of the rate at which these measures - depth, abstraction jumps and performance - interact with each other could bring further understanding of the practical benefits of depth in networks."}, {"section_index": "5", "section_name": "4 CONCLUSION", "section_text": "We have proposed a novel layer augmentation technique that facilitates the optimization of deep networks by making identity mappings easy to learn. Unlike previous models, layers augmented by our technique require optimizing only one parameter to degenerate into identity, and by designing our method such that randomly initialized parameter sets are always close to identity mappings, our design suffers fewer optimization issues caused by depth.

Our experiments showed that augmenting plain and residual layers improves performance and facilitates learning in settings with increased depth. On the MNIST dataset, augmented plain networks outperformed ResNets, suggesting that models with gated shortcut connections - such as Highway Neural Networks - could be further improved by redesigning the gates.

We have shown that applying our technique to ResNets yields a model that can regulate the residuals. This model performed better in all our experiments with negligible extra training time and parameters. Lastly, we have shown how it can be used for layer pruning, effectively removing large numbers of parameters from a network without necessarily harming its performance.

Francois Chollet. Keras. https://github.com/fchollet/keras, 2015.

Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.

Timothy Dozat. Incorporating Nesterov momentum into Adam.

Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 1994.
Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2007.

Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. ArXiv e-prints, June 2012.

Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Transactions on Neural Networks and Learning Systems, 25(8):1553-1565, 2014. doi: 10.1109/TNNLS.2013.2293637.

Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

M. Lin, Q. Chen, and S. Yan. Network in network. ArXiv e-prints, December 2013.

Rupesh Kumar Srivastava, Klaus Greff, and Jurgen Schmidhuber. Training very deep networks. CoRR, abs/1507.06228, 2015. URL http://arxiv.org/abs/1507.06228.

M. Telgarsky. Benefits of depth in neural networks. ArXiv e-prints, February 2016.

B. Xu, R. Huang, and M. Li. Revise saturated activation functions. ArXiv e-prints, February 2016.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pp. 2278-2324, 1998.

A. Romero, N. Ballas, S. Ebrahimi Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: Hints for thin deep nets. ArXiv e-prints, December 2014.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. ArXiv e-prints, December 2014.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014."}]
r1te3Fqel | [{"section_index": "0", "section_name": "END-TO-END ANSWER CHUNK EXTRACTION AND RANKING FOR READING COMPREHENSION", "section_text": "Mo Yu, Bing Xiang, Yang Yu*, Wei Zhang*, Bowen Zhou, Kazi Hasan

{yu, zhangwei, zhou, kshasan, yum, bingxia}@us.ibm.com IBM Watson, Yorktown Heights, NY, USA"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module to propose the top-ranked chunk as the answer. Experimental results show that DCR achieves a 66.3% exact match and 74.7% F1 score on the Stanford Question Answering Dataset (Rajpurkar et al., 2016)."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Reading comprehension-based question answering (RCQA) is the task of answering a question with a chunk of text taken from related document(s). A variety of neural models have been proposed recently, either for extracting a single entity or a single token as an answer from a given text (Hermann et al., 2015; Kadlec et al., 2016; Trischler et al., 2016b; Dhingra et al., 2016; Chen et al., 2016; Sordoni et al., 2016; Cui et al., 2016a), or for selecting the correct answer by ranking a small set of human-provided candidates (Yin et al., 2016; Trischler et al., 2016a). In both cases, an answer boundary is either easy to determine or already given.

Different from the above two assumptions for RCQA, in the real-world QA scenario people may ask questions about both entities (factoid) and non-entities such as explanations and reasons (non-factoid) (see Table 1 for examples).

In this regard, RCQA has the potential to complement other QA approaches that leverage structured data (e.g., knowledge bases) for both of the above question types. This is because RCQA can exploit textual evidence to ensure increased answer coverage, which is particularly helpful for non-factoid answers. However, it is also challenging for RCQA to identify answers at arbitrary positions in the passage with arbitrary lengths, especially for non-factoid answers which might be clauses or sentences.

Compared to the relatively easier RC task of predicting single tokens/entities, predicting answers of arbitrary lengths and positions significantly increases the search space complexity: the number of possible candidates to consider is in the order of O(n^2), where n is the number of passage words. In contrast, for previous works in which answers are single tokens/entities or come from candidate lists, the complexity is in O(n) or the size of the candidate list l (usually l <= 5), respectively. To address the above complexity, Rajpurkar et al. (Rajpurkar et al., 2016) used a two-step chunk-and-rank approach that employs a rule-based algorithm to extract answer candidates from a passage,

*Both authors contribute equally

Table 1: Example of questions (with answers) which can be potentially answered with RC on a Wikipedia passage. The first question is factoid, asking for an entity.
The second and third are non-factoid.

The United Kingdom (UK) intends to withdraw from the European Union (EU), a process commonly known as Brexit, as a result of a June 2016 referendum in which 51.9% voted to leave the EU. The separation process is complex, causing political and economic changes for the UK and other countries. As of September 2016, neither the timetable nor the terms for withdrawal have been established: in the meantime, the UK remains a full member of the European Union. The term "Brexit" is a portmanteau of the words "British" and "exit".
Q1. Which country withdrew from EU in 2016? A1. United Kingdom
Q2. How did UK decide to leave the European Union? A2. as a result of a June 2016 referendum in which 51.9% voted to leave the EU
Q3. What has not been finalized for Brexit as of September 2016? A3. neither the timetable nor the terms for withdrawal

followed by a ranking approach with hand-crafted features to select the best answer. The rule-based chunking approach suffered from low coverage (~70% recall of answer chunks) that cannot be improved during training, and candidate ranking performance depends greatly on the quality of the hand-crafted features. More recently, Wang and Jiang (Wang & Jiang, 2016) proposed two end-to-end neural network models, one of which chunks a candidate answer by predicting the answer's two boundary indices, while the other classifies each passage word as answer/not-answer. Both models improved significantly over the method proposed by Rajpurkar et al. (Rajpurkar et al., 2016).

Our proposed model, called dynamic chunk reader (DCR), not only significantly differs from both of the above systems in the way that answer candidates are generated and ranked, but also shares merits with both works. First, our model uses deep networks to learn better representations for candidate answer chunks, instead of using fixed feature representations as in (Rajpurkar et al., 2016). Second, it represents answer candidates as chunks, as in (Rajpurkar et al., 2016), instead of word-level representations (Wang & Jiang, 2016), to make the model aware of the subtle differences among candidates (importantly, overlapping candidates).

The contributions of this paper are three-fold. (1) We propose a novel neural network model for joint candidate answer chunking and ranking, where the candidate answer chunks are dynamically constructed and ranked in an end-to-end manner. (2) We propose a new question-attention mechanism to enhance passage word representations, which are subsequently used to construct chunk representations. (3) We also propose several simple but effective features to strengthen the attention mechanism, which fundamentally improves candidate ranking, with the by-product of higher exact-boundary match accuracy.

The experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), which contains a variety of human-generated factoid and non-factoid questions, have shown the effectiveness of the above three contributions.

Our paper is organized as follows. We first formally define the RC problem, then describe our baseline with a neural network component, and then present the end-to-end dynamic chunk reader model. Finally, we analyze our experimental results and discuss the related work.
In the appendix we show formal equations and details of the model.

Table 1 shows an example of our RC setting where the goal is to answer a question Q_i, factoid (Q1) or non-factoid (Q2 and Q3), based on a supporting passage P_i, by selecting a continuous sequence of text A_i contained in P_i as the answer. Q_i, P_i, and A_i are all word sequences, where each word is drawn from a vocabulary V. The i-th instance in the training set is a triple of the form (P_i, Q_i, A_i), where P_i = (p_{i,1}, ..., p_{i,|P_i|}), Q_i = (q_{i,1}, ..., q_{i,|Q_i|}), and A_i = (a_{i,1}, ..., a_{i,|A_i|}), with p_{i,.}, q_{i,.}, a_{i,.} in V. Owing to the disagreement among annotators, there could be more than one correct answer for the same question. A candidate answer chunk c^{m,n}_i for the i-th training example is defined as a sub-sequence of P_i that spans from position m to n (1 <= m <= n <= |P_i|). The ground truth answer A_i could be included in the set of all candidates

C_i = { c^{m,n}_i | for all m, n in N+, subj(m, n, P_i) holds and 1 <= m <= n <= |P_i| }

where subj(m, n, P_i) is a constraint put on the candidate chunks (for example, that their POS pattern has been observed for answers in the training data; see below). During evaluation, the predicted answer for a question is matched against the corresponding gold standard answer(s).

Remark: Categories of RC Tasks Other, simpler variants of the aforementioned RC task were explored in the past. For example, quiz-style datasets (e.g., MCTest (Richardson et al., 2013), MovieQA (Tapaswi et al., 2015)) have multiple-choice questions with answer options. Cloze-style datasets (Hermann et al., 2015; Hill et al., 2015; Onishi et al., 2016), usually automatically generated, have factoid "questions" created by replacing the answer in a sentence from the text with a blank. For the answer selection task this paper focuses on, several datasets exist, e.g. TREC-QA for factoid answer extraction from multiple given passages, bAbI (Weston et al., 2014) designed for inference purposes, and the SQuAD dataset (Rajpurkar et al., 2016) used in this paper. To the best of our knowledge, the SQuAD dataset is the only one for both factoid and non-factoid answer extraction with a question distribution close to real-world applications.

In this section we modified a state-of-the-art RC system for cloze-style tasks for our answer extraction purpose, to see how large the gap between the two types of tasks is, and to inspire our end-to-end system in the next section. In order to make the cloze-style RC system make chunk-level decisions, we use the RC model to generate features for chunks, which are further used in a feature-based ranker as in (Rajpurkar et al., 2016). As a result, this baseline can be viewed as a deep-learning-based counterpart of the system in (Rajpurkar et al., 2016). It has two main components: 1) a stand-alone answer chunker, which is trained to produce overlapping candidate chunks, and 2) a neural RC model, which is used to score each word in a given passage, to be used thereafter for generating chunk scores.

Answer Chunking To reduce the errors generated by the rule-based chunker in (Rajpurkar et al., 2016), first, we capture the part-of-speech (POS) patterns of all answer sub-sequences in the training dataset to form a POS-pattern trie tree, and then apply the answer POS patterns to passage P_i to acquire the collection of all subsequences (chunk candidates) C_i whose POS patterns can be matched to the POS-pattern trie. This is equivalent to putting a constraint subj(m, n, P_i) on the candidate answer chunk generation process that only chooses chunks with a POS pattern seen for answers in the training data. The sub-sequences C_i are then used as answer candidates for P_i. Note that overlapping chunks can be generated for a passage, and we rely on the ranker to choose the best candidate based on features from the cloze-style RC system. Experiments showed that for >90% of the questions on the development set, the ground truth answer is included in the candidate set constructed in this manner.
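A compact sketch of this trie-based chunker (plain Python; the POS tag names in the toy example are illustrative only):

```python
def build_pos_trie(answer_pos_patterns):
    """Trie over the POS-tag sequences of training-set answers."""
    trie = {}
    for pattern in answer_pos_patterns:
        node = trie
        for tag in pattern:
            node = node.setdefault(tag, {})
        node['$'] = True  # end-of-pattern marker
    return trie

def candidate_chunks(passage_pos, trie):
    """All (m, n) spans whose POS sequence matches a stored answer pattern."""
    spans = []
    for m in range(len(passage_pos)):
        node = trie
        for n in range(m, len(passage_pos)):
            node = node.get(passage_pos[n])
            if node is None:
                break
            if '$' in node:       # a full answer pattern ends here
                spans.append((m, n))
    return spans

trie = build_pos_trie([('NNP',), ('NNP', 'NNP'), ('CD',)])
print(candidate_chunks(['IN', 'NNP', 'NNP', 'CD'], trie))
# [(1, 1), (1, 2), (2, 2), (3, 3)]  -- note the overlapping candidates
```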
Feature Extraction and Ranking For chunk ranking, we (1) use a neural RCQA model to annotate each word in the passage with a score, and (2) for every candidate chunk, extract features from the sequence of scores (s_{i,m}, ..., s_{i,n}) to characterize its scale and distribution information, which serves as the chunk's feature representation. For step (1) we use a word-level, single-layer Gated Attention Reader^2 (Dhingra et al., 2016), which has state-of-the-art performance on cloze-style tasks. For step (2) we use 4 statistics of (s_{i,m}, ..., s_{i,n}) - maximum, minimum, average and sum - as well as the count of matched POS patterns within the chunk, which serves as an answer prior. We use these 5 features in a state-of-the-art ranker (Ganjisaffar et al., 2011).

^2We tried using more than one layer in the Gated Attention Reader, but no improvement was observed.

"}, {"section_index": "3", "section_name": "DYNAMIC CHUNK READER", "section_text": "The dynamic chunk reader (DCR) model is presented in Figure 1. Inspired by the baseline we built, DCR is deemed to be superior to the baseline for three reasons. First, each chunk has a representation constructed dynamically, instead of having a set of pre-defined feature values. Second, each passage word's representation is enhanced by word-by-word attention that evaluates the relevance of the passage word to the question. Third, these components are all within a single, end-to-end model that can be trained in a joint manner.

[Figure: the DCR architecture on the example question "Who won Super Bowl 50?" with answer "Denver Broncos": word inputs and embeddings, bi-GRU encoder layer for passage and question, attention layer producing the passage-question joint representation, a second bi-GRU, convolution layer, dynamic chunk representation layer, and a linear ranker layer with a softmax.]

Figure 1: The main components in the dynamic chunk reader model (from bottom to top) are bi-GRU encoders for passage and question, a word-by-word attention bi-GRU for the passage, dynamic chunk representations that are transformed from pooled dynamic chunks of hidden states, the question attention on every chunk representation, and final answer chunk prediction.

DCR works in five steps. First, the encoder layer encodes passage and question separately, using bidirectional recurrent neural networks (RNNs).

Second, the attention layer calculates the relevance of each passage word to the question.

Third, the convolution layer generates unigram, bigram and trigram representations for each word. The bigram and trigram of a word end with that same word, and proper padding is applied at the first words to make sure the output of the CNN layer has the same length as its input.

Fourth, the chunk representation layer dynamically extracts the candidate chunks from the given passage, and creates a chunk representation that encodes the contextual information of each chunk.

Fifth, the ranker layer scores the relevance between the representation of a chunk and the given question, and ranks all candidate chunks using a softmax layer.

We describe each step below.
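Before detailing each layer, the five steps can be tied together in a runnable end-to-end sketch. This is our own PyTorch approximation, not the authors' code: the dimensions are toy-sized (the paper uses d = 300), the module and function names are ours, and the zero left-padding for n-grams is one concrete reading of the padding described above.

```python
import torch
import torch.nn as nn

d, n_in = 8, 16                 # toy sizes; the paper uses d = 300

encoder = nn.GRU(n_in, d, bidirectional=True)        # step 1 (shared encoder)
attn_gru = nn.GRU(4 * d, d, bidirectional=True)      # second bi-GRU (step 2)
conv = nn.ModuleList([nn.Linear(l * 2 * d, 2 * d, bias=False)
                      for l in (1, 2, 3)])           # step 3
combine = nn.Linear(3, 1, bias=False)                # step 5 (W in R^3)

def ngram(gamma, l):
    """l-gram representation ending at each position, with zero left-padding."""
    padded = torch.cat([torch.zeros(l - 1, gamma.size(1)), gamma])
    windows = torch.cat([padded[i:i + len(gamma)] for i in range(l)], dim=1)
    return conv[l - 1](windows)

def score_chunks(passage, question, spans):
    h_p, _ = encoder(passage.unsqueeze(1)); h_p = h_p.squeeze(1)
    h_q, _ = encoder(question.unsqueeze(1)); h_q = h_q.squeeze(1)
    alpha = h_p @ h_q.t()                              # step 2: inner products
    v = torch.cat([h_p, alpha @ h_q], dim=1)           # [h_p; beta]
    gamma, _ = attn_gru(v.unsqueeze(1)); gamma = gamma.squeeze(1)
    grams = [ngram(gamma, l) for l in (1, 2, 3)]
    q_repr = torch.cat([h_q[-1, :d], h_q[0, d:]])      # [fwd last; bwd first]
    scores = []
    for m, n in spans:                                 # steps 4-5
        reps = [torch.cat([g[m, :d], g[n, d:]]) for g in grams]
        scores.append(combine(torch.stack([r @ q_repr for r in reps])))
    return torch.softmax(torch.cat(scores), dim=0)     # ranking over spans

probs = score_chunks(torch.randn(30, n_in), torch.randn(6, n_in),
                     spans=[(4, 6), (10, 10), (12, 15)])
```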
Encoder Layer We use a bi-directional RNN encoder to encode P_i and Q_i of example i, and get a hidden state for each word position p_{ij} and q_{ik}.^3 As RNN input, a word is represented by a row vector x in R^n; x can be the concatenation of a word embedding and word features (see Fig. 1). The word vector for the t-th word is x_t. A word sequence is processed using an RNN encoder with gated recurrent units (GRU) (Cho et al., 2014), which have proved effective in RC and neural machine translation tasks (Bahdanau et al., 2015; Kadlec et al., 2016; Dhingra et al., 2016). For each position t, the GRU computes h_t with input x_t and previous state h_{t-1} as:

r_t = sigma(W_r x_t + U_r h_{t-1})
u_t = sigma(W_u x_t + U_u h_{t-1})
h~_t = tanh(W x_t + U (r_t (.) h_{t-1}))
h_t = (1 - u_t) (.) h_{t-1} + u_t (.) h~_t

where h_t, r_t, and u_t in R^d are the d-dimensional hidden state, reset gate, and update gate, respectively; sigma(.) is the sigmoid function, and (.) denotes element-wise product. For a word at t, we use the hidden state ->h_t from the forward RNN as a representation of the preceding context, and <-h_t from a backward RNN that encodes the text in reverse, to incorporate the context after t. Next, h_t = [->h_t; <-h_t], the bi-directional contextual encoding of x_t, is formed; [.;.] is the concatenation operator. To distinguish hidden states from different sources, we denote the h_j of the j-th word in P and the h_k of the k-th word in Q as h^p_j and h^q_k, respectively.

^3We can have separate parameters for the question and passage encoders, but a single encoder shared by both works better in the experiments.

Attention Layer Attention mechanisms in previous RC tasks (Kadlec et al., 2016; Hermann et al., 2015; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a;b) enable question-aware passage representations. We propose a novel attention mechanism inspired by word-by-word style attention methods (Rocktaschel et al., 2015; Wang & Jiang, 2015; Santos et al., 2016). For each p_j, a question-attended representation v_j is computed as follows (the example index i is omitted for simplicity):

alpha_{jk} = h^p_j . h^q_k
beta_j = sum_{k=1..|Q|} alpha_{jk} h^q_k
v_j = [h^p_j; beta_j]

where h^p_j and h^q_k are hidden states from the bi-directional RNN encoders (see Figure 1). The inner product alpha_{jk} is calculated between h^p_j and every question word state h^q_k; it indicates how well the passage word p_j matches each question word q_k. beta_j is a weighted pooling of the |Q| question hidden states, which serves as a p_j-aware question representation. The concatenation of h^p_j and beta_j leads to a passage-question joint representation, v_j in R^{4d}.^4 Next, we apply a second bi-GRU layer taking the v_j's as inputs, and obtain forward and backward representations ->gamma_j and <-gamma_j in R^d, and in turn their concatenation, gamma_j = [->gamma_j; <-gamma_j].

^4We tried another word-by-word attention method, as in (Santos et al., 2016), which feeds a similar passage representation to the question side. However, this did not lead to improvement, due to the confusion caused by long passages in RC. Consequently, we used the proposed simplified version of word-by-word attention on the passage side only.

Convolution Layer Every word is encoded with the complete passage context through the attention-layer RNN. We would like to model more complex representations of the words by introducing unigram, bigram and trigram representations. There are two benefits of this enhanced representation: 1) each word is enhanced with local context information, which helps identify the boundary of the answer chunk - using previous words has been a common feature in POS tagging and named entity recognition; and 2) the information brought by the n-gram into the word representation can enhance the semantic match between the inside of the answer chunk and the question. Imagine a three-word candidate, where the last word's representation includes the two previous words through the convolution layer: matching the last word can then also match the semantics of the chunk's interior. Specifically, we create for every word position j three representations, using the n-grams ending with the hidden state gamma_j:

gamma_{j,1} = gamma_j . W_{c1}
gamma_{j,2} = [gamma_{j-1}; gamma_j] . W_{c2}
gamma_{j,3} = [gamma_{j-2}; gamma_{j-1}; gamma_j] . W_{c3}
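The GRU equations above map directly onto code. A self-contained sketch of one update step (our own, using the transposed x @ W convention for the matrix products; the pipeline sketch earlier used the library nn.GRU instead):

```python
import torch

def gru_step(x_t, h_prev, W_r, U_r, W_u, U_u, W, U):
    """One GRU update, mirroring the four equations above."""
    r = torch.sigmoid(x_t @ W_r + h_prev @ U_r)        # reset gate
    u = torch.sigmoid(x_t @ W_u + h_prev @ U_u)        # update gate
    h_tilde = torch.tanh(x_t @ W + (r * h_prev) @ U)   # candidate state
    return (1 - u) * h_prev + u * h_tilde

n, d = 16, 8
# W_r, U_r, W_u, U_u, W, U with shapes (n, d) and (d, d) alternating.
params = [torch.randn(n, d) if i % 2 == 0 else torch.randn(d, d)
          for i in range(6)]
h = gru_step(torch.randn(n), torch.zeros(d), *params)
```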
Chunk Representation Layer A candidate answer chunk representation is dynamically created given the convolution layer output. We first decide the text boundary for the candidate chunk, and then form a chunk representation using all or part of the gamma_{j,l} outputs inside the chunk. To decide a candidate chunk (boundary), we tried two ways: (1) adopt the POS-trie-based approach used in our baseline, and (2) enumerate all possible chunks up to a maximum number of tokens. For (2), we create up to N (the max chunk length) chunks starting from any position j in P_i. Approach (1) can generate candidates of arbitrary lengths, but fails to recall candidates whose POS pattern is unseen in the training set; whereas approach (2) considers all possible candidates within a window and is more flexible, but over-generates invalid candidates.

For a candidate answer chunk c_{m,n} spanning from position m to n inclusively, we construct a chunk representation gamma^_{m,n}^l in R^{2d} using every gamma_{j,l} within the range [m, n], with a function g(.), and l in {1, 2, 3}. Formally,

gamma^_{m,n}^l = g(gamma_{m,l}, ..., gamma_{n,l})

Each gamma_{j,l} is a convolution output over concatenated forward and backward RNN hidden states from the attention layer, so the first half of gamma_{j,l} encodes information from the forward RNN hidden states and the second half from the backward RNN hidden states. We experimented with several pooling functions (e.g., max, average) for g(.), and found that, instead of pooling, the best g(.) is to concatenate the first half of the convolution output of the chunk's first word and the second half of the convolution output of the chunk's last word. Formally,

g(gamma_{m,l}, ..., gamma_{n,l}) = [->gamma_{m,l}; <-gamma_{n,l}]

where ->gamma_{m,l} is the half of the l-gram representation of the chunk's first word corresponding to the forward attention-RNN output, and <-gamma_{n,l} the backward half for the chunk's last word. We hypothesize that the hidden states at the two ends can better represent the chunk's contexts, which is critical for this task, than the states within the chunk. This observation also agrees with (Kobayashi et al., 2016).

Ranker Layer A score s^l_{m,n} for each l-gram chunk representation, denoting the probability of that chunk being the true answer, is calculated as a dot product with the question representation. The question representation is the concatenation of the last hidden state of the forward RNN and the first hidden state of the backward RNN. Formally, for the chunk c_{m,n} we have

s^l(c_{m,n} | P_i, Q_i) = gamma^_{m,n}^l . [->h^q_{|Q|}; <-h^q_1]

where s^l denotes the score generated from the l-gram representation, and ->h^q_{|Q|} and <-h^q_1 are the last and first hidden states output by question Q_i's forward and backward RNN encoders, respectively. After that, the final score for c_{m,n} is evaluated as a linear combination of the three scores, followed by a softmax over all candidate chunks:

s(c_{m,n} | P_i, Q_i) = softmax(W . [s^1; s^2; s^3])

where s^l is shorthand for s^l(c_{m,n} | P_i, Q_i), and W in R^3. At runtime, the chunk with the highest probability is taken as the answer.
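The boundary-based g(.) above, together with the pooling alternatives that were tried, fits in a few lines. A sketch (ours; gamma_l stands for the matrix of l-gram outputs for the whole passage):

```python
import torch

def chunk_repr(gamma_l, m, n, mode="ends"):
    """Representation of chunk c_{m,n} from l-gram outputs gamma_l of
    shape (|P|, 2d). 'ends' is the g() chosen in the paper: forward half
    of the first word plus backward half of the last word; 'max' and
    'avg' are the pooling alternatives the authors report trying."""
    d = gamma_l.size(1) // 2
    span = gamma_l[m:n + 1]
    if mode == "max":
        return span.max(dim=0).values
    if mode == "avg":
        return span.mean(dim=0)
    return torch.cat([gamma_l[m, :d], gamma_l[n, d:]])   # "ends"

gamma = torch.randn(30, 16)
print(chunk_repr(gamma, 4, 7).shape)   # torch.Size([16]), i.e. 2d
```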
In training, the following negative log-likelihood is minimized:

L = - sum_{i=1..N} log P(A_i | P_i, Q_i)

Note that the i-th training instance is only used when A_i is included in the corresponding candidate chunk set C_i, i.e. there exist m, n such that c_{m,n} = A_i. The softmax in the final layer serves as a list-wise ranking module, similar in spirit to (Cao et al., 2007).

Dataset We used the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for the experiments. We chose SQuAD because it is a mix of factoid and non-factoid questions, is real-world data (crowd-sourced), and is of large scale (over 100K question-answer pairs collected from 536 Wikipedia articles). Answers range from single words to long, variable-length phrases/clauses. It is a relaxation of the assumptions of the cloze-style and quiz-style RC datasets discussed in the Problem Definition section.

Table 2: Results on the SQuAD dataset.

Models         | Dev EM | Dev F1 | Test EM | Test F1
Rajpurkar 2016 | 39.8%  | 51.0%  | 40.4%   | 51.0%
Wang 2016      | 59.1%  | 70.0%  | 59.5%   | 70.3%
DCR w/o Conv.  | 62.5%  | 71.2%  | 62.5%   | 71.0%
DCR            | 63.4%  | 72.3%  | -       | -
DCR Ensemble   | -      | -      | 66.3%   | 74.7%

Features The input vector representation of each word w to the encoder RNNs has six parts, including a pre-trained 300-dimensional GloVe embedding (Pennington et al., 2014) and five features (see Figure 1): (1) a one-hot encoding (46 dimensions) for the part-of-speech (POS) tag of w; (2) a one-hot encoding (14 dimensions) for the named entity (NE) tag of w; (3) a binary value indicating whether w's surface form is the same as any word in the question; (4) whether the lemma form of w is the same as any word in the question; and (5) whether w is capitalized. Features (3) and (4) are designed to help the model align the passage text with the question. Note that some types of questions (e.g., "who", "when" questions) have answers with a specific POS/NE tag pattern. For instance, "who" questions mostly have proper nouns/persons as answers, and "when" questions may frequently have numbers/dates (e.g., a year) as answers. Thus, we believe that the model can exploit the correlation between question types and answer POS/NE patterns more easily with the POS and NE tag features.

Implementation Details We pre-processed the SQuAD dataset using the Stanford CoreNLP tool^5 (Manning et al., 2014) with its default setting to tokenize the text and obtain the POS and NE annotations. To train our model, we used stochastic gradient descent with the ADAM optimizer (Kingma & Ba, 2014) and an initial learning rate of 0.001. All GRU weights were initialized from a uniform distribution between (-0.01, 0.01). The hidden state size d was set to 300 for all GRUs. The question bi-GRU shared parameters with the passage bi-GRU, while the attention-based passage bi-GRU had its own parameters. We shuffled all training examples at the beginning of each epoch and adopted a curriculum learning approach (Bengio et al., 2009), sorting training instances by length in every 10 batches, to enable the model to start learning from relatively easier instances before moving on to harder ones. We also applied dropout of rate 0.2 to the embedding layer of the input bi-GRU encoder, and gradient clipping when the norm of the gradients exceeded 10. We trained in mini-batch style (mini-batch size 180) and applied zero-padding to the passage and question inputs in each batch. We also set the maximum passage length to 300 tokens, and pruned all tokens after the 300-th token in the training set to save memory and speed up the training process. This step reduced the training set size by about 1.6%. During testing, we test on the full-length passage, so that we do not prune out potential candidates. We trained the model for at most 30 epochs, stopping early if the accuracy did not improve for 10 epochs."}, {"section_index": "4", "section_name": "Results", "section_text": "Table 2 shows our main results on the SQuAD dataset. Compared to the scores reported in (Wang & Jiang, 2016), our exact match (EM) and F1 on the development set and EM score on the test set are better, and F1 on the test set is comparable. We also studied how each component in our model contributes to the overall performance. Table 3 shows the details as well as the results of the baseline ranker. As the first row of Table 3 shows, our baseline system improves 10% (EM) over Rajpurkar et al. (Rajpurkar et al., 2016) (Table 2, row 1), the feature-based ranking system.
However, when compared to our DCR model (Table 3, row 2), the baseline (row 1) is more than 12% (EM) behind, even though it is based on the state-of-the-art model for cloze-style RC tasks. This can be attributed to the advanced model structure and end-to-end manner of DCR.

^5stanfordnlp.github.io/CoreNLP/

For the feature-ranking-based system, we used the jforests ranker (Ganjisaffar et al., 2011) with the LambdaMART-RegressionTree algorithm, and the ranking metric was NDCG@10. For the Gated Attention Reader in the baseline system, we replicated the method and used the same configuration as in (Dhingra et al., 2016).

Table 3: Detailed system experiments on the SQuAD development set.

Models                            | EM    | F1
Chunk-and-Rank Pipeline Baseline  | 49.7% | 64.9%
DCR w/o Convolution               | 62.5% | 71.2%
DCR w/o Word-by-Word Attention    | 57.6% | 68.7%
DCR w/o POS feature (1)           | 59.2% | 68.8%
DCR w/o NE feature (2)            | 60.4% | 70.2%
DCR w/o Question-word feature (3) | 59.5% | 69.0%
DCR w/o Question-lemma feature (4)| 61.2% | 69.9%
DCR w/o Capitalized feature (5)   | 61.5% | 70.6%
DCR w/o Conv. w/ POS-trie         | 62.1% | 70.8%

[Figure: (a) EM and F1, with the percentage of answers per length, for ground-truth answer lengths 1-10; (b) EM and F1 per question head word.]

Figure 2: (a) Variations of DCR performance on ground-truth answer length (up to 10) in the development set. The curve with diamond knots also shows the percentage of answers for each length in the development set. (b) Performance comparisons for different question head words.

We also did ablation tests on our DCR model. First, replacing the word-by-word attention with Attentive Reader style attention (Hermann et al., 2015) decreases the EM score by about 4.5%, showing the strength of our proposed attention mechanism.

Second, we removed the input features to see the contribution of each feature. The results show that the POS feature (1) and the question-word feature (3) are the two most important features.

Finally, combining the DCR model with the proposed POS-trie constraints yields a score similar to the one obtained using the DCR model with all possible n-gram chunks. The result shows that (1) our chunk representations are powerful enough to differentiate even a huge number of chunks when no constraints are applied; and (2) the proposed POS-trie reduces the search space at the cost of a small drop in performance.

Analysis To better understand our system, we calculated the accuracy of the attention mechanism of the gated attention reader used in our deep-learning-based baseline. We found that it is 72% accurate, i.e., 72% of the time a word with the highest attention score is inside the correct answer span. This means that, if we could accurately detect the boundary around the word with the highest attention score to form the answer span, we could achieve an accuracy close to 72%. In addition, we checked the answer recall of our candidate chunking approach.
When we use a window size of 10, 92% of the time the ground truth answer is included in the extracted candidate chunk set. Thus the upper bound of the exact match score of our baseline system is around 66% (92%, the answer recall, x 72%). From the results, we see that our DCR system's exact match score is at 62%. This shows that DCR is proficient at differentiating answer spans dynamically.

To further analyze the system's performance when predicting answers of different lengths, we show the exact match (EM) and F1 scores for answers with lengths up to 10 tokens in Figure 2(a). From the graph, we can see that, as answer length increases, both EM and F1 drop, but at different speeds. The gap between F1 and exact match also widens as answer length increases. However, the model still yields a decent accuracy when the answer is longer than a single word. Additionally, Figure 2(b) shows that the system is better at "when" and "who" questions, but performs poorly on "why" questions.

[Figure: EM and F1 per "what + X" question type.]

Figure 3: Development set performance comparisons for different types of "what" questions (considering the types with more than 20 examples in the development set).

The large gap between exact match and F1 on "why" questions means that perfectly identifying the span is harder than locating the core of the answer span.

Since "what", "which", and "how" questions contain a broad range of question types, we split them further based on the bigram a question starts with, and Figure 3 shows the breakdown for "what" questions. We can see that "what" questions asking for explanations, such as "what happens" and "what happened", have lower EM and F1 scores. In contrast, "what" questions asking for years and numbers have much higher scores and, for these questions, exact match scores are close to F1 scores, which means chunking for these questions is easier for DCR."}, {"section_index": "5", "section_name": "6 RELATED WORK", "section_text": "Attentive Reader was the first neural model for factoid RCQA (Hermann et al., 2015). It uses a bidirectional RNN (Cho et al., 2014; Chung et al., 2014) to encode the document and the query respectively, and uses the query representation to match every token of the document. Attention Sum Reader (Kadlec et al., 2016) simplifies the model to just predicting positions of the correct answer in the document; training speed and test accuracy are both greatly improved on the CNN/Daily Mail dataset. (Chen et al., 2016) also simplified Attentive Reader and reported higher accuracy. Window-based Memory Networks (MemN2N) were introduced along with the CBT dataset (Hill et al., 2015); they do not use RNN encoders, but embed contexts as memory and match questions with the embedded contexts. Those models' mechanism is to learn the match between the answer context and the question/query representation. In contrast, memory-enhanced neural networks like Neural Turing Machines (Graves et al., 2014) and their variants (Zhang et al., 2015; Gulcehre et al., 2016; Zaremba & Sutskever, 2015; Chandar et al., 2016; Grefenstette et al., 2015) were also potential candidates for the task, and Gulcehre et al. (Gulcehre et al., 2016) reported results on the bAbI task that are worse than memory networks. Similarly, sequence-to-sequence models were also used (Yu et al., 2015; Hermann et al., 2015), but they did not yield better results either.

Recently, several models have been proposed to enable more complex inference for the RC task. For instance, the gated attention model (Dhingra et al., 2016) employs a multi-layer architecture, where each layer encodes the same document, but the attention is updated from layer to layer. EpiReader (Trischler et al., 2016b) adopted a joint training model for an answer extractor and a reasoner, where the extractor proposes top candidates, and the reasoner weighs each candidate by examining the entailment relationship between the question-answer representation and the document. An iterative alternating attention mechanism and gating strategies were proposed in (Sordoni et al., 2016) to optimize the attention through several hops. In contrast, Cui et al.
(Cui et al., 2016a;b) introduced fine-grained document attention from each question word, and then aggregated those attentions from each question token by summation with or without weights. This system achieved the state-of-the-art score on the CNN dataset. Those different variations all result in roughly 3-5% improvement over Attention Sum Reader, but none of them could achieve more than that. Other methods include using dynamic entity representations with max-pooling (Kobayashi et al., 2016), which aims to change entity representations with context, and Weissenborn's (Weissenborn, 2016) system, which tries to separate entities from the context and then matches the question to the context, scoring an accuracy around 70% on the CNN dataset.

However, all of those models assume that the answers are single tokens. This limits the types of questions the models can answer. Wang and Jiang (Wang & Jiang, 2016) proposed a match-LSTM and achieved good results on SQuAD. However, this approach predicts a chunk boundary or whether a word is part of a chunk or not. In contrast, our approach explicitly constructs the chunk representations, and similar chunks are compared directly to determine correct answer boundaries."}, {"section_index": "6", "section_name": "7 CONCLUSION", "section_text": "In this paper we proposed a novel neural reading comprehension model for question answering. Different from previously proposed models for factoid RCQA, the proposed model, dynamic chunk reader, is not restricted to predicting a single named entity as an answer or selecting an answer from a small, pre-defined candidate list. Instead, it is capable of answering both factoid and non-factoid questions, as it learns to select answer chunks that are suitable for an input question. DCR achieves this goal with a joint deep learning model enhanced with a novel attention mechanism and five simple yet effective features. Error analysis shows that the DCR model achieves good performance, but still needs to improve on predicting longer answers, which are usually non-factoid
in nature."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning, pp. 129-136. ACM, 2007.

Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016a.

Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-based neural networks for Chinese reading comprehension. arXiv preprint arXiv:1607.02250, 2016b.

Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016.

Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41-48. ACM, 2009.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. ACL, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representations with max-pooling improves machine reading. NAACL-HLT, 2016.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.

Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 3, pp. 4, 2013.

Tim Rocktaschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015.

Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. Attentive pooling networks. arXiv preprint arXiv:1602.03609, 2016.

Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.

Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering.
arXiv preprint arXiv:1512.02902, 2015.

Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. arXiv preprint arXiv:1603.08884, 2016a.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. arXiv preprint arXiv:1606.02270, 2016b.

Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. arXiv preprint arXiv:1512.08849, 2015.

Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016.

Dirk Weissenborn. Separating answers from queries for neural reading comprehension. arXiv preprint arXiv:1607.03316, 2016.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pp. 55-60, 2014. URL http://www.aclweb.org/anthology/P/P14/P14-5010.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.

Wei Zhang, Yang Yu, and Bowen Zhou. Structured memory for neural Turing machines. arXiv preprint arXiv:1510.03931, 2015.

Wenpeng Yin, Sebastian Ebert, and Hinrich Schutze. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341, 2016."}]
HJgXCV9xx | [{"section_index": "0", "section_name": "DIALOGUE LEARNING WITH HUMAN-IN-THE-LOOP", "section_text": "Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston
{jiwei, ahm, spchopra, ranzato, jase}@fb.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A good conversational agent (which we sometimes refer to as a learner or bot1) should have the ability to learn from the online feedback of a teacher: adapting its model when making mistakes and reinforcing the model when the teacher's feedback is positive. This is particularly important in the situation where the bot is initially trained in a supervised way on a fixed synthetic, domain-specific or pre-built dataset before release, but will be exposed to a different environment after release (e.g., more diverse natural language utterance usage when talking with real humans, different distributions, special cases, etc.). Most recent research has focused on training a bot from fixed training sets of labeled data, but seldom on how the bot can improve through online interaction with humans. Human (rather than machine) language learning happens during communication (Bassiri, 2011; Werts et al., 1995), and not from labeled datasets, hence making this an important subject to study.

In this work, we explore this direction by training a bot through interaction with teachers in an online fashion. The task is formalized under the general framework of reinforcement learning via the teacher's (dialogue partner's) feedback to the dialogue actions from the bot. The dialogue takes place in the context of question-answering tasks and the bot has to, given either a short story or a set of facts, answer a set of questions from the teacher. We consider two types of feedback: explicit numerical rewards as in conventional reinforcement learning, and textual feedback which is more natural in human dialogue, following (Weston, 2016). We consider two online training scenarios: (i) where the task is built with a dialogue simulator allowing for easy analysis and repeatability of experiments; and (ii) where the teachers are real humans using Amazon Mechanical Turk.

We explore important issues involved in online learning such as how a bot can be most efficiently trained using a minimal amount of teacher feedback, how a bot can harness different types of feedback signal, how to avoid pitfalls such as instability during online learning with different types of feedback via data balancing and exploration, and how to make learning with real humans feasible via data batching. Our findings indicate that it is feasible to build a pipeline that starts from a model trained with fixed data and then learns from interactions with humans to improve itself.

1In this paper, we refer to a learner (either a human or a bot/dialogue agent which is a machine learning algorithm) as the student, and their more knowledgeable dialogue partner as the teacher."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of labeled data rather than interacting with a dialogue partner in an online fashion. In this paper we explore this direction in a reinforcement learning setting where the bot improves its question-answering ability from feedback a teacher gives
following its generated responses. We build a simulator that tests various aspects of such learning in a synthetic environment, and introduce models that work in this regime. Finally, real experiments with Mechanical Turk validate the approach.

Reinforcement learning has been widely applied to dialogue, especially in slot filling to solve domain-specific tasks (Walker, 2000; Schatzmann et al., 2006; Singh et al., 2000; 2002). Efforts include Markov Decision Processes (MDPs) (Levin et al., 1997; 2000; Walker et al., 2003; Pieraccini et al., 2009), POMDP models (Young et al., 2010; 2013; Gasic et al., 2013; 2014) and policy learning (Su et al., 2016). Such a line of research focuses mainly on frames with slots to fill, where the bot will use reinforcement learning to model a state transition pattern, generating dialogue utterances to prompt the appropriate user responses to put in the desired slots. This goal is different from ours, where we study end-to-end learning systems and also consider non-reward-based setups via textual feedback.

Our work is related to the line of research that focuses on supervised learning for question answering (QA) from dialogues (Dodge et al., 2015; Weston, 2016), either given a database of knowledge (Bordes et al., 2015; Miller et al., 2016) or short texts (Weston et al., 2015; Hermann et al., 2015; Rajpurkar et al., 2016). In our work, the discourse includes the statements made in the past, the question and answer, and crucially the response from the teacher. The latter is what makes the setting different from the standard QA setting, i.e. we use methods that leverage this response also, not just to answer questions. Further, QA works only consider fixed datasets with gold annotations, i.e. they do not consider a reinforcement learning setting.

Our work is closely related to a recent work from Weston (2016) that learns through conducting conversations where supervision is given naturally in the response during the conversation. That work introduced the use of forward prediction that learns by predicting the teacher's feedback, in addition to using reward-based learning of correct answers. However, two important issues were not addressed: (i) it did not use a reinforcement learning setting, but instead used pre-built datasets with fixed policies given in advance; and (ii) experiments used only simulated and no real language data. Hence, models that can learn policies from real online communication were not investigated. To make the differences with our work clear, we will now detail these points further.

The experiments in (Weston, 2016) involve constructing pre-built fixed datasets, rather than training the learner within a simulator, as in our work. Pre-built datasets can only be made by fixing a prior in advance. They achieve this by choosing an omniscient (but deliberately imperfect) labeler that gets π_acc examples always correct (the paper looked at values 50%, 10% and 1%). Again, this was not learned, and was fixed to generate the datasets. Note that the paper refers to these answers as coming from "the learner" (which should be the model), but since the policy is fixed it actually does not depend on the model. In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, so their setting was not practically viable.
In our work, when policy training is viewed as batch learning over iterations of the dataset, updating the policy on each iteration, (Weston, 2016) can be viewed as training for only one iteration, whereas we perform multiple iterations. This is explained further in Sections 4.2 and 5.1. We show in our experiments that performance improves over the iterations, i.e. it is better than the first iteration. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (under certain conditions which are detailed). This is a key contribution of our work.

Finally, (Weston, 2016) only conducted experiments on synthetic or templated language, and not real language; in particular, the feedback from the teacher was scripted. While we believe that synthetic datasets are very important for developing understanding (hence we develop a simulator and conduct experiments also with synthetic data), for a new method to gain traction it must be shown to work on real data. We hence employ Mechanical Turk to collect real language data for the questions and, importantly, for the teacher feedback, and construct experiments in this real setting.

We begin by describing the data setup we use. In our first set of experiments we build a simulator as a testbed for learning algorithms. In our second set of experiments we use Mechanical Turk to provide real human teachers giving feedback."}, {"section_index": "3", "section_name": "3.1 SIMULATOR", "section_text": "The simulator adapts two existing fixed datasets to our online setting. Following Weston (2016), we use (i) the single supporting fact problem from the bAbI datasets (Weston et al., 2015), which consists of 1000 short stories from a simulated world interspersed with questions; and (ii) the WikiMovies dataset (Weston et al., 2015), which consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). Each dialogue takes place between a teacher, scripted by the simulation, and a bot. The communication protocol is as follows: (1) the teacher first asks a question from the fixed set of questions existing in the dataset, (2) the bot answers the question, and finally (3) the teacher gives feedback on the bot's answer.

We follow the paradigm defined in (Weston, 2016) where the teacher's feedback takes the form of either textual feedback, a numerical reward, or both, depending on the task. For each dataset, there are ten tasks, which are further described in Sec. A and illustrated in Figure 5 of the appendix. We also refer the readers to (Weston, 2016) for more detailed descriptions and the motivation behind these tasks. In the main text of this paper we only consider Task 6 ("partial feedback"): the teacher replies with positive textual feedback (6 possible templates) when the bot answers correctly, and positive reward is given only 50% of the time. When the bot is wrong, the teacher gives textual feedback containing the answer. Descriptions and experiments on the other tasks are detailed in the appendix. Example dialogues are given in Figure 1.
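To make the Task 6 protocol above concrete, here is a minimal Python sketch of a single simulated episode. The feedback templates, the QA pair and the trivial policy are illustrative stand-ins of our own, not the authors' simulator code or the real bAbI/WikiMovies data.

```python
import random

# Illustrative positive-feedback templates (the real simulator has 6 of them).
POSITIVE_TEMPLATES = ["Yes, that's right!", "Correct!", "That's correct."]

def run_episode(qa_pair, bot_policy, reward_prob=0.5):
    """One teacher-bot exchange: question -> answer -> feedback (+ reward 50% of the time)."""
    question, gold_answer = qa_pair
    answer = bot_policy(question)                    # (2) the bot answers the question
    if answer == gold_answer:                        # (3) the teacher gives feedback
        feedback = random.choice(POSITIVE_TEMPLATES)
        reward = 1 if random.random() < reward_prob else 0  # positive reward only half the time
    else:
        feedback = "No, the answer is %s." % gold_answer    # feedback contains the answer
        reward = 0
    return answer, feedback, reward

# Example usage with a trivial always-correct policy (a made-up QA pair):
episode = run_episode(("which movie did Forest Gump star in?", "Forrest Gump"),
                      bot_policy=lambda q: "Forrest Gump")
```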
Figure[1).\nThe difference between our simulation and the original fixed tasks of Weston. (2016) is that models are trained on-the-fly. After receiving feedback and/or rewards, we update the model (policy) and then deploy it to collect teacher's feedback in the next episode or batch. This means the model's policy affects the data which is used to train it, which was not the case in the previous work..\nFigure 1: Simulator sample dialogues for the bAbI task (left) and WikiMovies (right). We. consider 10 different tasks followingWeston (2016) but here describe only Task 6; other tasks are detailed in the appendix. The teacher's dialogue is in black and the bot is in red. (+) indicates. receiving positive reward, given only 50% of the time even when correct..\nFinally, we extended WikiMovies using Mechanical Turk so that real human teachers are giving feedback rather than using a simulation. As both the questions and feedback are templated in the simulation, they are now both replaced with natural human utterances. Rather than having a set of simulated tasks, we have only one task, and we gave instructions to the teachers that they could give feedback as they see fit. The exact instructions given to the Turkers is given in AppendixB] In general, each independent response contains feedback like (i) positive or negative sentences; or (ii) a phrase containing the answer or (iii) a hint, which are similar to setups defined in the simulator However, some human responses cannot be so easily categorized, and the lexical variability is much larger in human responses. Some examples of the collected data are given in Figure2"}, {"section_index": "4", "section_name": "4.1 MODEL ARCHITECTURE", "section_text": "In our experiments, we used variants of the End-to-End Memory Network (MemN2N) mod (Sukhbaatar et al.] [2015) as our underlying architecture for learning from dialogue\nThe input to MemN2N is the last utterance of the dialogue history x as well as a set of memories (context) C=c1, c2, ..., Cy. The memory C encodes both short-term memory, e.g., dialogue histories. between the bot and the teacher, and long-term memories, e.g., the knowledge base facts that the bot has access to. Given the input x and C, the goal is to produce an output/label a..\nIn the first step, the query x is transformed to a vector representation uo by summing up its con- stituent word embeddings: uo = Ax. The input x is a bag-of-words vector and A is the d V word embedding matrix where d denotes the emebbding dimension and V denotes the vocabulary size Each memory c; is similarly transformed to a vector m. The model will read information from the memory by comparing input representation uo with memory vectors m; using softmax weights:\n01 = pimi pt = softmax(ufmi\nIn the end. u y is input to a softmax function for the final prediction\na = softmax(uNy1, uNy2, .., uNyL)\nThe standard way MemN2N is trained is via a cross entropy criterion on known input-output pairs. which we refer to as supervised or imitation learning. As our work is in a reinforcement learning setup where our model must make predictions to learn, this procedure will not work, so we instead consider reinforcement learning algorithms which we describe next."}, {"section_index": "5", "section_name": "4.2 REINFORCEMENT LEARNING", "section_text": "In this section, we present the algorithms we used to train MemN2N in an online fashion. Our learn. ing setup can be cast as a particular form of Reinforcement Learning. The policy is implemented by. the MemN2N model. 
"}, {"section_index": "5", "section_name": "4.2 REINFORCEMENT LEARNING", "section_text": "In this section, we present the algorithms we used to train MemN2N in an online fashion. Our learning setup can be cast as a particular form of Reinforcement Learning. The policy is implemented by the MemN2N model. The state is the dialogue history. The action space corresponds to the set of answers the MemN2N has to choose from to answer the teacher's question. In our setting, the policy chooses only one action for each episode. The reward is either 1 (a reward from the teacher when the bot answers correctly) or 0 otherwise. Note that in our experiments, a reward equal to 0 might mean that the answer is incorrect or that the positive reward is simply missing. The overall setup is closest to standard contextual bandits, except that the reward is binary.

When working with real human dialogues, e.g. collecting data via Mechanical Turk, it is easier to set up a task whereby a bot is deployed to respond to a large batch of utterances, as opposed to a single one. The latter would be more difficult to manage and scale up, since it would require some form of synchronization between the model replicas interacting with each human. This is comparable to the real-world situation where a teacher can either ask a student a single question and give feedback right away, or set up a test that contains many questions and grade all of them at once. Only after the learner completes all questions can it hear feedback from the teacher.

We use batch size to refer to how many dialogue episodes the current model is used to collect feedback on before updating its parameters. In the Reinforcement Learning literature, batch size is related to off-policy learning since the MemN2N policy is trained using episodes collected with a stale version of the model. Our experiments show that our model and base algorithms are very robust to the choice of batch size, alleviating the need for correction terms in the learning algorithm (Bottou et al., 2013).

We consider two strategies: (i) online batch size, whereby the target policy is updated after doing a single pass over each batch (a batch size of 1 reverts to the usual on-policy online learning); and (ii) dataset-sized batch, whereby training is continued to convergence on the batch, which is the size of the dataset, and then the target policy is updated with the new model, and a new batch is drawn and the procedure iterates. These strategies can be applied to all the methods we use, described below.

Next, we discuss the learning algorithms we considered in this work."}, {"section_index": "6", "section_name": "4.2.1 REWARD-BASED IMITATION (RBI)", "section_text": "The simplest algorithm we first consider is the one employed in Weston (2016). RBI relies on positive rewards provided by the teacher. It is trained to imitate the correct behavior of the learner, i.e., learning to predict the correct answers (with reward 1) at training time and disregarding the other ones. This is implemented by using a MemN2N that maps a dialogue input to a prediction, i.e. using the cross entropy criterion on the positively rewarded subset of the data.

In order to make this work in the online setting, which requires exploration to find the correct answer, we employ an ε-greedy strategy: the learner makes a prediction using its own model (the answer assigned the highest probability) with probability 1 - ε, otherwise it picks a random answer with probability ε. The teacher will then give a reward of +1 if the answer is correct, otherwise 0. The bot will then learn to imitate the correct answers: predicting the correct answers while ignoring the incorrect ones.
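A minimal sketch of the two RBI ingredients just described, ε-greedy action selection and imitation on the positively rewarded subset; the function names and data layout are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbi_act(scores, epsilon):
    """epsilon-greedy selection: a random answer with probability epsilon,
    otherwise the model's highest-scoring answer."""
    if rng.random() < epsilon:
        return int(rng.integers(len(scores)))
    return int(np.argmax(scores))

def rbi_filter(batch):
    """RBI keeps only positively rewarded (input, answer) pairs and imitates
    them with cross entropy; `batch` holds (input, chosen_answer, reward)."""
    return [(x, a) for (x, a, r) in batch if r == 1]  # discard reward-0 episodes

# Example: only the r=1 episode survives as a supervised training pair.
supervised_pairs = rbi_filter([("ctx1", 3, 1), ("ctx2", 0, 0)])
```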
"}, {"section_index": "7", "section_name": "4.2.2 REINFORCE", "section_text": "The second algorithm we use is the REINFORCE algorithm (Williams, 1992), which maximizes the expected cumulative reward of the episode, in our case the expected reward provided by the teacher. The expectation is approximated by sampling an answer from the model distribution. Let a denote the answer that the learner gives, p(a) denote the probability that the current model assigns to a, r denote the teacher's reward, and J(θ) denote the expectation of the reward. We have:

∇J(θ) ≈ ∇ log p(a) [r - b]

where b is the baseline value, which is estimated using a linear regression model that takes as input the output of the memory network after the last hop, and outputs a scalar b denoting the estimation of the future reward. The baseline model is trained by minimizing the mean squared loss between the estimated reward b and the actual reward r, ||r - b||^2. We refer the readers to (Ranzato et al., 2015; Zaremba & Sutskever, 2015) for more details. The baseline estimator model is independent from the policy model, and its error is not backpropagated through the policy model.

The major difference between RBI and REINFORCE is that (i) the learner only tries to imitate correct behavior in RBI while in REINFORCE it also leverages the incorrect behavior, and (ii) the learner explores using an ε-greedy strategy in RBI while in REINFORCE it uses the distribution over actions produced by the model itself."}, {"section_index": "8", "section_name": "4.2.3 FORWARD PREDICTION (FP)", "section_text": "FP (Weston, 2016) handles the situation where a numerical reward for a bot's answer is not available, meaning that there are no +1 or 0 labels available after a student's utterance. Instead, the model assumes the teacher gives textual feedback t to the bot's answer, taking the form of a dialogue utterance, and the model tries to predict this instead. Suppose that x denotes the teacher's question and C = c_1, c_2, ..., c_N denotes the dialogue history as before. In FP, the model first maps the teacher's initial question x and dialogue history C to a vector representation u using a memory network with multiple hops. Then the model will perform another hop of attention over all possible student answers in A, with an additional part that incorporates the information of which candidate (i.e., a) was actually selected in the dialogue:

p_a_hat = softmax(u^T y_a_hat),    o = sum_{a_hat in A} p_a_hat (y_a_hat + β 1[a_hat = a])

where y_a_hat denotes the vector representation for the student's answer candidate a_hat, and β is a (learned) d-dimensional vector used to signify the actual action a that the student chose. o is then combined with u to predict the teacher's feedback t using a softmax:

u_1 = o + u,    t_hat = softmax(u_1^T x_{r_1}, ..., u_1^T x_{r_N})

where x_{r_i} denotes the embedding for the ith response. In the online setting, the teacher will give textual feedback, and the learner needs to update its model using the feedback. It was shown in Weston (2016) that in an off-line setting this procedure can work either on its own, or in conjunction with a method that uses numerical rewards as well for improved performance. In the online setting we consider two simple extensions (a sketch of the FP scoring step itself follows this list):

- ε-greedy exploration: with probability ε the student will give a random answer, and with probability 1 - ε it will give the answer that its model assigns the largest probability. This method enables the model to explore the space of actions and to potentially discover correct answers.

- data balancing: cluster the set of teacher responses t and then balance training across the clusters equally.2 This is a type of experience replay (Mnih et al., 2013) but sampling with an evened distribution. Balancing stops part of the distribution from dominating the learning. For example, if the model is not exposed to sufficient positive and negative feedback, and one class overly dominates, the learning process degenerates to a model that always predicts the same output regardless of its input.

2In the simulated data, because the responses are templates, this can be implemented by first randomly sampling the response, and then randomly sampling a story with that response; we keep the history of all stories seen, from which we sample. For real data slightly more sophisticated clustering should be used.
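Here is a small NumPy sketch of the FP scoring step defined by the equations above (attention over answer candidates, the β marker on the chosen answer, and the softmax over feedback candidates). The shapes, variable names and random inputs are our own illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward_predict(u, cand_embs, chosen, beta, feedback_embs):
    """FP hop: attend over answer candidates, add the learned vector beta to
    the candidate the student actually chose, then score teacher feedback."""
    p = softmax(cand_embs @ u)             # p_a = softmax(u^T y_a)
    marked = cand_embs.copy()
    marked[chosen] += beta                 # y_a + beta * 1[a_hat = a]
    o = p @ marked                         # o = sum_a p_a (y_a + ...)
    u1 = o + u                             # u_1 = o + u
    return softmax(feedback_embs @ u1)     # t_hat over feedback candidates

rng = np.random.default_rng(1)
d = 8
t_probs = forward_predict(rng.normal(size=d), rng.normal(size=(6, d)),
                          chosen=2, beta=rng.normal(size=d),
                          feedback_embs=rng.normal(size=(4, d)))
```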
"}, {"section_index": "9", "section_name": "5 EXPERIMENTS", "section_text": "Experiments are first conducted using our simulator, and then using Amazon Mechanical Turk with real human subjects taking the role of the teacher.

Online Experiments. In our first experiments, we considered both the bAbI and WikiMovies tasks and varied batch size, random exploration rate ε, and type of model. Figure 3 and Figure 4 show (Task 6) results on bAbI and WikiMovies. Other tasks yield similar conclusions and are reported in the appendix.

[Figure 3 here: six panels of training epoch vs. test accuracy curves ("Random Exploration for RBI", "Random Exploration for FP", "Random Exploration for FP with Balancing", "Comparing RBI, FP and REINFORCE", "RBI (eps=0.6) Varying Batch Size", "FP (eps=0.6) Varying Batch Size"); only the panel titles and caption are recoverable from the source.]

Figure 3: Training epoch vs. test accuracy for bAbI (Task 6) varying exploration ε and batch size. Random exploration is important for both reward-based imitation (RBI) and forward prediction (FP). Performance is largely independent of batch size, and RBI performs similarly to REINFORCE. Note that supervised, rather than reinforcement learning, with gold standard labels achieves 100% accuracy on this task.

Overall, we obtain the following conclusions:

- In general RBI and FP do work in a reinforcement learning setting, but can perform better with random exploration. In particular RBI can fail without exploration: RBI needs random noise for exploring labels, otherwise it can get stuck predicting a subset of labels and fail.

- REINFORCE obtains similar performance to RBI with optimal ε.

- FP with balancing or with exploration via ε both outperform FP alone.

- For both RBI and FP, performance is largely independent of online batch size.
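For reference alongside these comparisons, the REINFORCE update of Section 4.2.2 with its learned linear baseline can be sketched as follows; this is a toy single-sample illustration (gradients only with respect to the score vector), not the authors' training code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_grad(scores, w_baseline, h_last, reward, rng):
    """Sample an answer from the policy, subtract the baseline b = w^T h,
    and return the gradient of log p(a) scaled by (r - b) w.r.t. the scores."""
    p = softmax(scores)
    a = rng.choice(len(p), p=p)        # sample action from the model distribution
    b = float(w_baseline @ h_last)     # baseline from the last-hop state
    dlogp = -p.copy()
    dlogp[a] += 1.0                    # d log p(a) / d scores = onehot(a) - p
    grad_scores = dlogp * (reward - b) # grad J ~ grad log p(a) [r - b]
    baseline_err = b - reward          # squared-loss residual for training w
    return grad_scores, baseline_err

rng = np.random.default_rng(2)
g, be = reinforce_grad(rng.normal(size=5), rng.normal(size=8),
                       rng.normal(size=8), 1.0, rng)
```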
Dataset Batch Size Experiments. Given that larger online batch sizes appear to work well, and that this could be important in a real-world data collection setup where the same model is deployed to gather a large amount of feedback from humans, we conducted further experiments where the batch size is exactly equal to the dataset size, and for each batch training is completed to convergence.

[Figure 4 here: four panels of training epoch vs. test accuracy curves ("Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", "Comparing RBI, FP and REINFORCE"); only the panel titles and caption are recoverable from the source.]

Figure 4: WikiMovies: Training epoch vs. test accuracy on Task 6 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP with ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably. Note that supervised, rather than reinforcement learning, with gold standard labels achieves 80% accuracy on this task (Weston, 2016).

After the model has been trained on the dataset, it is deployed to collect a new dataset of questions and answers, and the process is repeated. Table 1 reports test accuracy at each iteration of training using bAbI Task 6 as the case study (see the appendix for results on other tasks). The following conclusions can be made for this setting:

- RBI improves in performance as we iterate. Unlike in the online case, RBI does not need random exploration. We believe this is because the first batch, which is collected with a randomly initialized model, contains enough variety of examples with positive rewards that the model does not get stuck predicting a subset of labels.

- FP is not stable in this setting. This is because once the model gets very good at making predictions (at the third iteration), it is not exposed to a sufficient number of negative responses anymore. From that point on, learning degenerates and performance drops as the model always predicts the same responses. At the next iteration, it will recover again since it has a more balanced training set, but then it will collapse again in an oscillating behavior.

- FP does work if extended with balancing or random exploration with sufficiently large ε.

- RBI+FP also works well and helps with the instability of FP, alleviating the need for random exploration and data balancing.

Overall, our simulation results indicate that while a bot can be effectively trained fully online from bot-teacher interactions, collecting real dialogue data in batches (which is easier to collect and iterate experiments over) is also a viable approach. We hence pursue the latter approach in our next set of experiments.
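The dataset-sized batch procedure described above amounts to a simple outer loop. The sketch below is a schematic of that loop under our own naming assumptions (`teacher` and `train_step` are placeholders for the simulator and the RBI/FP/RBI+FP updates).

```python
def iterate_dataset_batches(policy, questions, teacher, train_step, n_iterations=6):
    """The stale policy collects feedback over the whole training set, the
    learner trains to convergence on that batch, and the deployed policy is
    then replaced; repeat for several iterations."""
    for _ in range(n_iterations):
        batch = [(q,) + teacher(q, policy(q)) for q in questions]  # (q, feedback, reward)
        policy = train_step(policy, batch)   # stand-in for RBI/FP/RBI+FP training
    return policy

# Toy usage: a "policy" that always answers "red", a teacher that rewards it.
final = iterate_dataset_batches(
    policy=lambda q: "red",
    questions=["what is a color in the US flag?"],
    teacher=lambda q, a: ("that's right!", 1) if a == "red" else ("no", 0),
    train_step=lambda pol, batch: pol,       # no-op update for illustration
)
```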
Iteration                        1     2     3     4     5     6
Imitation Learning               0.24  0.23  0.23  0.22  0.23  0.23
Reward-Based Imitation (RBI)     0.74  0.87  0.90  0.96  0.96  0.98
Forward Pred. (FP)               0.99  0.96  1.00  0.30  1.00  0.29
RBI+FP                           0.99  0.96  0.97  0.95  0.94  0.97
FP (balanced)                    0.99  0.97  0.97  0.97  0.97  0.97
FP (rand. exploration ε = 0.25)  0.96  0.88  0.94  0.26  0.64  0.99
FP (rand. exploration ε = 0.5)   0.98  0.98  0.99  0.98  0.95  0.99

Table 1: Test accuracy of various models per iteration in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 6. Results > 0.95 are in bold.

Relation to experiments in Weston (2016). As described in detail in Section 2, the datasets we use in our experiments were introduced in (Weston et al., 2015). However, that work involved constructing pre-built fixed policies (and hence, datasets), rather than training the learner in a reinforcement/interactive learning setting using a simulator, as in our work. They achieved this by choosing an omniscient (but deliberately imperfect) labeler that gets π_acc examples always correct (the paper looked at values 1%, 10% and 50%). In a realistic setting one does not have access to an omniscient labeler; one has to learn a policy completely from scratch, online, starting with a random policy, as we do here. Nevertheless, it is possible to compare our learnt policies to those results because we use the same train/valid/test splits.

The clearest comparison is via Table 1, where the policy is learnt using batch iterations of the dataset, updating the policy on each iteration. Weston et al. (2015) can be viewed as training for only one iteration, with a pre-built policy, as explained above, where 59%, 81% and 99% accuracy was obtained for RBI for π_acc of 1%, 10% and 50% respectively.4 While a π_acc of 50% is good enough to solve the task, lower values are not. In this work a random policy begins with 74% accuracy on the first iteration, but importantly on each iteration the policy is updated and improves, with values of 87% and 90% on iterations 2 and 3 respectively, and 98% on iteration 6. This is a key differentiator to the work of (Weston et al., 2015) where such improvement was not shown. We show that such online learning works for both reward-based numerical feedback and for forward prediction methods using textual feedback (as long as balancing or random exploration is performed sufficiently). The final performance outperforms most values of π_acc from Weston et al. (2015), unless π_acc is so large that the task is already solved. This is a key contribution of our work.

Similar conclusions can be made for Figures 3 and 4. Despite our initial random policy starting at close to 0% accuracy, if random exploration ε > 0.2 is employed, then after a number of epochs the performance is better than for most values of π_acc from Weston et al. (2015); e.g., compare the accuracies given in the previous paragraph (59%, 81% and 99%) to Figure 3, top left.

4Note, this is not the same as a randomly initialized neural network policy, because due to the synthetic construction with an omniscient labeler the labels will be balanced. In our work, we learn the policy from randomly initialized weights which are updated as we learn the policy."}, {"section_index": "10", "section_name": "5.2 HUMAN FEEDBACK", "section_text": "We employed Turkers to both ask questions and then give textual feedback on the bot's answers, as described in Section 3.2. Our experimental protocol was as follows.
We first trained a MemN2N using supervised (i.e., imitation) learning on a training set of 1000 questions produced by Turkers and using the known correct answers provided by the original dataset (and no textual feedback). Next, using the trained policy, we collected textual feedback for the responses of the bot for an additional 10,000 questions. Examples from the collected dataset are given in Figure 2. Given this dataset, we compare various models: RBI, FP and FP+RBI. As we know the correct answers to the additional questions, we can assign a positive reward to questions the bot got correct. We hence measure the impact of the sparseness of this reward signal, where a fraction r of additional examples have rewards. The models are tested on a test set of ~8,000 questions (produced by Turkers), and hyperparameters are tuned on a similarly sized validation set. Note this is a harder task than the WikiMovies task in the simulator, due to the use of natural language from Turkers, hence lower test performance is expected.

Results are given in Table 2. They indicate that both RBI and FP are useful. When rewards are sparse, FP still works via the textual feedback, while RBI can only use the initial 1000 examples when r = 0. As FP does not use numerical rewards at all, it is invariant to the parameter r. The combination of FP and RBI outperforms either alone.

Model                         r = 0   r = 0.1  r = 0.5  r = 1
Reward-Based Imitation (RBI)  0.333   0.340    0.365    0.375
Forward Prediction (FP)       0.358   0.358    0.358    0.358
RBI+FP                        0.431   0.438    0.443    0.441

Table 2: Incorporating Feedback From Humans via Mechanical Turk. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples) and additional sparse binary rewards (fraction r of examples have rewards). Forward Prediction and Reward-based Imitation are both useful, with their combination performing best.

We also conducted additional experiments comparing with (i) synthetic feedback and (ii) the fully supervised case, which are given in Appendix C.1. They show that the results with human feedback are competitive with these approaches.

We studied dialogue learning of end-to-end models using textual feedback and numerical rewards. Both fully online and iterative batch settings are viable approaches to policy learning, as long as possible instabilities in the learning algorithms are taken into account. Secondly, we showed for the first time that the recently introduced FP method can work in both an online setting and on real human feedback. Overall, our results indicate that it is feasible to build a practical pipeline that starts with a model trained on an initial fixed dataset, which then learns from interactions with humans in a (semi-)online fashion to improve itself. Future research should work towards doing this in a never-ending learning setup."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Mohammad Amin Bassiri. Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011.

Leon Bottou, Jonas Peters, Joaquin Quinonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14:3207-3260, 2013.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.

Milica Gasic, Catherine Breslin, Matthew Henderson, Dongho Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. Pomdp-based dialogue manager adaptation to extended domains.
In Proceedings of SIGDIAL, 2013.

Milica Gasic, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young. Incremental on-line adaptation of pomdp-based dialogue managers to extended domains. In Proceedings of InterSpeech, 2014.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. Learning dialogue strategies within the markov decision process framework. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pp. 72-79. IEEE, 1997.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11-23, 2000.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.

Satinder Singh, Michael Kearns, Diane J Litman, Marilyn A Walker, et al. Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pp. 645-651, 2000.

Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. Optimizing dialogue management with reinforcement learning: Experiments with the njfun system. Journal of Artificial Intelligence Research, 16:105-133, 2002.

Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689, 2016.

Marilyn A Walker, Rashmi Prasad, and Amanda Stent. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH, 2003.

Jason Weston. Dialog-based language learning. arXiv preprint arXiv:1604.06045, 2016.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

Steve Young, Milica Gasic, Simon Keizer, Francois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. The hidden information state model: A practical framework for pomdp-based spoken dialogue management. Computer Speech & Language, 24(2):150-174, 2010.
Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. Are we there yet? Research in commercial spoken dialog systems. In International Conference on Text, Speech and Dialogue, pp. 3-13. Springer, 2009.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 362, 2015."}, {"section_index": "12", "section_name": "FURTHER SIMULATOR TASK DETAILS", "section_text": "The tasks in Weston (2016) were specifically:

- Task 1: The teacher tells the student exactly what they should have said (supervised baseline).
- Task 2: The teacher replies with positive textual feedback and reward, or negative textual feedback.
- Task 3: The teacher gives textual feedback containing the answer when the bot is wrong.
- Task 4: The teacher provides a hint by providing the class of the correct answer, e.g., "No, it's a movie" for the question "which movie did Forest Gump star in?".
- Task 5: The teacher provides a reason why the student's answer is wrong by pointing out the relevant supporting fact from the knowledge base.
- Task 6: The teacher gives positive reward only 50% of the time.
- Task 7: Rewards are missing and the teacher only gives natural language feedback.
- Task 8: Combines Tasks 1 and 2 to see whether a learner can learn successfully from both forms of supervision at once.
- Task 9: The bot asks questions of the teacher about what it has done wrong.
- Task 10: The bot will receive a hint rather than the correct answer after asking for help.

We refer the readers to (Weston, 2016) for more detailed descriptions and the motivation behind these tasks. The difference in our system is that the model can be trained on-the-fly via the simulator: after receiving feedback and/or rewards, the model can update itself and apply its learning to the next episode. We present results on Tasks 2, 3 and 4 in this appendix.

These are the instructions given for the textual feedback Mechanical Turk task (we also constructed a separate task to collect the initial questions, not described here):

Title: Write brief responses to given dialogue exchanges (about 15 min)

Description: Write a brief response to a student's answer to a teacher's question, providing feedback to the student on their answer.

Each task consists of the following triplets:

1. a question by the teacher
2. the correct answer(s) to the question (separated by "OR")
3. a proposed answer in reply to the question from the student

Consider the scenario where you are the teacher and have already asked the question, and received the reply from the student. Please compose a brief response giving feedback to the student about their answer. The correct answers are provided so that you know whether the student was correct or not.

For example, given 1) question: "what is a color in the united states flag?"; 2) correct answer: "white, blue, red"; 3) student reply: "red", your response could be something like "that's right!"; for 3) reply: "green", you might say "no that's not right" or "nope, a correct answer is actually white".

Please vary responses and try to minimize spelling mistakes. If the same responses are copied/pasted or overused, we'll reject the HIT.

Avoid naming the student or addressing "the class" directly.

We will consider bonuses for higher quality responses during review.

Figure 5: The ten tasks our simulator implements, which evaluate different forms of teacher response and binary feedback.
In each case the same example from WikiMovies is given for simplicity, where the student answered correctly for all tasks (left) or incorrectly (right). Red text denotes responses by the bot, with S denoting the bot. Blue text is spoken by the teacher, with T denoting the teacher's response. For imitation learning the teacher provides the response the student should say, denoted with S in Tasks 1 and 8. A (+) denotes a positive reward.

Iteration                        1     2     3     4     5     6
Imitation Learning               0.24  0.23  0.23  0.23  0.25  0.25
Reward-Based Imitation (RBI)     0.95  0.99  0.99  0.99  1.00  1.00
Forward Pred. (FP)               1.00  0.19  0.86  0.30  0.99  0.22
RBI+FP                           0.99  0.99  0.99  0.99  0.99  0.99
FP (balanced)                    0.99  0.97  0.98  0.98  0.96  0.97
FP (rand. exploration ε = 0.25)  0.99  0.91  0.93  0.88  0.94  0.94
FP (rand. exploration ε = 0.5)   0.98  0.93  0.97  0.96  0.95  0.97

Table 3: Test accuracy of various models in the dataset batch size case (using batch size equal to the size of the full training set) for bAbI, Task 3. Results > 0.95 are in bold.

[Figure 6 here: six panels of training epoch vs. test accuracy curves with the same layout and panel titles as Figure 3; only the caption is recoverable from the source.]

Figure 6: Training epoch vs. test accuracy for bAbI (Task 2) varying exploration ε and batch size.

[Figure 7 here: six panels of training epoch vs. test accuracy curves with the same layout and panel titles as Figure 3; only the caption is recoverable from the source.]

Figure 7: Training epoch vs. test accuracy for bAbI (Task 3) varying exploration ε and batch size. Random exploration is important for both reward-based imitation (RBI) and forward prediction (FP).
[Figure 8 here: six panels of training epoch vs. test accuracy curves with the same layout and panel titles as Figure 3; only the caption is recoverable from the source.]

Figure 8: Training epoch vs. test accuracy for bAbI (Task 4) varying exploration ε and batch size. Random exploration is important for both reward-based imitation (RBI) and forward prediction (FP).

[Figure 9 here: four panels of training epoch vs. test accuracy curves ("Random Exploration for RBI", "Random Exploration for FP", "RBI (eps=0.5) Varying Batch Size", "Comparing RBI, FP and REINFORCE"); only the panel titles and caption are recoverable from the source.]

Figure 9: WikiMovies: Training epoch vs. test accuracy on Task 2 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.

[Figure 10 here: four panels with the same layout and panel titles as Figure 9; only the caption is recoverable from the source.]

Figure 10: WikiMovies: Training epoch vs. test accuracy on Task 3 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.

[Figure 11 here: four panels with the same layout and panel titles as Figure 9; only the caption is recoverable from the source.]

Figure 11: WikiMovies: Training epoch vs. test accuracy on Task 4 varying (top left panel) exploration rate ε while setting batch size to 32 for RBI, (top right panel) for FP, (bottom left) batch size for RBI, and (bottom right) comparing RBI, REINFORCE and FP setting ε = 0.5. The model is robust to the choice of batch size. RBI and REINFORCE perform comparably.
[Figure 12 here: four panels of training epoch vs. test accuracy curves, all titled "FP (eps=0.5) Varying Batch Size"; only the panel titles and caption are recoverable from the source.]

Figure 12: WikiMovies: Training epoch vs. test accuracy with varying batch size for FP on Task 2 (top left panel), 3 (top right panel), 4 (bottom left panel) and 6 (bottom right panel), setting ε = 0.5. The model is robust to the choice of batch size."}, {"section_index": "13", "section_name": "ADDITIONAL EXPERIMENTS FOR MECHANICAL TURK SETUP", "section_text": "In the experiment in Section 5.2 we conducted experiments with real human feedback. Here, we compare this to a form of synthetic feedback, mostly as a sanity check, but also to see how much improvement we can get if the signal is simpler and cleaner (as it is synthetic). We hence constructed synthetic feedback for the 10,000 responses, using either Task 2 (positive or negative feedback), Task 3 (answers provided by the teacher) or a mix (Task 2+3) where we use one or the other for each example (50% chance of each). The latter makes the synthetic data have a mixed setup of responses, which more closely mimics the real data case. The results are given in Table 4. The RBI+FP combination is better using the synthetic data than the real data with Task 2+3 or Task 3, which is to be expected, but the real data is competitive, despite the difficulty of dealing with its lexical and semantic variability. The real data is better than using Task 2 synthetic data.

Table 4: Incorporating Feedback From Humans via Mechanical Turk: comparing real human feedback to synthetic feedback. Textual feedback is provided for 10,000 model predictions (from a model trained with 1k labeled training examples), and additional sparse binary rewards (fraction r of examples have rewards). We compare real feedback (rows 2 and 3) to synthetic feedback when using FP or RBI+FP (rows 4 and 5).

For comparison purposes, we also ran a supervised (imitation learning) MemN2N on different sized training sets of Turker-authored questions with gold annotated labels (so, there are no numerical rewards or textual feedback; this is a pure supervised setting). The results are given in Table 5. They indicate that RBI+FP and even FP alone get close to the performance of fully supervised learning.

Train data size     1k     5k     10k    20k    60k
Supervised MemN2N   0.333  0.429  0.476  0.526  0.599

Table 5: Supervised (imitation learning) MemN2N accuracy for different training set sizes.

We conducted experiments with an additional iteration of data collection for the case of binary rewards and textual feedback using the synthetic Task 2+3 mix. We selected the best model from the previous training, using RBI+FP with r = 1, which previously gave a test accuracy of 0.478 (see Table 4). Using that model as a predictor, we collected an additional 10,000 training examples.
Using that model as a predictor, we collected an additional 10,000 training examples\nTrain data size. 1 k 5k 10k 20k 60k Supervised MemN2N 0.333 0.429 0.476 0.526 0.599\nr = 0 r = 0.1 r = 0.5 r =1 e=0 0.499 0.502 0.501 0.502 e = 0.1 0.494 0.496 0.501 0.502 e = 0.25 0.493 0.495 0.496 0.499 e = 0.5 0.501 0.499 0.501 0.504 e =1 0.497 0.497 0.498 0.497\nWe then continue to train our model using the original 1k+10k training set, plus the additional 10k. As before, we report the test accuracy varying r on the additional collected set. We also report the performance from varying e, the proportion of random exploration of predictions on the new set.. The results are reported in Table [6] Overall, performance is improved in the second iteration, with. slightly better performance for large r and e = O.5. However, the improvement is mostly invariant to those parameters, likely because FP takes advantage of feedback from incorrect predictions in any Case."}] |
Hk95PK9le | [{"section_index": "0", "section_name": "DEEP BIAFFINE ATTENTION FOR NEURAL DEPENDENCY PARSING", "section_text": "Timothy Dozat
tdozat@stanford.edu
This paper builds off recent work from Kiperwasser & Goldberg (2016) using neural attention in a simple graph-based dependency parser. We use a larger but more thoroughly regularized parser than other recent BiLSTM-based approaches, with biaffine classifiers to predict arcs and labels. Our parser gets state of the art or near state of the art performance on standard treebanks for six different languages, achieving 95.7% UAS and 94.1% LAS on the most popular English PTB dataset. This makes it the highest-performing graph-based parser on this benchmark, outperforming Kiperwasser & Goldberg (2016) by 1.8% and 2.2%, and comparable to the highest performing transition-based parser (Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. We also show which hyperparameter choices had a significant effect on parsing accuracy, allowing us to achieve large gains over other graph-based approaches."}, {"section_index": "1", "section_name": "Christopher D.
Manning", "section_text": "manning@stanford.edu"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Dependency parsers--which annotate sentences in a way designed to be easy for humans and computers alike to understand--have been found to be extremely useful for a sizable number of NLP tasks, especially those involving natural language understanding in some way (Bowman et al., 2016; Angeli et al., 2015; Levy & Goldberg, 2014; Toutanova et al., 2016; Parikh et al., 2015). However, frequent incorrect parses can severely inhibit final performance, so improving the quality of dependency parsers is needed for the improvement and success of these downstream tasks.

The current state-of-the-art transition-based neural dependency parser (Kuncoro et al., 2016) substantially outperforms many much simpler neural graph-based parsers. We modify the neural graph-based approach first proposed by Kiperwasser & Goldberg (2016) in a few ways to achieve competitive performance: we build a network that's larger but uses more regularization; we replace the traditional MLP-based attention mechanism and affine label classifier with biaffine ones; and rather than using the top recurrent states of the LSTM in the biaffine transformations, we first put them through MLP operations that reduce their dimensionality. Furthermore, we compare models trained with different architectures and hyperparameters to motivate our approach empirically. The resulting parser maintains most of the simplicity of neural graph-based approaches while approaching the performance of the SOTA transition-based one.

Figure 1: A dependency tree parse for Casey hugged Kim, including part-of-speech tags and a special root token. Directed edges (or arcs) with labels (or relations) connect the verb to the root and the arguments to the verb head.

Transition-based parsers--such as shift-reduce parsers--parse sentences from left to right, maintaining a "buffer" of words that have not yet been parsed and a "stack" of words whose head has not been seen or whose dependents have not all been fully parsed. At each step, transition-based parsers can access and manipulate the stack and buffer and assign arcs from one word to another. One can then train any multi-class machine learning classifier on features extracted from the stack, buffer and previous arc actions in order to predict the next action.

Chen & Manning (2014) make the first successful attempt at incorporating deep learning into a transition-based dependency parser. At each step, the (feedforward) network assigns a probability to each action the parser can take based on word, tag, and label embeddings from certain words on the stack and buffer.
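To illustrate the stack/buffer machinery described above, here is a toy arc-standard shift-reduce loop in Python; the `oracle` argument stands in for the trained action classifier, and the transition inventory shown is one standard choice, not necessarily the exact system used by any of the cited parsers.

```python
def parse_arc_standard(words, oracle):
    """Toy shift-reduce (arc-standard) parsing loop. `oracle(stack, buffer)`
    picks the next action; arcs are (head, dependent) pairs of word indices."""
    stack, buffer, arcs = [0], list(range(1, len(words) + 1)), []  # 0 = ROOT
    while buffer or len(stack) > 1:
        action = oracle(stack, buffer)
        if action == "shift" and buffer:
            stack.append(buffer.pop(0))          # move next word onto the stack
        elif action == "left-arc" and len(stack) > 1:
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))        # second-from-top depends on top
        elif action == "right-arc" and len(stack) > 1:
            dep = stack.pop()
            arcs.append((stack[-1], dep))        # top depends on what is below it
        else:
            break
    return arcs

# "Casey hugged Kim": attach Casey and Kim to "hugged", then "hugged" to ROOT.
actions = iter(["shift", "shift", "left-arc", "shift", "right-arc", "right-arc"])
arcs = parse_arc_standard(["Casey", "hugged", "Kim"], lambda s, b: next(actions))
```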
and we apply dimension-reducing MLPs to each recurrent output vector r_i before applying the biaffine transformation.¹ The choice of biaffine rather than bilinear or MLP mechanisms makes the classifiers in our model analogous to traditional affine classifiers, which use an affine transformation over a single LSTM output state r_i (or other vector input) to predict the vector of scores s_i for all classes (1). We can think of the proposed biaffine attention mechanism as being a traditional affine classifier, but using a (d × d) linear transformation U^(1) of the stacked LSTM output R in place of the weight matrix W and a (d × 1) transformation Ru^(2) for the bias term b (2):

    s_i = W r_i + b    (1)    Fixed-class affine classifier
    s_i = R U^(1) r_i + R u^(2)    (2)    Variable-class biaffine classifier
    s_i^(label) = r_{y_i}^T U^(1) r_i + (r_{y_i} ⊕ r_i)^T U^(2) + b    (3)    Fixed-class biaffine classifier

¹In this paper we follow the convention of using lowercase italic letters for scalars and indices, lowercase bold letters for vectors, uppercase italic letters for matrices, and uppercase bold letters for higher-order tensors. We also maintain this notation when indexing; so row i of matrix R would be represented as r_i.
Figure 2: BiLSTM with deep biaffine attention to score each possible head for each dependent, applied to the sentence "Casey hugged Kim". We reverse the order of the biaffine transformation here for clarity.
Applying smaller MLPs to the recurrent output states before the biaffine classifier has the advantage of stripping away information not relevant to the current decision. That is, every top recurrent state r_i will need to carry enough information to identify word i's head, find all its dependents, exclude all its non-dependents, assign itself the correct label, and assign all its dependents their correct labels, as well as transfer any relevant information to the recurrent states of words before and after it. Thus r_i necessarily contains significantly more information than is needed to compute any individual score, and training on this superfluous information needlessly reduces parsing speed and increases the risk of overfitting. Reducing dimensionality and applying a nonlinearity (4)-(6) addresses both of these problems. We call this a deep bilinear attention mechanism, as opposed to shallow bilinear attention, which uses the recurrent states directly.

    h_i^(arc-dep) = MLP^(arc-dep)(r_i)    (4)
    h_i^(arc-head) = MLP^(arc-head)(r_i)    (5)
    s_i^(arc) = H^(arc-head) U^(1) h_i^(arc-dep) + H^(arc-head) u^(2)    (6)

In addition to being arguably simpler than the MLP-based approach (involving one bilinear layer rather than two linear layers and a nonlinearity), this has the conceptual advantage of directly modeling both the prior probability of a word j receiving any dependents in the term r_j^T u^(2) and the likelihood of j receiving a specific dependent i in the term r_j^T U^(1) r_i. Analogously, we also use a biaffine classifier to predict dependency labels given the gold or predicted head y_i (3).
We apply MLPs to the recurrent states before using them in the label classifier as well. As with other graph-based models, the predicted tree at training time is the one where each word is a dependent of its highest scoring head (although at test time we ensure that the parse is a well-formed tree via the MST algorithm).
Aside from architectural differences between ours and the other graph-based parsers, we make a number of hyperparameter choices that allow us to outperform theirs, laid out in Table 1. We use 100-dimensional uncased word vectors² and POS tag vectors; three BiLSTM layers (400 dimensions in each direction); and 500- and 100-dimensional ReLU MLP layers. We also apply dropout at every stage of the model: we drop words and tags (independently); we drop nodes in the LSTM layers (input and recurrent connections), applying the same dropout mask at every recurrent timestep (cf. the Bayesian dropout of Gal & Ghahramani (2015)); and we drop nodes in the MLP layers and classifiers, likewise applying the same dropout mask at every timestep. We optimize the network with annealed Adam (Kingma & Ba, 2014) for about 50,000 steps, rounded up to the nearest epoch."}, {"section_index": "3", "section_name": "4.1 DATASETS", "section_text": "We show test results for the proposed model on the English Penn Treebank, converted into Stanford Dependencies using both version 3.3.0 and version 3.5.0 of the Stanford Dependency converter (PTB-SD 3.3.0 and PTB-SD 3.5.0); the Chinese Penn Treebank; and the CoNLL 09 shared task dataset,³ following standard practices for each dataset. We omit punctuation from evaluation only for the PTB-SD and CTB. For the English PTB-SD datasets, we use POS tags generated from the Stanford POS tagger (Toutanova et al., 2003); for the Chinese PTB dataset we use gold tags; and for the CoNLL 09 dataset we use the provided predicted tags. Our hyperparameter search was done with the PTB-SD 3.5.0 validation dataset in order to minimize overfitting to the more popular PTB-SD 3.3.0 benchmark, and in our hyperparameter analysis in the following section we report performance on the PTB-SD 3.5.0 test set, shown in Tables 2 and 3."}, {"section_index": "4", "section_name": "4.2.1 ATTENTION MECHANISM", "section_text": "We examined the effect of different classifier architectures on accuracy and performance. What we see is that the deep bilinear model outperforms the others with respect to both speed and accuracy. The model with shallow bilinear arc and label classifiers gets the same unlabeled performance as the deep model with the same settings, but because the label classifier is much larger ((801 × c × 801) as opposed to (101 × c × 101)), it runs much slower and overfits. One way to decrease this overfitting is by increasing the MLP dropout, but that of course doesn't change parsing speed; another way is to decrease the recurrent size to 300, but this hinders unlabeled accuracy without increasing parsing speed up to the same levels as our deeper model. We also implemented the MLP-based approach to attention and classification used in Kiperwasser & Goldberg (2016);⁴ we found this version to likewise be somewhat slower and significantly underperform the deep biaffine approach in both labeled and unlabeled accuracy.
²We compute a "trained" embedding matrix composed of words that occur at least twice in the training dataset and add these embeddings to their corresponding pretrained embeddings. Any words that don't occur in either embedding matrix are replaced with a separate OOV token.
³We exclude the Japanese dataset from our evaluation because we do not have access to it.
⁴In the version of TensorFlow we used, the model's memory requirements during training exceeded the available memory on a single GPU when default settings were used, so we reduced the MLP hidden size to 200.
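Since the comparisons in this section are between classifier architectures, a concrete toy version of the deep biaffine arc scorer of equations (4)-(6) may be useful. The NumPy sketch below is our own illustration under assumed names and toy sizes, not the paper's implementation; the real model computes these scores batched inside the network, with learned parameters and the dropout described above.

```python
import numpy as np

def mlp(X, W, b):
    # One ReLU layer applied row-wise: (n, k) -> (n, d); cf. eq. (4)/(5).
    return np.maximum(0.0, X @ W + b)

def biaffine_arc_scores(R, prm):
    # Dimension-reducing MLPs on the recurrent states (eq. 4-5).
    H_dep = mlp(R, prm["W_dep"], prm["b_dep"])     # (n, d)
    H_head = mlp(R, prm["W_head"], prm["b_head"])  # (n, d)
    # Eq. (6): bilinear head-dependent term plus a head-only prior term.
    bilinear = H_dep @ prm["U1"] @ H_head.T        # (n, n): row i holds head scores for word i
    head_prior = H_head @ prm["u2"]                # (n,): how likely each word is to take dependents
    return bilinear + head_prior[None, :]

rng = np.random.default_rng(0)
n, k, d = 5, 8, 4                                  # toy sentence length, BiLSTM size, MLP size
R = rng.normal(size=(n, k))                        # stand-in for the stacked BiLSTM outputs
prm = {"W_dep": rng.normal(size=(k, d)), "b_dep": np.zeros(d),
       "W_head": rng.normal(size=(k, d)), "b_head": np.zeros(d),
       "U1": rng.normal(size=(d, d)), "u2": rng.normal(size=d)}
S = biaffine_arc_scores(R, prm)
print(S.shape)           # (5, 5)
print(S.argmax(axis=1))  # greedy head choice per word, as at training time
```

Note how the head-prior term costs only one extra vector product; this is the conceptual advantage over MLP attention discussed in Section 3, obtained essentially for free.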
Table 1: Model hyperparameters
Table 2: Test accuracy and speed on PTB-SD 3.5.0. Statistically significant differences are marked with an asterisk.
Table 3: Test accuracy on PTB-SD 3.5.0. Statistically significant differences are marked with an asterisk.
We also examine more closely how network size influences speed and accuracy. In Kiperwasser & Goldberg's (2016) model, the network uses 2 layers of 125-dimensional bidirectional LSTMs; in Hashimoto et al.'s (2016) model, it has one layer of 100-dimensional bidirectional LSTMs dedicated to parsing (two lower layers are also trained on other objectives); and Cheng et al.'s (2016) model has one layer of 368-dimensional GRU cells. We find that using three or four layers gets significantly better performance than two layers, and increasing the LSTM sizes from 200 to 300 or 400 dimensions likewise significantly improves performance.
GRU cells have been promoted as a faster and simpler alternative to LSTM cells, and are used in the approach of Cheng et al. (2016); however, in our model they drastically underperformed LSTM cells. We also implemented the coupled input-forget gate LSTM cells (Cif-LSTM) suggested by Greff et al. (2015),⁵ finding that while the resulting model still slightly underperforms the more popular LSTM cells, the difference between the two is much smaller. Additionally, because the gate and candidate cell activations can be computed simultaneously with one matrix multiplication, the Cif-LSTM model is faster than the GRU version even though they have the same number of parameters. We hypothesize that the output gate in the Cif-LSTM model allows it to maintain a sparse recurrent output state, which helps it adapt to the high levels of dropout needed to prevent overfitting in a way that GRU cells are unable to do.
⁵In addition to using a coupled input-forget gate, we remove the first tanh nonlinearity, which is no longer needed when using a coupled gate.

    Input Dropout                          Adam
    Model             UAS     LAS          Model        UAS     LAS
    Default           95.75   94.22        β₂ = .9      95.75   94.22
    No word dropout   95.74   94.08*       β₂ = .999    95.53*  93.91*
    No tag dropout    95.28*  93.60*
    No tags           95.77   93.91*
"}, {"section_index": "5", "section_name": "4.2.4 EMBEDDING DROPOUT", "section_text": "Because we increase the parser's power, we also have to increase its regularization. In addition to using relatively extreme dropout in the recurrent and MLP layers mentioned in Table 1, we also regularize the input layer. We drop 33% of words and 33% of tags during training: when one is dropped, the other is scaled by a factor of two to compensate, and when both are dropped together, the model simply gets an input of zeros. Models trained with only word or tag dropout but not both wind up significantly overfitting, hindering label accuracy and--in the latter case--attachment accuracy. Interestingly, not using any tags at all actually results in better performance than using tags without dropout.
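The word/tag dropout scheme just described is compact but easy to get subtly wrong, so here is a minimal sketch of it. This is our own illustrative NumPy version with assumed names; in the actual parser the same logic is applied to embedding lookups inside the training graph.

```python
import numpy as np

def input_dropout(word_emb, tag_emb, p=1.0 / 3.0, rng=None):
    # Drop each word and each tag embedding independently with probability p.
    rng = rng or np.random.default_rng()
    keep_w = rng.random(word_emb.shape[0]) >= p
    keep_t = rng.random(tag_emb.shape[0]) >= p
    # When exactly one of the pair survives, scale it by two to compensate;
    # when both are dropped, the model simply receives zeros at that position.
    scale_w = keep_w * np.where(keep_w & ~keep_t, 2.0, 1.0)
    scale_t = keep_t * np.where(keep_t & ~keep_w, 2.0, 1.0)
    return word_emb * scale_w[:, None], tag_emb * scale_t[:, None]

rng = np.random.default_rng(0)
w, t = input_dropout(rng.normal(size=(6, 4)), rng.normal(size=(6, 4)), rng=rng)
print(w.round(2))  # each row is intact, doubled, or zeroed out
```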
"}, {"section_index": "6", "section_name": "4.2.5 OPTIMIZER", "section_text": "We choose to optimize with Adam (Kingma & Ba, 2014), which (among other things) keeps a moving average of the L2 norm of the gradient for each parameter throughout training and divides the gradient for each parameter by this moving average, ensuring that the magnitude of the gradients will on average be close to one. However, we find that the value for β₂ recommended by Kingma & Ba--which controls the decay rate for this moving average--is too high for this task (and we suspect more generally). When this value is very large, the magnitude of the current update is heavily influenced by the larger magnitude of gradients very far in the past, with the effect that the optimizer can't adapt quickly to recent changes in the model. Thus we find that setting β₂ to .9 instead of .999 makes a large positive impact on final performance."}, {"section_index": "7", "section_name": "4.3 RESULTS", "section_text": "Our model gets nearly the same UAS performance on PTB-SD 3.3.0 as the current SOTA model from Kuncoro et al. (2016) in spite of its substantially simpler architecture, and gets SOTA UAS performance on CTB 5.1⁷ as well as SOTA performance on all CoNLL 09 languages. It is worth noting that the CoNLL 09 datasets contain many non-projective dependencies, which are difficult or impossible for transition-based--but not graph-based--parsers to predict. This may account for some of the large, consistent difference between our model and Andor et al.'s (2016) transition-based model applied to these datasets.

                                              English PTB-SD 3.3.0    Chinese PTB 5.1
              Model                             UAS      LAS            UAS     LAS
  Transition  Ballesteros et al. (2016)         93.56    91.42          87.65   86.21
              Andor et al. (2016)               94.61    92.79          -       -
              Kuncoro et al. (2016)             95.8     94.6           -       -
  Graph       Kiperwasser & Goldberg (2016)     93.9     91.9           87.6    86.1
              Cheng et al. (2016)               94.10    91.49          88.1    85.7
              Hashimoto et al. (2016)           94.67    92.90          -       -
              Deep Biaffine                     95.74    94.08          89.30   88.23

Table 4: Results on the English PTB and Chinese PTB parsing datasets
Table 5: Results on the CoNLL '09 shared task datasets
⁷We'd like to thank Zhiyang Teng for finding a bug in the original code that affected the CTB 5.1 dataset.
Where our model appears to lag behind the SOTA model is in LAS, indicating one of a few possibilities. Firstly, it may be the result of inefficiencies or errors in the GloVe embeddings or POS tagger, in which case using alternative pretrained embeddings or a more accurate tagger might improve label classification. Secondly, the SOTA model is specifically designed to capture phrasal compositionality; so another possibility is that ours doesn't capture this compositionality as effectively, and that this results in a worse label score. Similarly, it may be the result of a more general limitation of graph-based parsers, which have access to less explicit syntactic information than transition-based parsers when making decisions. Addressing these latter two limitations would require a more innovative architecture than the relatively simple one used in current neural graph-based parsers."}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "In this paper we proposed using a modified version of bilinear attention in a neural dependency parser that increases parsing speed without hurting performance. We showed that our larger but more regularized network outperforms other neural graph-based parsers and gets comparable performance to the current SOTA transition-based parser. We also provided empirical motivation for the proposed architecture and configuration over similar ones in the existing literature. Future work will involve exploring ways of bridging the gap between labeled and unlabeled accuracy and augmenting the parser with a smarter way of handling out-of-vocabulary tokens for morphologically richer languages."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio.
Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations, 2014.
Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. Training with exploration improves a greedy stack-LSTM parser. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2016.
Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. ACL 2016, 2016.
Danqi Chen and Christopher D. Manning. A fast and accurate dependency parser using neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 740-750, 2014.
Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. Bi-directional attention with agreement for dependency parsing. arXiv preprint arXiv:1608.02076, 2016.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. International Conference on Machine Learning, 2015.
Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. A joint many-task model: Growing a neural network for multiple NLP tasks. arXiv preprint arXiv:1611.01587, 2016.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2014.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. Empirical Methods in Natural Language Processing, 2015.
Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pp. 173-180. Association for Computational Linguistics, 2003.
Kristina Toutanova, Xi Victoria Lin, and Wen-tau Yih. Compositional learning of embeddings for relation paths in knowledge bases and text. In ACL, 2016.
David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. Structured training for neural network transition-based parsing. Annual Meeting of the Association for Computational Linguistics, 2015."}]
BJRIA3Fgg | [{"section_index": "0", "section_name": "MODULARIZED MORPHING OF NEURAL NETWORKS", "section_text": "Changhu Wang Microsoft Research Beijing, China, 100080 chw@microsoft.con\nIn this work we study the problem of network morphism, an effective learning. scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morph-. ing types on the layer level were addressed, we target at the central problem of net-. work morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network. To simplify the representation of a. network, we abstract a module as a graph with blobs as vertices and convolutional. layers as edges, based on which the morphing process is able to be formulated as a. graph transformation problem. Two atomic morphing operations are introduced to. compose the graphs, based on which modules are classified into two families, i.e.,. simple morphable modules and complex modules. We present practical morphing. solutions for both of these two families, and prove that any reasonable module can. be morphed from a single convolutional layer. Extensive experiments have been. conducted based on the state-of-the-art ResNet on benchmark datasets, and the effectiveness of the proposed solution has been verified.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep convolutional neural networks have continuously demonstrated their excellent performance on diverse computer vision problems. In image classification, the milestones of such networks ca be roughly represented by LeNet (LeCun et al.f|1989), AlexNet (Krizhevsky et al.2012), VGG ne (Simonyan & Zisserman 2014), GoogLeNet (Szegedy et al.2014), and ResNet (He et al.]2015 with networks becoming deeper and deeper. However, the architectures of these network are signifi cantly altered and hence are not backward-compatible. Considering a life-long learning system, it highly desired that the system is able to update itself from the original version established initiall and then evolve into a more powerful one, rather than re-learning a brand new one from scratch.\nNetwork morphism (Wei et al. 2016) is an effective way towards such an ambitious goal. It ca. morph a well-trained network to a new one with the knowledge entirely inherited, and hence : able to update the original system to a compatible and more powerful one based on further training. Network morphism is also a performance booster and architecture explorer for convolutional neura. networks, allowing us to quickly investigate new models with significantly less computational an. human resources. However, the network morphism operations proposed in (Wei et al.|2016), ir. cluding depth, width, and kernel size changes, are quite primitive and have been limited to the leve. of layer in a network. For practical applications where neural networks usually consist of dozens c. even hundreds of layers, the morphing space would be too large for researchers to practically desig. the architectures of target morphed networks, when based on these primitive morphing operatior. 
Only.\nDifferent from previous work, we investigate in this research the network morphism from a higher level of viewpoint, and systematically study the central problem of network morphism on the mod- ule level, i.e., whether and how a convolutional layer can be morphed into an arbitrary module1 where a module refers to a single-source, single-sink acyclic subnet of a neural network. With this\ntTao Wei performed this work while being an intern at Microsoft Research Asia ' Although network morphism generally does not impose constraints on the architecture of the child networ in this work we limit the investigation to the expanding mode.\nChang Wen Chen. University at Buffalo. Buffalo. NY 14260 chencw@buffalo\nChang Wen Chen"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "modularized network morphing, instead of morphing in the layer level where numerous variations exist in a deep neural network, we focus on the changes of basic modules of networks, and explore the morphing space in a more efficient way. The necessities for this study are two folds. First, we wish to explore the capability of the network morphism operations and obtain a theoretical upper bound for what we are able to do with this learning scheme. Second, modern state-of-the-art convo- lutional neural networks have been developed with modularized architectures (Szegedy et al.]2014 He et al.] 2015), which stack the construction units following the same module design. It is highly desired that the morphing operations could be directly applied to these networks.\nTo study the morphing capability of network morphism and figure out the morphing process, we. introduce a simplified graph-based representation for a module. Thus, the network morphing process. can be formulated as a graph transformation process. In this representation, the module of a neura network is abstracted as a directed acyclic graph (DAG), with data blobs in the network representec. as vertices and convolutional layers as edges. Furthermore, a vertex with more than one outdegree (or indegree) implicitly includes a split of multiple copies of blobs (or a joint of addition). Indeed the proposed graph abstraction suffers from the problem of dimension compatibility of blobs, for. different kernel filters may result in totally different blob dimensions. We solve this problem by. extending the blob and filter dimensions from finite to infinite, and the convergence properties wil. also be carefully investigated.\nTwo atomic morphing operations are adopted as the basis for the proposed graph transformation based on which a large family of modules can be transformed from a convolutional layer. Thi family of modules are called simple morphable modules in this work. A novel algorithm is propose to identify the morphing steps by reducing the module into a single convolutional layer. For an module outside the simple morphable family, i.e., complex module, we first apply the same reductioi process and reduce it to an irreducible module. A practical algorithm is then proposed to solve for the network morphism equation of the irreducible module. Therefore, we not only verify th morphability to an arbitrary module, but also provide a unified morphing solution. This demonstrates the generalization ability and thus practicality of this learning scheme.\nExtensive experiments have been conducted based on ResNet (He et al.||2015) to show the effective. ness of the proposed morphing solution. With only 1.2x or less computation, the morphed network. 
can achieve up to 25% relative performance improvement over the original ResNet. Such an improvement is significant in the sense that the morphed 20-layered network is able to achieve an error rate of 6.60%, which is even better than a 110-layered ResNet (6.61%) on the CIFAR10 dataset, with only around 1/5 of the computational cost. It is also exciting that the morphed 56-layered network is able to achieve a 5.37% error rate, which is even lower than those of ResNet-110 (6.61%) and ResNet-164 (5.46%). The effectiveness of the proposed learning scheme has also been verified on the CIFAR100 and ImageNet datasets."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Knowledge Transfer. Network morphism originated from knowledge transferring for convolutional neural networks. Early attempts were only able to transfer partial knowledge of a well-trained network. For example, a series of model compression techniques (Bucilu et al., 2006; Ba & Caruana, 2014; Hinton et al., 2015; Romero et al., 2014) were proposed to fit a lighter network to predict the output of a heavier network. Pre-training (Simonyan & Zisserman, 2014) was adopted to pre-initialize certain layers of a deeper network with weights learned from a shallower network. However, network morphism requires the knowledge being fully transferred, and existing work includes Net2Net (Chen et al., 2015) and NetMorph (Wei et al., 2016). Net2Net achieved this goal by padding identity mapping layers into the neural network, while NetMorph decomposed a convolutional layer into two layers by deconvolution. Note that the network morphism operations in (Chen et al., 2015; Wei et al., 2016) are quite primitive and at a micro-scale layer level. In this research, we study the network morphism at a meso-scale module level, and in particular, we investigate its morphing capability.
Modularized Network Architecture. The evolution of convolutional neural networks has been from sequential to modularized. For example, LeNet (LeCun et al., 1998), AlexNet (Krizhevsky et al., 2012), and VGG net (Simonyan & Zisserman, 2014) are sequential networks, and their difference is primarily on the number of layers, which is 5, 8, and up to 19 respectively. However, recently
Figure 1: Illustration of atomic morphing types. (a) One convolutional layer is morphed into two convolutional layers; (b) TYPE-I and TYPE-II atomic morphing types.
proposed networks, such as GoogLeNet (Szegedy et al., 2014; 2015) and ResNet (He et al., 2015), follow a modularized architecture design, and have achieved the state-of-the-art performance. This is why we wish to study network morphism at the module level, so that its operations are able to directly apply to these modularized network architectures."}, {"section_index": "4", "section_name": "3 NETWORK MORPHISM VIA GRAPH ABSTRACTION", "section_text": "For a 2D deep convolutional neural network (DCNN), as shown in Fig. 1a, the convolution is defined by equation (1) below; in a network morphism in depth (Fig. 1a), the morphed filters compose into the original one, and the filter sizes must satisfy

    G_l(c_j, c_i) = Σ_{c_l} F_l(c_l, c_i) * F_{l+1}(c_j, c_l)    (2)

    max(C_l C_i K_1², C_j C_l K_2²) ≥ C_j C_i (K_1 + K_2 - 1)²    (3)

In this section, we present a systematic study on the capability of the network morphism learning scheme. We shall verify that a convolutional layer is able to be morphed into any single-source, single-sink DAG subnet, named as a module here.
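Before developing that study, a small numerical check can make the depth-morphing equations above concrete: composing the two morphed filters per equation (2) yields a single filter of kernel size K_1 + K_2 - 1 whose output matches the two-layer network. The sketch below is our own (it assumes true convolution rather than cross-correlation, 'valid' boundary handling, and SciPy's convolve2d); all names and toy sizes are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def compose_filters(F1, F2):
    # G(c_j, c_i) = sum_{c_l} F2(c_j, c_l) * F1(c_l, c_i), with * the full
    # 2D convolution, so G has kernel size K1 + K2 - 1 (cf. eq. 2 and 3).
    Cl, Ci, K1, _ = F1.shape
    Cj, _, K2, _ = F2.shape
    G = np.zeros((Cj, Ci, K1 + K2 - 1, K1 + K2 - 1))
    for j in range(Cj):
        for i in range(Ci):
            for l in range(Cl):
                G[j, i] += convolve2d(F2[j, l], F1[l, i], mode="full")
    return G

def conv_blob(B, F):
    # Multi-channel convolution of a blob: (C_in, H, W) -> (C_out, H', W').
    return np.stack([sum(convolve2d(B[i], F[j, i], mode="valid")
                         for i in range(B.shape[0])) for j in range(F.shape[0])])

rng = np.random.default_rng(0)
F1 = rng.normal(size=(8, 3, 3, 3))   # (C_l, C_i, K1, K1)
F2 = rng.normal(size=(4, 8, 3, 3))   # (C_j, C_l, K2, K2)
B = rng.normal(size=(3, 12, 12))     # input blob
G = compose_filters(F1, F2)
print(G.shape)                       # (4, 3, 5, 5): kernel size K1 + K2 - 1 = 5
print(np.allclose(conv_blob(conv_blob(B, F1), F2), conv_blob(B, G)))  # True
```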
We shall also present the corresponding morph ing algorithms.\nFor simplicity, we first consider convolutional neural networks with only convolutional layers. All. other layers, including the non-linearity and batch normalization layers, will be discussed later in this paper.\nBj(cj)= Bi(ci) * Gt(Cj,Ci) Ci\nIn a network morphism process, the convolutional layer Gj in the parent network is morphed into two convolutional layers F and F+1 (Fig. 1a), where the filters F and F+1 are 4D tensors of shapes (C, C, K1, K1) and (C;, Ci, K2, K2). This process should follow the morphism equation:\nFor simplicity, we shall denote equations (1) and (2) as B; = G B; and G = Fi+1 Fi, where is a non-communicative multi-channel convolution operator. We can also rewrite equation (3) as max(|F]. [F+1D) > [Gi], where | * | measures the size of the convolutional filter.\nGenerally speaking, Gt is a 4D tensor of shape (C, Ct, KH, KW), where convolutional kernel sizes for. blob height and width are not necessary to be the same. However, in order to simplify the notations, we assume that K KW, but the claims and theorems in this paper apply equally well when they are different.."}, {"section_index": "5", "section_name": "3.2 ATOMIC NETWORK MORPHISM", "section_text": "We start with the simplest cases. Two atomic morphing types are considered, as shown in Fig.1b 1. a convolutional layer is morphed into two convolutional layers (TYPE-I); 2) a convolutional layer i morphed into two-way convolutional layers (TYPE-II). For the TYPE-I atomic morphing operation. equation (2) is satisfied, while For TYPE-II, the filter split is set to satisfy.\nIn addition, for TYPE-II, at the source end, the blob is split with multiple copies; while at the sink end, the blobs are joined by addition."}, {"section_index": "6", "section_name": "3.3 GRAPH ABSTRACTION", "section_text": "Based on this abstraction, we formally introduce the following definition for modular network mor phism:\nDefinition 1. Let Mo = ({s, t}, eo) represent the graph with only a single edge eo that connects the. source vertex s and sink vertex t. M = (V, E) represents any single-source, single-sink DAG with the same source vertex s and the same sink vertex t. We call such an M as a module. If there exists a process that we are able to morph Mo to M, then we say that module M is morphable, and the morphing process is called modular network morphism..\nFor each modular network morphin g, a modular network morphism equation is associated\nDefinition 2. Each module essentially corresponds to a function from s to t, which is called a module function. For a modular network morphism process from Mo to M, the equation that guarantees the module function unchanged is called modular network morphism equation.\nIt is obvious that equations (2) and (4) are the modular network morphism equations for TYPE-I and TYPE-II atomic morphing types. In general, the modular network morphism equation for a module M is able to be written as the sum of all convolutional filter compositions, in which each composition is actually a path from s to t in the module M. Let {(Fp,1, Fp,2,... , Fp,ip) : p = 1, ... , P, and ip is the length of path p} be the set of all such paths represented by the convolutional filters. 
Then the modular network morphism equation for module M can be written as\nG=Fp,ip p\nAs an example illustrated in Fig.2a] there are four paths in module (D), and its modular network morphism equation can be written as.\nG=FF+F6*F3F+F4*F+F-F2\nOne difficulty in this graph abstraction is in the dimensional compatibility of convolutional filters or. blobs. For example, for the TYPE-II atomic morphing in Fig.1b we have to satisfy Gt = F1 + F2 Suppose that Gi and F2 are of shape (64, 64, 3, 3), while F is (64, 64, 1, 1), they are actually no. addable. Formally, we define the compatibility of modular network morphism equation as follows.\nG=F+F\nTo simplify the representation, we introduce the following graph abstraction for network morphism. For a convolutional neural network, we are able to abstract it as a graph, with the blobs represented. by vertices, and convolutional layers by edges. Formally, a DCNN is represented as a DAG M =- convolutional layer e1 connects two blobs B, and B;, and is associated with a convolutional filter Fi. Furthermore, in this graph, if outdegree(B) > 1, it implicitly means a split of multiple copies:. and if indegree(B) > 1, it is a joint of addition..\nHence, based on this abstraction, modular network morphism can be represented as a graph trans- formation problem. As shown in Fig.2b, module (C) in Fig.2a can be transformed from module Mo by applying the illustrated network morphism operations.\n(A) (B) Module (C) t (C) (D) Module (D) a) Example modules (b) Morphing process for module (C) and (D).\nS + t\nt\nFigure 2: Example modules and morphing processes. (a) Modules (A)-(C) are simple morphable while (D) is not; (b) a morphing process for module (C), while for module (D), we are not able to find such a process.\nDefinition 3. The modular network morphism equation for a module M is compatible if and only if the mathematical operators between the convolutional filters involved in this equation are well- defined.\nIn order to solve this compatibility problem, we need not to assume that blobs { B} and filters {Fi} are finite dimension tensors. Instead they are considered as infinite dimension tensors defined with a finite suppor(3] and we call this as an extended definition. An instant advantage when we adopt this extended definition is that we will no longer need to differentiate Gi and Gt in equation (2), since. Gi is simply a zero-padded version of Gt..\nLemma 4. The operations + and are well-defined for the modular network morphism equation Namely, if F1 and F2 are infinite dimension 4D tensors with finite support, let G = F1 + F2 and. H = F2 F1, then both G and H are uniquely defined and also have finite support..\nCorollary 5. The modular network morphism equation for any module M is always compatible 1 the filters involved in M are considered as infinite dimension tensors with finite support..\nIn this section, we introduce a large family of modules, i.e, simple morphable modules, and ther provide their morphing solutions. We first introduce the following definition:.\nDefinition 6. A module M is simple morphable if and only if it is able to be morphed with only combinations of atomic morphing operations.\nSeveral example modules are shown in Fig.2a It is obvious that modules (A)-(C) are simple morphable, and the morphing process for module (C) is also illustrated in Fig.2b\nFor a simple morphable module M, we are able to identity a morphing sequence from Mo to. M. 
The algorithm is illustrated in Algorithm 1 The core idea is to use the reverse operations of atomic morphing types to reduce M to Mo. Hence, the morphing process is just the reverse. of the reduction process. In Algorithm1] we use a four-element tuple (M, e1,{e2, e3}, type) to represent the process of morphing edge e1 in module M to {e2, e3} using TYPE-<TYP E> atomic operation. Two auxiliary functions CHEckTyPEI and CHEckTyPeII are further introduced. Both.\n3A support of a function is defined as the set of points where the function value is non-zero, i.e. upport(f) ={x|f(x) 0}\nSketch of Proof. It is quite obvious that this lemma holds for the operator +. For the operator *, if we have this extended definition, the sum in equation (2) will become infinite over the index ct. It is. straightforward to show that this infinite sum converges, and also that H is finitely supported with. respect to the indices c; and ci. Hence H has finite support..\nof them return either FALsE if there is no such atomic sub-module in M, or a morphing tuple. (M, e1, {e2, e3}, type) if there is. The algorithm of CheckTypeI only needs to find a vertex sat-. isfying indegree(B) = outdegree(B) = 1, while CHEckTyPEII looks for the matrix elements > 1 in the adjacent matrix representation of module M..\nProposition 7. Module (D) in Fig. 2a|is not simple morphable\nSketch of Proof. A simple morphable module M is always able to be reverted back to Mo. However for module (D) in Fig.2a] both CHECKTYPEI and CHECKTYPEII return FALSE."}, {"section_index": "7", "section_name": "3.6 MODULAR NETWORK MORPHISM THEOREM", "section_text": "Is there a module not simple morphable? The answer is yes, and an example is the module (D) in Fig.2a A simple try does not work as shown in Fig.2b In fact, we have the following proposition:\nFor a module that is not simple morphable, which is called a complex module, we are able to apply. Algorithm [1to reduce it to an irreducible module M first. For M, we propose Algorithm 2 to solve the modular network morphism equation. The core idea of this algorithm is that, if only one convolutional filter is allowed to change with all others fixed, the modular network morphism. equation will reduce to a linear system. The following argument guarantees the correctness of. Algorithm2\nCorrectness of Algorithm2] Let Gi and {Ft}-1 be the convolutional filter(s) associated with Mg. and M. We further assume that one of {Fi}, e.g., Fj, is larger or equal to Gi, where Gi is the zero-. padded version of Gi (this assumption is a strong condition in the expanding mode). The module network morphism equation for M can be written as.\nG = C1 F C2+ C3\n3x3 3x3 BN ReLU BN Conv Conv Id Input ReLU Output (a) ResNet module: 3x3 3x3 BN ReLU BN Conv Conv 0.51d Input + ReLU Output 1x1 1x1 BN PReLU BN Conv Conv (b) morph_1c1 module: Figure 3: Detailed architectures of the ResNet module and the morph_1c1 module 3x3 3x3 3x3 3x3 3x3 3x3 3x3 3x3 3x3 3x3\nFigure 3: Detailed architectures of the ResNet module and the morph_1c1 module\n3x3 3x3 3x3 3x3 3x3 3x3 3x3 3x3 >0 3x3 3x3 >o 0 0.5 0.5 0.5 1x1 1x1 1x1 >0 1x1 3x3 >0 1x1 3x3 >0 3x3 1x1 1x1\n3x3 3x3 3x3 3x3 3x3 3x3 3x3 3x3 3x3 3x3 0.5 0.5 0.5 1x1 1x1 1x1 1x1 3x3 1x1 3x3 3x3 1x1 1x1 (a) ResNet (b) morph_1c1 (c) morph_3c1 (d) morph_3c3 (e) morph_1c1_2branc\nFigure 4: Sample modules adopted in the proposed experiments. (a) and (b) are the graph abstrac tions of modules illustrated in Fig.3[a) and (b).\nwhere C1, C2, and C3 are composed of other filters {F; : i j}. 
It can be checked that equation (7) is a linear system with |G_l| constraints and |F_j| free variables. Since we have |F_j| ≥ |G_l|, the system is non-deterministic and hence solvable, as random matrices are rarely inconsistent.
For a general module M, whether simple morphable or not, we apply Algorithm 1 to reduce M to an irreducible module M', and then apply Algorithm 2 to M'. Hence we have the following theorem:
Theorem 8. A convolutional layer can be morphed to any module (any single-source, single-sink DAG subnet).
This theorem answers the core question of network morphism, and provides a theoretical upper bound for the capability of this learning scheme.
Besides the convolutional layers, a neural network module typically also involves non-linearity layers and batch normalization layers, as illustrated in Fig. 3. In this section, we shall describe how we handle these layers for modular network morphism.
For the non-linear activation layers, we adopt the solution proposed in (Wei et al., 2016). Instead of directly applying the non-linear activations, we use their parametric forms. Let φ be any non-linear activation function; its parametric form is defined to be

    P-φ = {φ^a}_{a∈[0,1]} = {(1 - a)·φ + a·id}_{a∈[0,1]}

The shape of the parametric form of the non-linear activation is controlled by the parameter a. When a is initialized (a = 1), the parametric form is equivalent to an identity function, and when the value of a has been learned, the parametric form will become a non-linear activation. In Fig. 3b, the non-linear activation for the morphing process is annotated as PReLU to differentiate itself with the
Table 1: Experimental results of networks morphed from ResNet-20, ResNet-56, and ResNet-110 on the CIFAR10 dataset. Results annotated with † are from (He et al., 2015).
Table 2: Comparison results between learning from morphing and learning from scratch for the same network architectures on the CIFAR10 dataset.
The batch normalization layers (Ioffe & Szegedy, 2015) can be represented as

    newdata = gamma · (data - mean) / sqrt(var + eps) + beta

It is obvious that if we set gamma = sqrt(var + eps) and beta = mean, then a batch normalization layer is reduced to an identity mapping layer, and hence it can be inserted anywhere in the network. Although it is possible to calculate the values of gamma and beta from the training data, in this research we adopt another simpler approach by setting gamma = 1 and beta = 0. In fact, the value of gamma can be set to any nonzero number, since the scale is then normalized by the latter batch normalization layer (lower right one in Fig. 3b). Mathematically and strictly speaking, when we set beta = 0, the network function is actually changed. However, since the morphed filters for the convolutional layers are roughly randomized, even though the mean of the data is not strictly zero, it is still approximately zero. Plus, with the fact that the data is then normalized by the latter batch normalization layer, such a small perturbation of the network function can be neglected. In the proposed experiments, only statistical variances in performance are observed for the morphed network when we adopt setting beta to zero.
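As a quick sanity check of the batch normalization identity above, the toy NumPy sketch below (our own naming, not the paper's code) confirms that the data-dependent choice gamma = sqrt(var + eps), beta = mean makes the layer an exact identity mapping, and shows the simpler constant initialization adopted in this work.

```python
import numpy as np

def batch_norm(data, gamma, beta, eps=1e-5):
    # Per-channel normalization, then scale and shift.
    mean, var = data.mean(axis=0), data.var(axis=0)
    return gamma * (data - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=(256, 16))

# Data-dependent choice: exactly recovers the input, i.e., an identity layer.
ident = batch_norm(x, np.sqrt(x.var(axis=0) + 1e-5), x.mean(axis=0))
print(np.allclose(ident, x))  # True

# Simpler morphing choice: gamma = 1, beta = 0; the resulting scale and
# shift are then re-normalized by the following batch norm layer.
y = batch_norm(x, 1.0, 0.0)
print(abs(y.mean()) < 1e-6, abs(y.std() - 1.0) < 1e-2)  # True True
```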
The reason we prefer such an approach to using the training data is that it is easier to implement and also yields slightly better results when we continue to train the morphed network"}, {"section_index": "8", "section_name": "EXPERIMENTAL RESULTS", "section_text": "In this section, we report the results of the proposed morphing algorithms based on current state-of- the-art ResNet (He et al.]|2015), which is the winner of 2015 ImageNet classification task.\nIntermediate Abs. Perf. Rel. Perf. #Params. #Params. Net Arch. Error FLOP (million) Rel. FLOP Phases Improv. Improv. (MB) Rel. resnet20 8.75% 1.048 1x 40.8 1x morph20_1c1 - 7.35% 1.40% 16.0% 1.138 1.09x 44.0 1.08x - 7.10% 1.65% 18.9% morph20_3c1 1.466 1.40x 56.5 1.38x 1c1 6.83% 1.92% 21.9% 1 6.97% 1.78% 20.3% morph20_3c3 1.794 1.71x 69.1 1.69x 1c1, 3c1 6.66% 2.09% 23.9% 1 7.26% 1.49% 17.0% morph20_1c1_2branch 1.227 1.17x 47.1 1.15x 1c1,half 6.60% 2.15% 24.6% resnet56 1 6.97% 3.289 1x 125.7 1x morph56 1c1_half - 5.68% 1.29% 18.5% 3.468 1.05x 132.0 1.05x 1 5.94% 1.03% 14.8% morph56_1c1 3.647 1.11x 138.3 1.10x 1c1_half 5.37% 1.60% 23.0% resnet110 1 6.61%0.16 1 6.649 1x 253.1 1x morph110_1c1_half 5.74% 0.87% 13.2% 7.053 1.06x 267.3 1.06x 1 5.93% 0.68% 10.3% morph110_1c1 7.412 1.11x 279.9 1.11x 1c1_half 5.50% 1.11% 16.8%\nError Error Abs. Perf. Rel. Perf. Net Arch. (scratch) (morph) Improv. Improv. morph20_1c1 8.01% 7.35% 0.66% 8.2% morph20_1c1_2branch 7.90% 6.60% 1.30% 16.5% morph56_1c1 7.37% 5.37% 2.00% 27.1% morph110_1c1 8.16% 5.50% 2.66% 32.6%\nother ReLU activations. In the proposed experiments, for simplicity, all ReLUs are replaced with PReLUs.\nTable 3: Experimental results of networks morphed from ResNet-20, ResNet-56, and ResNet-11 on the CIFAR100 dataset.\nTable 4: Comparison results between learning from morphing and learning from scratch for the same network architectures on the CIFAR100 dataset."}, {"section_index": "9", "section_name": "4.1 NETWORK ARCHITECTURES OF MODULAR NETWORK MORPHISM", "section_text": "We first introduce the network architectures used in the proposed experiments. Fig.3a shows the. module template in the design of ResNet (He et al.2015), which is actually a simple morphable. two-way module. The first path consists of two convolutional layers, and the second path is a shortcut connection of identity mapping. The architecture of the ResNet module can be abstracted. as the graph in Fig. 4a. For the morphed networks, we first split the identity mapping layer in. the ResNet module into two layers with a scaling factor of O.5. Then each of the scaled identity. mapping layers is able to be further morphed into two convolutional layers. Fig.3b illustrates the. case with only one scaled identity mapping layer morphed into two convolutional layers, and its. equivalent graph abstraction is shown in Fig. 4b. To differentiate network architectures adopted in. this research, the notation morph_<k1>c<k2> is introduced, where k1 and k2 are kernel sizes in the morphed network. If both of scaled identity mapping branches are morphed, we append a. suffix of '_2branch'. Some examples of morphed modules are illustrated in Fig.4 We also use. the suffix '_ha1 f' to indicate that only one half (odd-indexed) of the modules are morphed, and the. other half are left as original ResNet modules..\nFigure 5: Comparison results of ResNet and morphed networks on the CIFAR10 and CIFAR100 datasets.\nIntermediate Abs. Perf. Rel. Perf. #Params. #Params. Net Arch. Error FLOP (million) Rel. FLOP Phases Improv. Improv. (MB) Rel. 
resnet20 32.82% 1.070 1x 40.8 1x morph20_1c1 31.70% 1.12% 3.4% 1.160 1.08x 44.0 - 1.08x resnet56 29.83% 3.311 1x 125.8 1x - morph56_1c1 1c1_half 27.52% 2.31% 7.7% 3.670 1.11x 138.3 1.10x resnet110 28.46% 6.672 1x 253.2 1x 1c1_half 26.81% 1.65% 5.8% 7.434 1.11x 279.9 1.11x morph110_1c1\nError Error Abs. Perf. Rel. Perf. Net Arch.. (scratch) (morph) Improv. Improv.. morph20_1c1 33.63% 31.70% 1.93% 5.7% morph56_1c1 32.58% 27.52% 5.06% 15.5% morph110_1c1 31.94% 26.81% 5.13% 16.1%\nresnet morphnet resnet morphnet 10 34 32.82 9 8.75 32 31.70 (%) 8 erre rrooe 6.97 errr rrote 30 29.83 7 6.60 6.61 28.46 6 5.37 5.50 28 27.52 5 26.81 26 4 3 24 20-layer 56-layer 110-layer 20-layer 56-layer 110-layer (a) CIFAR10 (b) CIFAR100"}, {"section_index": "10", "section_name": "4.2 EXPERIMENTAL RESULTS ON THE CIFAR1O DATASET", "section_text": "CIFAR10 (Krizhevsky & Hinton, 2009) is a benchmark dataset on image classification and neural network investigation. It consists of 3232 color images in 10 categories, with 50,000 training images and 10,o00 testing images. In the training process, we follow the same setup as in (He et al. 2015). We use a decay of 0.0001 and a momentum of 0.9. We adopt the simple data augmentation. with a pad of 4 pixels on each side of the original image. A 3232 view is randomly cropped from the padded image and a random horizontal flip is optionally applied..\nTable[1shows the results of different networks morphed from ResNet (He et al.2015). Notice that it is very challenging to further improve the performance, for ResNet has already boosted the number. to a very high level. E.g., ResNet (He et al.[2015) made only 0.36% performance improvement. by extending the model from 56 to 110 layers (Table[1). From Table[1we can see that, with only 1.2x or less computational cost, the morphed networks achieved 2.15%, 1.60%, 1.11% performance improvements over the original ResNet-20, ResNet-56, and ResNet-110 respectively. Notice thai. the relative performance improvement can be up to 25%. Table 1|also compares the number of. parameters of the original network architectures and the ones after morphing. As can be seen, the. morphed ones only have a little more parameters than the original ones, typically less than 1.2x\nSeveral different architectures of the morphed networks were also explored, as illustrated in Fig. and Table 1] First, when the kernel sizes were expanded from 1 1 to 3 3, the morphed net works (morph20_3c1 and morph20_3c3) achieved better performances. Similar results were reported in (Simonyan & Zisserman2014) (Table 1 for models C and D). However, because the morphed networks almost double the computational cost, we did not adopt this approach. Sec- ond, we also tried to morph the other scaled identity mapping layer into two convolutional layers (morph20_1c1_2branch), the error rate was further lowered for the 20-layered network. How- ever, for the 56-layered and 110-layered networks, this strategy did not yield better results.\nWe also found that the morphed network learned with multiple phases could achieve a lower error rate than that learned with single phase. For example, the networks morph20_3c1 and morph2 0_3c3 learned with intermediate phases achieved better results in Table[1] This is quite reasonable as it divides the optimization problem into sequential phases, and thus is possible to avoid being trapped into a local minimum to some extent. 
Inspired by this observation, we then used a 1c1_ha1f network as an intermediate phase for the morph56_1c1 and morph110_1c1 networks. and better results have been achieved.\nWe compared the proposed learning scheme against learning from scratch for the networks witl. the same architectures. These results are illustrated in Table[2 As can be seen, networks learnec. oy morphing is able to achieve up to 2.66% absolute performance improvement and 32.6% relative. performance improvement comparing against learning from scratch for the morph110_1c1 net. work architecture. These results are quite reasonable as when networks are learned by the proposec. morphing scheme, they have already been regularized and shall have lower probability to be trappec. into a bad-performing local minimum in the continual training process than the learning from scratcl. scheme. One may also notice that, morph110_1c1 actually performed worse than resnet110. when learned from scratch. This is because the network architecture morph_1c1 is proposed fo. morphing, and the identity shortcut connection is scaled with a factor of O.5. It was also reportec. that residual networks with a constant scaling factor of O.5 actually led to a worse performance ir. He et al.|2016) (Table 1), while this performance degradation problem could be avoided by the. proposed morphing scheme.\nFinally, it is worth noting that another advantage of the proposed learning scheme against the learn. ing from scratch scheme is on model exploration. One can quickly check whether a morphed archi. tecture deserves further exploration by continuing to train the morphed network in a finer learning rate (e.g. 1e-5), to see if the performance is improved. Hence, one does not have to wait for day. or even months of training time to tell whether the new network architecture is able to achieve a\nExcept for large error rate reduction achieved by the morphed network, one exciting indication from. Table|1|is that the morphed 20-layered network morph2 0_3c3 is able to achieve slightly lower error rate than the 110-layered ResNet (6.60% vs 6.61%), and its computational cost is actually less than 1/5 of the latter one. Similar results have also been observed from the morphed 56-layered. network. It is able to achieve a 5.37% error rate, which is even lower than those of ResNet-110 (6.61%) and ResNet-164 (5.46%) (He et al.]2016). These results are also illustrated in Fig.5[a)\nTable 5: Experimental results of networks morphed from ResNet-18 on the ImageNet dataset\nbetter performance. This could save human time for deciding which network architecture is wortl for exploring.\nCIFAR100 (Krizhevsky & Hinton2009) is another benchmark dataset for tiny images that consists of 100 categories. There are 500 training images and 100 testing images per category. The proposed experiments on CIFAR100 follows the same setup as in the experiments on CIFAR10. The experi- mental results are illustrated in Table [3|and Fig.5(b). As shown, the performance improvement is also significant: with only around 1.1x computational cost, the absolute performance improvement can be up to 2% and the relative performance improvement can be up to 8%. For the morphed 56-layered network, it also achieves better performance than the 110-layered ResNet (27.52% vs 28.46%), and with only around one half of the computation. Table4|also compares the proposed learning scheme against learning from scratch. 
More than 5% absolute performance improvement and around 16% relative performance improvement were achieved.\n4.4 EXPERIMENTAL RESULTS ON THE IMAGENET DATASET\nWe also evaluate the proposed scheme on the ImageNet dataset [Russakovsky et al.]2014). This dataset consists of 1.00o ob- ject categories, with 1.28 mil- lion training images and 50K validation images. For the train- ing process, we use a decay of 0.0001 and a momentum of 0.9 The image is resized to guar- antee its shorter edge is ran- domly sampled from [256,480] for scale augmentation. A 224 224 patch or its horizontal flip is randomly cropped from the re- sized image, with the image data pe batch size of 256. The learning rate epochs. The networks are trained fo\nThe comparison results of the morphed and original ResNets for both 18-layer and 34-layer networks are illustrated in Table 5|and Fig. 6 As shown in Table |5] morph18_1c1 and morph34_1c1 are able to achieve lower error rates than ResNet-18 and ResNet-34 respectively, and the absolute performance improvements can be up to 1.2%. We also draw the evaluation error curves in Fig 6] which shows that the morphed networks morph18_1c1 and morph34_1c1 are much more effective than the original ResNet-18 and ResNet-34 respectively..\nAbs. Perf. Rel. Perf. Net Arch.. Eval. Mode. Top-1 Error FLOP (billion) Rel. FLOP Improv. Improv. 1-view 32.56% - - resnet18 1.814 1x 10-view 30.86% 1-view 31.69% 0.87% 2.7% morph18_1c1 1.917 1.06x 10-view 29.90% 0.96% 3.1% 1-view 29.08% resnet34 3.664 1x 10-view 27.32% 1-view 27.90% 1.18% 4.1% morph34_1c1 3.972 1.08x 10-view 26.20% 1.12% 4.1%\n60 65 resnet18 60 resnet34 55 morph18_1c1 55 morph34_1c1 50 (%) nroe (%) rrnor 50 45 45 40 40 35 35 30 30 25 10 20 30 40 50 60 70 10 20 30 40 50 60 70 epoch epoch (a) 18-layer. (b) 34-layer."}, {"section_index": "11", "section_name": "CONCLUSIONS", "section_text": "This paper presented a systematic study on the problem of network morphism at a higher level, an. tried to answer the central question of such learning scheme, i.e., whether and how a convolutiona ayer can be morphed into an arbitrary module. To facilitate the study, we abstracted a modula. network as a graph, and formulated the process of network morphism as a graph transformatio orocess. Based on this formulation, both simple morphable modules and complex modules hav. been defined and corresponding morphing algorithms have been proposed. We have shown tha. a convolutional layer can be morphed into any module of a network. We have also carried ou. experiments to illustrate how to achieve a better performing model based on the state-of-the-ar ResNet with minimal extra computational cost on benchmark datasets. The experimental result. have demonstrated the effectiveness of the proposed morphing approach.."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.3.7\nAdriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.2\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied tc document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.2\nChristian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Du-. mitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 
Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
Tao Wei, Changhu Wang, Yong Rui, and Chang Wen Chen. Network morphism. International Conference on Machine Learning, 2016."}]
B1E7Pwqgl | [{"section_index": "0", "section_name": "COOPERATIVE TRAINING OF DESCRIPTOR AND GEN- ERATOR NETWORKS", "section_text": "Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu & Ying Nian Wu\nThis paper studies the cooperative training of two probabilistic models of signals. such as images. Both models are parametrized by convolutional neural networks. (ConvNets). The first network is a descriptor network, which is an exponential family model or an energy-based model, whose feature statistics or energy func-. tion are defined by a bottom-up ConvNet, which maps the observed signal to the. feature statistics. The second network is a generator network, which is a non-. linear version of factor analysis. It is defined by a top-down ConvNet, which. maps the latent factors to the observed signal. The maximum likelihood training. algorithms of both the descriptor net and the generator net are in the form of al-. ternating back-propagation, and both algorithms involve Langevin sampling. We. observe that the two training algorithms can cooperate with each other by jump. starting each other's Langevin sampling, and they can be seamlessly interwoven. into a CoopNets algorithm that can train both nets simultaneously.."}, {"section_index": "1", "section_name": ".1 TWO CONVNETS OF OPPOSITE DIRECTIONS", "section_text": "We begin with a story that the reader of this paper can readily relate to. A student writes up an initia. draft of a paper. His advisor then revises it. After that they submit the revised paper for review. The. student then learns from his advisor's revision, while the advisor learns from the outside review. In this story, the advisor guides the student, but the student does most of the work..\nThis paper is about two probabilistic models of signals such as images, and they play the role. of student and advisor as described above. Both models are parametrized by convolutional neura networks (ConvNets or CNNs) (LeCun et al.]1998f Krizhevsky et al.]2012). The two nets take two opposite directions. One is bottom-up, and the other is top-down, as illustrate by the following diagram:\nBottom-up ConvNet Top-down ConvNet features latent variables. signal signal (a) Descriptor Net (b) Generator Net.\nThe simultaneous training of such two nets was first studied by the recent work of|Kim & Bengio (2016). These two nets belong to two major classes of probabilistic models. (a) The exponential family models or the energy-based models (LeCun et al.[[2006) or the Markov random field models (Zhu et al.| 1997), where the probability distribution is defined by feature statistics or energy function computed from the signal by a bottom-up process. (b) The latent variable models or the directed graphical models, where the signal is assumed to be a transformation of the latent factors that follow a known prior distribution. The latent factors generate the signal by a top-down process. A classical example is factor analysis."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: (a) Algorithm D involves sampling from the current model by Langevin dynamics. (b) Algorithm G involves sampling from the posterior distribution of the latent factors by Langevin dynamics. (c) CoopNets algorithm. The part of the flowchart for training the descriptor is similar to Algorithm D, except that the D1 Langevin sampling is initialized from the initial synthesized examples supplied by the generator. 
The part of the flowchart for training the generator can also be mapped to Algorithm G, except that the revised synthesized examples play the role of the observed data, and the known generated latent factors can be used as inferred latent factors (or be used to initialize the G1 Langevin sampling of the latent factors).

The two ConvNets in diagram (1) can be used to parametrize these two classes of models respectively. Both classes of models can benefit from the high capacity of the multi-layer ConvNets. (a) In the exponential family models or the energy-based models, the feature statistics or the energy function can be defined by a bottom-up ConvNet that maps the signal to the features and the energy function (Ngiam et al., 2011; Xie et al., 2016). We call the resulting model a descriptive network or a descriptor net following Zhu (2003), because it is built on descriptive feature statistics. (b) In the latent variable models or the directed graphical models, the transformation from the latent factors to the signal can be defined by a top-down ConvNet (Dosovitskiy et al., 2015), which maps the latent factors to the signal. We call the resulting model a generative network or generator net following Goodfellow et al. (2014), who proposed such a model in their work on the generative adversarial networks (GAN).

Fig. 1(a) and (b) display the flowcharts of the maximum likelihood learning algorithms for training the descriptor and generator nets. We call the two algorithms Algorithm D and Algorithm G respectively. Algorithm D (Xie et al., 2016) iterates two steps: Step D1 synthesizes examples by sampling from the current model by Langevin dynamics. Step D2 updates the parameters to shift the density from the synthesized examples towards the observed examples. Algorithm G (Han et al., 2017) also iterates two steps. Step G1 infers latent factors for each observed example by sampling from their posterior distribution by Langevin dynamics. Step G2 updates the parameters by a non-linear regression of the observed examples on their corresponding latent factors. We use Langevin dynamics for Markov chain Monte Carlo (MCMC) sampling because the gradient term of Langevin dynamics can be readily computed via back-propagation. Thus all the steps D1, D2 and G1, G2 are powered by back-propagation, and both Algorithms D and G are alternating back-propagation algorithms.

In this article, we propose to couple Algorithms D and G into a cooperative training algorithm that interweaves the steps of the two algorithms seamlessly. We call the resulting algorithm the CoopNets algorithm, and we show that it can train both nets simultaneously.

Figure 1(c) displays the flowchart of the CoopNets algorithm. The generator is like the student. It generates the initial draft of the synthesized examples. The descriptor is like the advisor. It revises the initial draft by initializing its Langevin dynamics from the initial draft in Step D1, which produces the revised draft of the synthesized examples. The descriptor learns from the outside review in Step D2, which is in the form of the difference between the observed examples and the revised synthesized examples.
The generator learns from how the descriptor revises the initial draft by reconstructing the revised draft in Step G2. For each synthesized example, the generator knows the latent factors that generated the initial draft, so that Step G1 can infer the latent factors by initializing its Langevin dynamics from their known values.

In the CoopNets algorithm, the generator fuels the MCMC of the descriptor by supplying initial synthesized examples, which can be obtained by direct ancestral sampling. The generator then learns from the revised synthesized examples with virtually known latent factors. The cooperation is thus beneficial to both nets.

Our work is inspired by the generative adversarial networks (GAN) (Goodfellow et al., 2014; Denton et al., 2015; Radford et al., 2015). In GAN, the generator net is paired with a discriminator net. The two nets play adversarial roles. In our work, the generator net and the descriptor net play cooperative roles, and they feed each other the initial, revised and reconstructed synthesized data. The learning of both nets is based on maximum likelihood, and the learning process is quite stable because of the cooperative nature and the consistent directions of the two maximum likelihood training algorithms.

The connection between the descriptor net and the discriminator net has been explored by Xie et al. (2016), where the descriptor can be derived from the discriminator.

Our work is most similar to the recent work of Kim & Bengio (2016). In fact, the settings of the two nets are the same. In their work, the generator learns from the descriptor by minimizing the Kullback-Leibler divergence from the generator to the descriptor, which can be decomposed into an energy term and an entropy term. In our work, the two nets interact with each other via synthesized data, and the generator learns from the descriptor by reconstructing the revised draft of synthesized examples. Our method does not need to approximate the intractable entropy term.

Our work is related to the contrastive divergence algorithm (Hinton, 2002) for training the descriptor net. The contrastive divergence initializes the MCMC sampling from the observed examples. The CoopNets algorithm initializes the MCMC sampling from the examples supplied by the generator.

We first review the descriptor net. Let Y be the D-dimensional signal, such as an image. The descriptor model is in the form of exponential tilting of a reference distribution (Xie et al., 2016):

P_D(Y; W_D) = \frac{1}{Z(W_D)} \exp[f(Y; W_D)] \, q(Y),    (2)

where q(Y) is the reference distribution such as Gaussian white noise, q(Y) \propto \exp(-\|Y\|^2 / (2s^2)), and f(Y; W_D) (f stands for features) is the feature statistics or energy function, defined by a ConvNet whose parameters are denoted by W_D. This ConvNet is bottom-up because it maps the signal Y to a number. See the diagram in (1). Z(W_D) = \int \exp[f(Y; W_D)] q(Y) dY = E_q\{\exp[f(Y; W_D)]\} is the normalizing constant, where E_q is the expectation with respect to q.

Suppose we observe training examples \{Y_i, i = 1, ..., n\} from an unknown data distribution P_data(Y). The maximum likelihood training seeks to maximize the log-likelihood function L_D(W_D) = \frac{1}{n} \sum_{i=1}^{n} \log P_D(Y_i; W_D). For large n, the maximum likelihood estimator minimizes KL(P_data \| P_D), the Kullback-Leibler divergence from the data distribution P_data to the model distribution P_D. The gradient of L_D(W_D) is

\frac{\partial}{\partial W_D} L_D(W_D) = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial W_D} f(Y_i; W_D) - E_{W_D}\left[\frac{\partial}{\partial W_D} f(Y; W_D)\right],    (3)

where E_{W_D} denotes the expectation with respect to P_D(Y; W_D). The expectation is analytically intractable and has to be approximated by MCMC, such as Langevin dynamics, which iterates

Y_{\tau+1} = Y_\tau - \frac{\delta^2}{2}\left[\frac{Y_\tau}{s^2} - \frac{\partial}{\partial Y} f(Y_\tau; W_D)\right] + \delta U_\tau,    (4)

where \tau indexes the time steps, \delta is the step size, and U_\tau \sim N(0, I_D) is Gaussian white noise. With synthesized examples \{\tilde{Y}_i, i = 1, ..., \tilde{n}\} sampled from P_D(Y; W_D), the Monte Carlo approximation to L'_D(W_D) is

\frac{\partial}{\partial W_D} L_D(W_D) \approx \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial W_D} f(Y_i; W_D) - \frac{1}{\tilde{n}} \sum_{i=1}^{\tilde{n}} \frac{\partial}{\partial W_D} f(\tilde{Y}_i; W_D).    (5)
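The two descriptor steps above translate directly into a few lines of code. The following is a minimal sketch in PyTorch (not the paper's MatConvNet implementation), assuming f is a module mapping a batch of images to per-example values of f(Y; W_D); the helper names langevin_revision and descriptor_update are illustrative assumptions.

import torch

def langevin_revision(f, Y, n_steps=10, delta=0.002, s=1.0):
    # Step D1: revise Y by Langevin dynamics on the descriptor density, eq. (4).
    Y = Y.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(f(Y).sum(), Y)[0]
        Y = Y + 0.5 * delta**2 * (grad - Y / s**2) + delta * torch.randn_like(Y)
        Y = Y.detach().requires_grad_(True)
    return Y.detach()

def descriptor_update(f, optimizer, Y_obs, Y_syn):
    # Step D2: gradient ascent on the log-likelihood via eq. (5); minimizing
    # this loss raises f on observed data and lowers it on synthesized data.
    optimizer.zero_grad()
    loss = f(Y_syn).mean() - f(Y_obs).mean()
    loss.backward()
    optimizer.step()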
We now turn to the generator net, which is a latent variable model with a top-down ConvNet g(X; W_G) mapping the d-dimensional latent factors X to the D-dimensional signal Y:

X \sim N(0, I_d), \quad Y = g(X; W_G) + \epsilon, \quad \epsilon \sim N(0, \sigma^2 I_D).    (6)

The log of the joint density is

\log P_G(X, Y; W_G) = -\frac{1}{2\sigma^2} \|Y - g(X; W_G)\|^2 - \frac{1}{2} \|X\|^2 + \text{constant},    (7)

where the constant term is independent of X, Y and W_G. The marginal density is obtained by integrating out the latent factors X, i.e., P_G(Y; W_G) = \int P_G(X, Y; W_G) dX. The inference of X given Y is based on the posterior density P_G(X|Y; W_G) = P_G(X, Y; W_G)/P_G(Y; W_G) \propto P_G(X, Y; W_G) as a function of X.

For the training data \{Y_i, i = 1, ..., n\}, the generator net can be trained by maximizing the log-likelihood L_G(W_G) = \frac{1}{n} \sum_{i=1}^{n} \log P_G(Y_i; W_G), which for large n amounts to minimizing the Kullback-Leibler divergence KL(P_data \| P_G) from the data distribution P_data to the model distribution P_G. The gradient of L_G(W_G) is obtained according to the following identity:

\frac{\partial}{\partial W_G} \log P_G(Y; W_G) = \frac{1}{P_G(Y; W_G)} \frac{\partial}{\partial W_G} \int P_G(Y, X; W_G) dX
= \int \left[\frac{\partial}{\partial W_G} \log P_G(Y, X; W_G)\right] \frac{P_G(Y, X; W_G)}{P_G(Y; W_G)} dX
= E_{P_G(X|Y; W_G)}\left[\frac{\partial}{\partial W_G} \log P_G(X, Y; W_G)\right],    (8)

which underlies the EM algorithm. In general, the expectation in (8) is analytically intractable, and has to be approximated by MCMC that samples from the posterior P_G(X|Y; W_G), such as Langevin dynamics, which iterates

X_{\tau+1} = X_\tau + \frac{\delta^2}{2} \frac{\partial}{\partial X} \log P_G(X_\tau, Y; W_G) + \delta U_\tau,    (9)

where U_\tau \sim N(0, I_d). With X_i sampled from P_G(X_i | Y_i, W_G) for each observation Y_i, the Monte Carlo approximation to L'_G(W_G) is

L'_G(W_G) \approx \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial W_G} \log P_G(X_i, Y_i; W_G).    (10)

Algorithm G (Han et al., 2017) iterates the following two steps after initializing W_G and \{X_i, i = 1, ..., n\}. Step G1: run l_G steps of Langevin dynamics from the current \{X_i\} according to (9). Step G2: update W_G by a gradient ascent step based on (10). The convergence of this stochastic approximation algorithm follows Younes (1999).
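A minimal sketch of Step G1, again in PyTorch rather than the paper's MatConvNet, assuming g maps latent vectors to images; the step size delta, the noise level sigma and the helper name infer_latents are illustrative assumptions.

import torch

def infer_latents(g, Y, X_init, n_steps=10, delta=0.002, sigma=0.3):
    # Step G1: sample from P(X | Y) by Langevin dynamics on the
    # log joint density of eq. (7), following eq. (9).
    X = X_init.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        log_joint = (-((Y - g(X)) ** 2).sum() / (2 * sigma ** 2)
                     - (X ** 2).sum() / 2)
        grad = torch.autograd.grad(log_joint, X)[0]
        X = X + 0.5 * delta**2 * grad + delta * torch.randn_like(X)
        X = X.detach().requires_grad_(True)
    return X.detach()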
In Algorithms D and G, both steps D1 and G1 are Langevin dynamics, which may be slow to converge. An interesting observation is that the two algorithms can cooperate with each other by jumpstarting each other's Langevin sampling.

Specifically, in Step D1, we can initialize the synthesized examples by generating examples from the generator net. We first generate \hat{X}_i \sim N(0, I_d), and then generate \hat{Y}_i = g(\hat{X}_i; W_G) + \epsilon_i, for i = 1, ..., \tilde{n}. If the current generator P_G is close to the current descriptor P_D, then the generated \{\hat{Y}_i\} should be a good initialization for sampling from the descriptor net, i.e., starting from the \{\hat{Y}_i, i = 1, ..., \tilde{n}\}, we run Langevin dynamics in Step D1 for l_D steps to get \{\tilde{Y}_i, i = 1, ..., \tilde{n}\}, which are revised versions of \{\hat{Y}_i\}. These \{\tilde{Y}_i\} can be used as the synthesized examples from the descriptor net. We can then update W_D according to Step D2 of Algorithm D.

In order to update W_G of the generator net, we treat the \{\tilde{Y}_i, i = 1, ..., \tilde{n}\} produced by the above Step D1 as the training data for the generator. Since these \{\tilde{Y}_i\} are obtained by the Langevin dynamics initialized from the \{\hat{Y}_i, i = 1, ..., \tilde{n}\} produced by the generator net with known latent factors \{\hat{X}_i, i = 1, ..., \tilde{n}\}, we can update W_G by learning from \{(\tilde{Y}_i, \hat{X}_i), i = 1, ..., \tilde{n}\}, which is a supervised learning problem, or more specifically, a non-linear regression of \tilde{Y}_i on \hat{X}_i. At W_G^{(t)}, the latent factors \hat{X}_i generate and thus reconstruct the initial example \hat{Y}_i. After updating W_G, we want \hat{X}_i to reconstruct the revised example \tilde{Y}_i. That is, we revise W_G to absorb the revision from \hat{Y}_i to \tilde{Y}_i, so that the generator shifts its density from \{\hat{Y}_i\} to \{\tilde{Y}_i\}. The reconstruction error can tell us whether the generator has caught up with the descriptor by fully absorbing the revision.

The left diagram in (11) illustrates the basic idea:

    \hat{X}_i ==> \hat{Y}_i --> \tilde{Y}_i        \hat{X}_i --> \tilde{X}_i ==> \tilde{Y}_i,  \hat{Y}_i --> \tilde{Y}_i    (11)

In the two diagrams in (11), the double-line arrows indicate generation and reconstruction by the generator net, while the dashed-line arrows indicate Langevin dynamics for revision and inference in the two nets. The diagram on the right in (11) illustrates a more rigorous method, where we initialize the Langevin inference of \{\tilde{X}_i, i = 1, ..., \tilde{n}\} in Step G1 from \{\hat{X}_i\}, and then update W_G in Step G2 based on \{(\tilde{Y}_i, \tilde{X}_i), i = 1, ..., \tilde{n}\}. The diagram on the right shows how the two nets jumpstart each other's Langevin dynamics.

Algorithm 1 describes the cooperative training that interweaves Algorithm D and Algorithm G. See Figure 1(c) for the flowchart of the CoopNets algorithm. In our experiments, we set l_G = 0 and infer \tilde{X}_i = \hat{X}_i for simplicity, i.e., we follow the left diagram in (11).

Algorithm 1: CoopNets algorithm.
Input: (1) training examples \{Y_i, i = 1, ..., n\}; (2) numbers of Langevin steps l_D and l_G; (3) number of learning iterations T.
Output: (1) estimated parameters W_D and W_G; (2) synthesized examples \{\hat{Y}_i, \tilde{Y}_i, i = 1, ..., \tilde{n}\}.
1: Let t <- 0, initialize W_D and W_G.
2: repeat
3: Step G0: For i = 1, ..., \tilde{n}, generate \hat{X}_i \sim N(0, I_d), and generate \hat{Y}_i = g(\hat{X}_i; W_G^{(t)}) + \epsilon_i.
4: Step D1: For i = 1, ..., \tilde{n}, starting from \hat{Y}_i, run l_D steps of Langevin dynamics to obtain \tilde{Y}_i, each step following equation (4).
5: Step G1: Treat the current \{\tilde{Y}_i, i = 1, ..., \tilde{n}\} as the training data; for each i, infer \tilde{X}_i = \hat{X}_i, or more rigorously, starting from \tilde{X}_i = \hat{X}_i, run l_G steps of Langevin dynamics to update \tilde{X}_i, each step following equation (9).
6: Step D2: Update W_D by a step of gradient ascent according to (5).
7: Step G2: Update W_G by a step of gradient ascent according to (10), except that Y_i is replaced by \tilde{Y}_i and n by \tilde{n}.
8: Let t <- t + 1.
9: until t = T

See Appendix for a theoretical understanding of the convergence of the CoopNets algorithm.
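Putting the pieces together, one iteration of Algorithm 1 might look like the following sketch, reusing the hypothetical helpers langevin_revision, infer_latents and descriptor_update from the sketches above; opt_D and opt_G are assumed to be standard PyTorch optimizers over the parameters of f and g.

import torch

def coopnets_iteration(f, g, opt_D, opt_G, Y_obs, X_dim, n_tilde,
                       l_D=10, l_G=0, sigma=0.3):
    # Step G0: ancestral sampling from the current generator.
    X_hat = torch.randn(n_tilde, X_dim)
    with torch.no_grad():
        Y0 = g(X_hat)
        Y_hat = Y0 + sigma * torch.randn_like(Y0)
    # Step D1: Langevin revision by the descriptor.
    Y_tilde = langevin_revision(f, Y_hat, n_steps=l_D)
    # Step G1: infer latents (with l_G = 0, keep the known X_hat).
    X_tilde = X_hat if l_G == 0 else infer_latents(g, Y_tilde, X_hat, n_steps=l_G)
    # Step D2: update the descriptor.
    descriptor_update(f, opt_D, Y_obs, Y_tilde)
    # Step G2: non-linear regression of Y_tilde on X_tilde.
    opt_G.zero_grad()
    loss_G = ((Y_tilde - g(X_tilde)) ** 2).sum() / (2 * sigma ** 2 * n_tilde)
    loss_G.backward()
    opt_G.step()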
"}, {"section_index": "3", "section_name": "5 EXPERIMENTS", "section_text": "We use the MatConvNet of Vedaldi & Lenc (2015) for coding. For the descriptor net, we adopt the structure of Xie et al. (2016), where the bottom-up network consists of multiple layers of convolution by linear filtering, ReLU non-linearity, and down-sampling. We adopt the structure of the generator network of Radford et al. (2015) and Dosovitskiy et al. (2015), where the top-down network consists of multiple layers of deconvolution by linear superposition, ReLU non-linearity, and up-sampling, with tanh non-linearity at the bottom layer (Radford et al., 2015) to make the signals fall within [-1, 1].

We conduct an experiment on learning from complete training images of human faces, and then testing the learned model on completing occluded testing images. The structure of the generator network is the same as in Radford et al. (2015) and Dosovitskiy et al. (2015). We adopt a 4-layer descriptor net. The first layer has 96 5x5 filters with sub-sampling of 2, the second layer has 128 5x5 filters with sub-sampling of 2, the third layer has 256 5x5 filters with sub-sampling of 2, and the final layer is a fully connected layer with 50 channels as output. We use l_D = 10 steps of Langevin revision dynamics within each learning iteration, and the Langevin step size is set at 0.002. The learning rate is 0.07. The training data are 10,000 human faces randomly selected from the CelebA dataset (Liu et al., 2015). We run 600 cooperative learning iterations. Figure 2 displays 144 synthesized human faces by the descriptor net.

Figure 2: Generating human face pattern. The synthesized images are generated by the CoopNets algorithm that learns from 10,000 images.

To quantitatively test whether we have learned a good generator net g(X; W_G), even though it has never seen the training images directly in the training stage, we apply it to the task of recovering the occluded pixels of testing images. For each occluded testing image Y, we use Step G1 of Algorithm G to infer the latent factors X. The only change is with respect to the term \|Y - g(X; W_G)\|^2, where the sum of squares is over all the observed pixels of Y in the back-propagation computation. We run 1,000 Langevin steps, initializing X from N(0, I_d). After inferring X, the completed image g(X; W_G) is automatically obtained. We design 3 experiments, where we randomly place a 20x20, 30x30, or 40x40 mask on each 64x64 testing image. These 3 experiments are denoted by M20, M30, and M40 respectively (M for mask). We report the recovery errors and compare our method with 8 different image inpainting methods as well as the DCGAN of Radford et al. (2015).

For DCGAN, we use the parameter setting in Radford et al. (2015), except changing the number of learning iterations to 600. We use the same 10,000 training images to learn DCGAN. After the model is learned, we keep the generator and use the same method as ours to infer the latent factors X and recover the unobserved pixels. Among the 8 inpainting methods, Methods 1 and 2 are based on a Markov random field prior where the nearest-neighbor potential terms are l2 and l1 differences respectively. Methods 3 to 8 are interpolation methods. Please refer to D'Errico (2004) for more details. Table 1 displays the recovery errors of the 3 experiments, where the error is measured by per-pixel difference (relative to the range of pixel values) between the original image and the recovered image on the occluded region, averaged over 100 testing images. Fig. 3 displays some recovery results by our method. The first row shows the original images as the ground truth. The second row displays the testing images with occluded pixels. The third row displays the recovered images by the generator net trained by the CoopNets algorithm on the 10,000 training images.

Figure 3: Row 1: ground-truth images. Row 2: testing images with occluded pixels. Row 3: recovered images by our method.

Table 1: Comparison of recovery errors among different inpainting methods in 3 experiments.

Exp | Ours  | GAN   | 1     | 2     | 3     | 4     | 5     | 6     | 7     | 8
M20 | .0966 | .2535 | .1545 | .1506 | .1277 | .1123 | .2493 | .1123 | .1126 | .1277
M30 | .1112 | .2606 | .1820 | .1792 | .1679 | .1321 | .3367 | .1310 | .1312 | .1679
M40 | .1184 | .2618 | .2055 | .2032 | .1894 | .1544 | .3809 | .1525 | .1526 | .1894
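A minimal sketch of the masked inference used for this recovery task, under the same PyTorch assumptions as the sketches above; mask is a 0/1 tensor that is 1 on observed pixels, and all names are illustrative.

import torch

def recover_occluded(g, Y, mask, X_dim, n_steps=1000, delta=0.002, sigma=0.3):
    # Step G1 with the squared error restricted to observed pixels,
    # then read off the completed image g(X).
    X = torch.randn(X_dim, requires_grad=True)
    for _ in range(n_steps):
        resid = (Y - g(X)) * mask
        log_joint = -(resid ** 2).sum() / (2 * sigma ** 2) - (X ** 2).sum() / 2
        grad = torch.autograd.grad(log_joint, X)[0]
        X = X + 0.5 * delta**2 * grad + delta * torch.randn_like(X)
        X = X.detach().requires_grad_(True)
    return g(X).detach()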
We conduct an experiment on synthesizing images of categories from the ImageNet ILSVRC2012 dataset (Deng et al., 2009) and the MIT Places205 dataset (Zhou et al., 2014). We adopt a 4-layer descriptor net. The first layer has 64 5x5 filters with sub-sampling of 2, the second layer has 128 3x3 filters with sub-sampling of 2, the third layer has 256 3x3 filters with sub-sampling of 1, and the final layer is a fully connected layer with 100 channels as output. We set the number of Langevin dynamics steps in each learning iteration to 10 and the step size to 0.002. The learning rate is 0.07. For each category, we randomly choose 1,000 images as training data and resize the images to 64x64. We run 1,000 cooperative learning iterations to train the model. Figures 4 and 5 display the results for two categories, where for each category, we show 144 original images sampled from the training set, and 144 synthesized images generated by our method. The appendix contains more synthesis results.

As a comparison, we apply Algorithm G alone and the DCGAN code to the same 1,000 hotel room training images to learn a generator of the same structure as in CoopNets. Figure 6 displays the synthesis results.

We also try to synthesize images at high resolution (224x224). We adopt a 4-layer descriptor net. The first layer has 128 15x15 filters with sub-sampling of 3, the second layer has 256 3x3 filters with sub-sampling of 2, the third layer has 512 3x3 filters with sub-sampling of 1, and the final layer is a fully connected layer with 100 channels as output. We enlarge the filters of the final layer of the generator net to 14x14 to generate 224x224 images. The learning rate is 0.05. We run 1,000 cooperative learning iterations to train the model. Figures 7 and 8 show the synthesized images of two categories from the MIT Places205 dataset.

Figure 4: Generating forest road images ((a) original images; (b) synthesized images). The category is from the MIT Places205 dataset.

Figure 5: Generating hotel room images ((a) original images; (b) synthesized images). The category is from the MIT Places205 dataset.

Figure 6: Generating hotel room images by (a) Algorithm G alone and (b) the DCGAN code.

Figure 7: Generating forest road images at high resolution (224x224).

Figure 8: Generating hotel room images at high resolution (224x224).

The most unique feature of our work is that the two networks feed each other the synthesized data in the learning process, including initial, revised, and reconstructed synthesized data.

Another unique feature of our work is that the learning process interweaves the existing maximum likelihood learning algorithms for the two networks.

A third unique feature of our work is that the MCMC for the descriptor keeps rejuvenating the chains by refreshing the samples with independent replacements supplied by the generator, so that a single chain effectively amounts to an infinite number of chains or the evolution of the whole marginal distribution modeled by the generator.

In the CoopNets algorithm, the descriptor learns from the observed examples, while the generator learns from the descriptor through the synthesized examples. Therefore, the descriptor is the driving force in terms of learning, although the generator is the driving force in terms of synthesis. In order to understand the convergence of learning, we can start from Algorithm D for learning the descriptor."}, {"section_index": "4", "section_name": "7.1 GENERATOR OF INFINITE CAPACITY", "section_text": "Algorithm D is a stochastic approximation algorithm (Robbins & Monro, 1951), except that the samples are generated by finite-step MCMC transitions. According to Younes (1999), Algorithm D converges to the maximum likelihood estimate under suitable regularity conditions on the mixing of the transition kernel of the MCMC and the schedule of the learning rate \gamma_t, even if the number of Langevin steps l_D is finite or small (e.g., l_D = 1), and even if the number of parallel chains \tilde{n} is finite or small (e.g., \tilde{n} = 1). The reason is that the random fluctuations caused by the finite number of chains, \tilde{n}, and the limited mixing caused by the finite steps of MCMC, l_D, are mitigated if the learning rate \gamma_t is sufficiently small.
At learning iteration t, let W_D^{(t)} be the estimated parameter of the descriptor. According to Younes (1999), W_D^{(t)} converges to \hat{W}_D, the maximum likelihood estimate of W_D.

The efficiency of Algorithm D increases if the number of parallel chains \tilde{n} is large, because it leads to a more accurate estimation of the expectation in the gradient L'_D(W_D) of equation (3), so that we can afford to use a larger learning rate \gamma_t for faster convergence.

Now let us come back to the CoopNets algorithm. In order to understand how the descriptor net helps the training of the generator net, let us consider the idealized scenario where the number of parallel chains \tilde{n} \to \infty, and the generator has infinite capacity, and in each iteration it estimates W_G by maximum likelihood from the revised synthesized examples; in this idealization, the learned generator P_G(Y; W_G) will reproduce P_D(Y; \hat{W}_D). Thus the cooperative training helps the learning of the generator. Note that the learned generator P_G(Y; W_G) will not reproduce the distribution of the observed data P_data, unless the descriptor is of infinite capacity too.

Conversely, the generator net also helps the learning of the descriptor net in the CoopNets algorithm. In Algorithm D, it is impractical to make the number of parallel chains \tilde{n} too large. On the other hand, it would be difficult for a small number of chains \{\tilde{Y}_i, i = 1, ..., \tilde{n}\} to explore the state space. In the CoopNets algorithm, each iteration draws a new batch of independent samples \{\hat{Y}_i\} from P_G(Y; W_G^{(t)}) and revises \{\hat{Y}_i\} to \{\tilde{Y}_i\} by Langevin dynamics, instead of running Langevin dynamics from the same old batch of \{\tilde{Y}_i\} as in the original Algorithm D. This is like implementing an infinite number of parallel chains, because each iteration evolves a fresh batch of examples, as if each iteration evolves a new set of chains. By updating the generator W_G, it is like we are updating the infinite number of parallel chains, because W_G memorizes the whole distribution. Even if \tilde{n} in the CoopNets algorithm is small, e.g., \tilde{n} = 1, viewed from the perspective of Algorithm D, it is as if \tilde{n} \to \infty.
Thus the above idealization \tilde{n} \to \infty is sound."}, {"section_index": "5", "section_name": "7.2 GENERATOR OF FINITE CAPACITY", "section_text": "From an information geometry point of view, let D = \{P_D(Y; W_D), \forall W_D\} be the manifold of the descriptor models, where each distribution P_D(Y; W_D) is a point on this manifold. Then the maximum likelihood estimate of W_D is a projection of the data distribution P_data onto the manifold D. Let G = \{P_G(Y; W_G), \forall W_G\} be the manifold of the generator models, where each distribution P_G(Y; W_G) is a point on this manifold. Then the maximum likelihood estimate of W_G is a projection of the data distribution P_data onto the manifold G.

From now on, for notational simplicity and with a slight abuse of notation, we use W_D to denote the descriptor distribution P_D(Y; W_D), and use W_G to denote the generator distribution P_G(Y; W_G).

We assume both the observed data size n and the synthesized data size \tilde{n} are large enough so that we shall work on distributions or populations instead of finite samples. As explained above, assuming \tilde{n} \to \infty is sound because the generator net can supply an unlimited number of examples.

The Langevin revision dynamics runs a Markov chain from W_G^{(t)} towards W_D^{(t)}. Let L_{W_D} be the Markov transition kernel of l_D steps of Langevin revisions towards W_D. The distribution of the revised synthesized data is

P^{(t+1)} = L_{W_D^{(t)}} \cdot W_G^{(t)},    (12)

where the notation L \cdot P denotes the marginal distribution obtained by running the Markov transition L from P. We treat this distribution as the data distribution to train the generator, i.e., we project this distribution onto the manifold G = \{P_G(Y; W_G), \forall W_G\} = \{W_G\} (recall we use W_G to denote the distribution P_G(Y; W_G)) in the projection step

W_G^{(t+1)} = \arg\min_{W_G \in G} KL(L_{W_D^{(t)}} \cdot W_G^{(t)} \| W_G).    (13)

The learning process alternates between the Markov transition in (12) and the projection in (13), as illustrated by Figure 9.

Figure 9: The learning of the generator alternates between Markov transition and projection. The family of the generator models G is illustrated by the black curve. Each distribution is illustrated by a point.

In the case of l_D \to \infty,

W_D^{(t)} \to \hat{W}_D = \arg\min_D KL(P_data \| W_D),    (14)
W_G^{(t)} \to \hat{W}_G = \arg\min_G KL(\hat{W}_D \| W_G).    (15)

That is, we first project P_data onto D, and from there continue to project onto G. Therefore, \hat{W}_D converges to the maximum likelihood estimate with P_data being the data distribution, while \hat{W}_G converges to the maximum likelihood estimate with \hat{W}_D serving as the data distribution.

For finite l_D, the algorithm may converge to the following fixed points. The fixed point for the generator satisfies

\hat{W}_G = \arg\min_G KL(L_{\hat{W}_D} \cdot \hat{W}_G \| W_G).    (16)

The fixed point for the descriptor satisfies

\hat{W}_D = \arg\min_D [KL(P_data \| W_D) - KL(L_{\hat{W}_D} \cdot \hat{W}_G \| W_D)],    (17)

which is similar to contrastive divergence (Hinton, 2002), except that \hat{W}_G takes the place of P_data in the second Kullback-Leibler divergence. Because \hat{W}_G is supposed to be close to \hat{W}_D, the second Kullback-Leibler divergence is supposed to be small, hence our algorithm is closer to maximum likelihood learning than contrastive divergence.

Kim & Bengio (2016) learned the generator by gradient descent on KL(W_G \| W_D^{(t)}) over G. The objective function is KL(W_G \| W_D^{(t)}) = E_{W_G}[\log P_G(Y; W_G)] - E_{W_G}[\log P_D(Y; W_D^{(t)})], where the first term is the negative entropy that is intractable, and the second term is the expected energy that is tractable. Our learning method for the generator is consistent with this learning objective, because

KL(L_{W_D^{(t)}} \cdot W_G^{(t)} \| W_D^{(t)}) \le KL(W_G^{(t)} \| W_D^{(t)}),    (18)

and KL(L^l_{W_D} \cdot W_G \| W_D) \to 0 monotonically as l \to \infty due to the second law of thermodynamics for Markov chains. The reduction of the Kullback-Leibler divergence in (18) and the projection in (13) in our method parallel the gradient descent of Kim & Bengio (2016). But the Monte Carlo implementation of L in our work avoids the need to approximate the intractable entropy term.

We display more synthesis results at the resolution of 64x64. In each of Figures 10 to 21, panel (a) shows original images and panel (b) shows synthesized images.

Figure 10: Generating swimming pool images. The category is from the MIT Places205 dataset.

Figure 11: Generating volcano images. The category is from the MIT Places205 dataset.

Figure 12: Generating rock images. The category is from the MIT Places205 dataset.

Figure 13: Generating desert images. The category is from the MIT Places205 dataset.

Figure 14: Generating schoolbus images. The category is from the ImageNet ILSVRC2012 1000 object categories.

Figure 15: Generating lifeboat images. The category is from the ImageNet ILSVRC2012 1000 object categories.

Figure 16: Generating zebra images. The category is from the ImageNet ILSVRC2012 1000 object categories.

Figure 17: Generating strawberry images. The category is from the ImageNet ILSVRC2012 1000 object categories.

Figure 18: Generating lemon images. The category is from the ImageNet ILSVRC2012 1000 object categories.

Figure 19: Generating apartment building images. The category is from the ImageNet ILSVRC2012 1000 object categories.

Figure 20: Generating dining table images.
The category is from the ImageNet ILSVRC2012 1000 object categories.

Figure 21: Generating balloon images. The category is from the ImageNet ILSVRC2012 1000 object categories."}, {"section_index": "6", "section_name": "ACKNOWLEDGEMENT", "section_text": "We thank Hansheng Jiang for her work on this project as a summer visiting student. We thank Tian Han for sharing the code on learning the generator network, and for helpful discussions."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. Alternating back-propagation for generator network. In 31st AAAI Conference on Artificial Intelligence, 2017.

Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato, and Fu Jie Huang. A tutorial on energy-based learning. In Predicting Structured Data. MIT Press, 2006.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.

Jiquan Ngiam, Zhenghao Chen, Pang Wei Koh, and Andrew Y. Ng. Learning deep energy models. In International Conference on Machine Learning, 2011.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pp. 1278-1286, 2014.

Yee Whye Teh, Max Welling, Simon Osindero, and Geoffrey E. Hinton. Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4(Dec):1235-1260, 2003.

Song-Chun Zhu, Ying Nian Wu, and David Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627-1660, 1997."}]
BkV4VS9ll | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "In this work we propose and evaluate a novel algorithm for pruning whole neurons from a trained neural network without any re-training, and examine its performance compared to two simpler methods. We then analyze the kinds of errors made by our algorithm and use this as a stepping-off point to launch an investigation into the fundamental nature of learning representations in neural networks. Our results corroborate an insightful though largely forgotten observation by Mozer & Smolensky (1989a) concerning the nature of neural network learning. This observation is best summarized in a quotation from Segee & Carter (1991) on the notion of fault-tolerance in multilayer perceptron networks:

    Contrary to the belief widely held, multilayer networks are not inherently fault tolerant. In fact, the loss of a single weight is frequently sufficient to completely disrupt a learned function approximation. Furthermore, having a large number of weights does not seem to improve fault tolerance. [Emphasis added]

Essentially, Mozer & Smolensky (1989b) observed that during training neural networks do not distribute the learning representation evenly or equitably across hidden units. What actually happens is that a few elite neurons learn an approximation of the input-output function, and the remaining units must learn a complex interdependence function which cancels out their respective influence on the network output. Furthermore, assuming enough units exist to learn the function in question, increasing the number of parameters does not increase the richness or robustness of the learned approximation, but rather simply increases the likelihood of overfitting and the number of noisy parameters to be canceled during training. This is evinced by the fact that in many cases, multiple neurons can be removed from a network with no re-training and with negligible impact on the quality of the output approximation. In other words, there are few bipartisan units in a trained network. A unit is typically either part of the (possibly overfit) input-output function approximation, or it is part of an elaborate noise cancellation task force. Assuming this is the case, most of the compute-time spent training a neural network is likely occupied by this arguably wasteful procedure of silencing superfluous parameters, and pruning can be viewed as a necessary procedure to 'trim the fat.'

We observed copious evidence of this phenomenon in our experiments, and this is the motivation behind our decision to evaluate the pruning algorithms in this study on the simple criterion of their ability to trim neurons without any re-training. If we were to employ re-training as part of our evaluation criteria, we would arguably not be evaluating the quality of our algorithm's pruning decisions per se, but rather the ability of back-propagation trained networks to recover from faults caused by non-ideal pruning decisions, as suggested by the conclusions of Segee & Carter (1991) and Mozer & Smolensky (1989a). Moreover, as Fahlman & Lebiere (1989) discuss, due to the 'herd effect' and 'moving target' phenomena in back-propagation learning, the remaining units in a network will simply shift course to account for whatever error signal is re-introduced as a result of a bad pruning decision or network fault.
So long as there are enough critical parameters to learn the function in question, a network can typically recover from faults with additional training. This limits the conclusions we can draw about the quality of our pruning criteria when we employ re-training.

Pruning algorithms, as comprehensively surveyed by Reed (1993), are a useful set of heuristics designed to identify and remove elements from a neural network which are either redundant or do not significantly contribute to the output of the network. This is motivated by the observed tendency of neural networks to overfit to the idiosyncrasies of their training data given too many trainable parameters or too few input patterns from which to generalize, as stated by Chauvin (1990).

Network architecture design and hyperparameter selection are inherently difficult tasks typically approached using a few well-known rules of thumb, e.g. various weight initialization procedures, choosing the width and number of layers, different activation functions, learning rates, momentum, etc. Some of this 'black art' appears unavoidable. For problems which cannot be solved using linear threshold units alone, Baum & Haussler (1989) demonstrate that there is no way to precisely determine the appropriate size of a neural network a priori given any random set of training instances. Using too few neurons seems to inhibit learning, and so in practice it is common to over-parameterize networks initially using a large number of hidden units and weights, and then prune or compress them afterwards if necessary. Of course, as the old saying goes, there's more than one way to skin a neural network."}, {"section_index": "3", "section_name": "2.1 NON-PRUNING BASED GENERALIZATION & COMPRESSION TECHNIQUES", "section_text": "In terms of removing units without re-training, what we discovered is that predicting the behavior of a network when a unit is to be pruned is very difficult, and most of the approximation techniques put forth in existing pruning algorithms do not fare well at all when compared to a brute-force search. To begin our discussion of how we arrived at our algorithm and set up our experiments, we review the existing literature.

The generalization behavior of neural networks has been well studied, and apart from pruning algorithms many heuristics have been used to avoid overfitting, such as dropout (Srivastava et al. (2014)), maxout (Goodfellow et al. (2013)), and cascade correlation (Fahlman & Lebiere (1989)), among others. Of course, while cascade correlation specifically tries to construct minimal networks, many techniques to improve network generalization do not explicitly attempt to reduce the total number of parameters or the memory footprint of a trained network per se.

Model compression often has benefits with respect to generalization performance and the portability of neural networks to operate in memory-constrained or embedded environments. Without explicitly removing parameters from the network, weight quantization allows for a reduction in the number of bytes used to represent each weight parameter, as investigated by Balzer et al. (1991), Dundar & Rose (1994), and Hoehfeld & Fahlman (1992).

A recently proposed method for compressing recurrent neural networks (Prabhavalkar et al. (2016)) uses the singular values of a trained weight matrix as basis vectors from which to derive a compressed hidden layer.
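As a concrete illustration of this family of methods, the following minimal NumPy sketch (our own, not Prabhavalkar et al.'s algorithm) factorizes a dense weight matrix through a truncated SVD, trading reconstruction error for a smaller parameter count.

import numpy as np

def low_rank_factors(W, r):
    # Keep the top-r singular directions so W (m x n) becomes two
    # factors with r * (m + n) parameters instead of m * n.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]     # shape (m, r)
    B = Vt[:r, :]            # shape (r, n)
    return A, B              # W is approximated by A @ B

W = np.random.randn(256, 512)
A, B = low_rank_factors(W, 32)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative error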
Oland & Raj (2015) successfully implemented network compression through weight quantization with an encoding step, while others such as Han et al. (2016) have tried to expand on this by adding weight-pruning as a preceding step to quantization and encoding.

In summary, we can say that there are many different ways to improve network generalization by altering the training procedure, the objective error function, or by using compressed representations of the network parameters. But these are not, strictly speaking, examples of techniques to reduce the number of parameters in a network. For this we must employ some form of pruning criteria."}, {"section_index": "4", "section_name": "2.2 PRUNING TECHNIQUES", "section_text": "If we wanted to continually shrink a neural network down to minimum size, the most straightforward brute-force way to do it is to individually switch each element off and measure the increase in total error on the training set. We then pick the element which has the least impact on the total error, and remove it. Rinse and repeat. This is extremely computationally expensive, given a reasonably large neural network and training set. Alternatively, we might accomplish this using any number of much faster off-the-shelf pruning algorithms, such as Skeletonization (Mozer & Smolensky (1989a)), Optimal Brain Damage (LeCun et al. (1989)), or later variants such as Optimal Brain Surgeon (Hassibi & Stork (1993)). In fact, we borrow much of our inspiration from these algorithms, with one major variation: instead of pruning individual weights, we prune entire neurons, thereby eliminating all of their incoming and outgoing weight parameters in one go, resulting in more memory saved, faster.

The algorithm developed for this paper is targeted at reducing the total number of neurons in a trained network, which is one way of reducing its computational memory footprint. This is often a desirable criterion to minimize in the case of resource-constrained or embedded devices, and it also allows us to probe the limitations of pruning down to the very last essential network elements. In terms of generalization as well, we can measure the error of the network on the test set as each element is sequentially removed from the network. With an oracle pruning algorithm, what we expect to observe is that the output of the network remains stable as the first few superfluous neurons are removed, and as we start to bite into the more crucial members of the function approximation, the error should start to rise dramatically. In this paper, the brute-force approach described at the beginning of this section serves as a proxy for an oracle pruning algorithm.

One reason to choose to rank and prune individual neurons as opposed to weights is that there are far fewer elements to consider. Furthermore, the removal of a single weight from a large network is a drop in the bucket in terms of reducing a network's core memory footprint. If we want to reduce the size of a network as efficiently as possible, we argue that pruning neurons instead of weights is more efficient computationally as well as practically in terms of quickly reaching a hypothetical target reduction in memory consumption. This approach also offers downstream applications a realistic expectation of the minimal increase in error resulting from the removal of a specified percentage of neurons. Such trade-offs are unavoidable, but performance impacts can be limited if a principled
approach is used to find the best candidate neurons for removal.

It is well known that too many free parameters in a neural network can lead to overfitting. Regardless of the number of weights used in a given network, as Segee & Carter (1991) assert, the representation of a learned function approximation is almost never evenly distributed over the hidden units, and thus the removal of any single hidden unit at random can actually result in a network fault. Mozer & Smolensky (1989b) argue that only a subset of the hidden units in a neural network actually latch on to the invariant or generalizing properties of the training inputs, and the rest learn to either mutually cancel each other's influence or begin overfitting to the noise in the data. We leverage this idea in the current work to rank all neurons in pre-trained networks based on their effective contributions to the overall performance. We then remove the unnecessary neurons to reduce the network's footprint. Through our experiments we not only concretely validate the theory put forth by Mozer & Smolensky (1989b), but we also successfully build on it to prune networks to 40 to 60% of their original size without any major loss in performance."}, {"section_index": "5", "section_name": "3 PRUNING NEURONS TO SHRINK NEURAL NETWORKS", "section_text": "As discussed in Section 1, our aim is to leverage the highly non-uniform distribution of the learning representation in pre-trained neural networks to eliminate redundant neurons, without focusing on individual weight parameters. Taking this approach enables us to remove all the weights (incoming and outgoing) associated with a non-contributing neuron at once. We would like to note here that in an ideal scenario, based on the neuron interdependency theory put forward by Mozer & Smolensky (1989a), one would evaluate all possible combinations of neurons to remove (one at a time, two at a time, three at a time and so forth) to find the optimal subset of neurons to keep. This is computationally unacceptable, and so we will only focus on removing one neuron at a time and explore more 'greedy' algorithms to do this in a more efficient manner.

One last thing to note here before moving forward is that the methods discussed in this section involve some non-trivial derivations which are beyond the scope of this paper. We are more focused on analyzing the implications of these methods for our understanding of neural network learning representations. However, a complete step-by-step derivation and proof of all the results presented is provided in the Supplementary Material as an Appendix.

The general approach taken to prune an optimally trained neural network here is to create a ranked list of all the neurons in the network based off of one of the 3 proposed ranking criteria: a brute-force approximation, a linear approximation and a quadratic approximation of the neuron's impact on the output of the network. We then test the effects of removing neurons on the accuracy and error of the network. All the algorithms and methods presented here are easily parallelizable as well."}, {"section_index": "6", "section_name": "3.1 BRUTE FORCE REMOVAL APPROACH", "section_text": "This is perhaps the most naive yet the most accurate method for pruning the network. It is also the slowest, and hence possibly unusable on large-scale neural networks with thousands of neurons. This method explicitly evaluates each neuron in the network. The idea is to manually check the effect of every single neuron on the output. This is done by running a forward propagation on the validation set K times (where K is the total number of neurons in the network), turning off exactly one neuron each time (keeping all other neurons active) and noting down the change in error. Turning a neuron off can be achieved by simply setting its output to 0. This results in all the outgoing weights from that neuron being turned off. This change in error is then used to generate the ranked list.
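A minimal sketch of this brute-force ranking in Python; set_gain (multiply neuron k's output by a scalar, 0 = off) and total_error (forward propagation over the validation set) are hypothetical helpers, not code from this paper.

def brute_force_ranking(net, K):
    deltas = []
    base = net.total_error()
    for k in range(K):
        net.set_gain(k, 0.0)                       # switch neuron k off
        deltas.append((net.total_error() - base, k))
        net.set_gain(k, 1.0)                       # switch it back on
    return sorted(deltas)                          # least-damaging neurons first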
"}, {"section_index": "7", "section_name": "3.2 TAYLOR SERIES REPRESENTATION OF ERROR", "section_text": "Let us denote the total error from the optimally trained neural network for any given validation dataset by E. E can be seen as a function of O, where O is the output of any general neuron in the network. This error can be approximated at a particular neuron's output (say O_k) by using the 2nd order Taylor Series as

E(O) \approx E(O_k) + (O - O_k) \left.\frac{\partial E}{\partial O}\right|_{O_k} + 0.5 \, (O - O_k)^2 \left.\frac{\partial^2 E}{\partial O^2}\right|_{O_k}.    (1)

When a neuron is pruned, its output O becomes 0. Setting O = 0 in equation 1 therefore gives the change in error caused by pruning neuron k:

\Delta E_k = E(0) - E(O_k) = -O_k \left.\frac{\partial E}{\partial O}\right|_{O_k} + 0.5 \, O_k^2 \left.\frac{\partial^2 E}{\partial O^2}\right|_{O_k},    (2)

where \Delta E_k is the change in the total error of the network when exactly one neuron (k) is turned off. Most of the terms in this equation are fairly easy to compute, as we have O_k already from the forward pass."}, {"section_index": "8", "section_name": "3.2.1 LINEAR APPROXIMATION APPROACH", "section_text": "We can use equation 2 to get the linear error approximation of the change in error due to the kth neuron being turned off and represent it as \Delta E_k^1 as follows:

\Delta E_k^1 = -O_k \left.\frac{\partial E}{\partial O}\right|_{O_k}.    (3)

The derivative term above is the first-order gradient, which represents the change in error with respect to the output of a given neuron. This term can be collected during back-propagation. As we shall see further in this section, linear approximations are not reliable indicators of change in error, but they provide us with an interesting basis for comparison with the other methods discussed in this paper."}, {"section_index": "9", "section_name": "3.2.2 QUADRATIC APPROXIMATION APPROACH", "section_text": "As above, we can use equation 2 to get the quadratic error approximation of the change in error due to the kth neuron being turned off and represent it as \Delta E_k^2 as follows:

\Delta E_k^2 = -O_k \left.\frac{\partial E}{\partial O}\right|_{O_k} + 0.5 \, O_k^2 \left.\frac{\partial^2 E}{\partial O^2}\right|_{O_k}.    (4)

The additional second-order gradient term appearing above represents the quadratic change in error with respect to the output of a given neuron. This term can be generated by performing back-propagation using second-order derivatives. Collecting these quadratic gradients involves some non-trivial mathematics, the entire step-by-step derivation procedure of which is provided in the Supplementary Material as an Appendix."}, {"section_index": "10", "section_name": "3.3 PROPOSED PRUNING ALGORITHM", "section_text": "Figure 1 shows a random error function plotted against the output of any given neuron. Note that this figure is for illustration purposes only. The error function is minimized at a particular value of the neuron output, as can be seen in the figure. The process of training a neural network is essentially the process of finding these minimizing output values for all the neurons in the network. Pruning this particular neuron (which translates to getting a zero output from it) will result in a change in the total overall error. This change in error is represented by the distance between the original minimum error (shown by the dashed line) and the top red arrow. This neuron is clearly a bad candidate for removal since removing it will result in a huge error increase.

The straight red line in the figure represents the first-order approximation of the error using the Taylor Series as described before, while the parabola represents a second-order approximation.
It can be clearly seen that the second-order approximation is a much better estimate of the change in error.

One thing to note here is that it is possible in some cases that some thresholding is required when trying to approximate the error using the 2nd order Taylor Series expansion. These cases might arise when the parabolic approximation undergoes a steep slope change. To take into account such cases, mean and median thresholding were employed, where any change above a certain threshold was assigned the mean or median value respectively.

Figure 1: The intuition behind 1st & 2nd order neuron pruning decisions.

Two pruning algorithms are proposed here. They are different in the way the neurons are ranked, but both of them use \Delta E_k, the approximation of the change in error, as the basis for the ranking. \Delta E_k can be calculated using the brute force method, or one of the two Taylor Series approximations discussed previously.

The first step in both the algorithms is to decide a stopping criterion. This can vary depending on the application, but some intuitive stopping criteria can be: maximum number of neurons to remove, percentage scaling needed, maximum allowable accuracy drop, etc."}, {"section_index": "11", "section_name": "3.3.1 ALGORITHM I: SINGLE OVERALL RANKING", "section_text": "The complete algorithm is shown in Algorithm 1. The idea here is to generate a single ranked list based on the values of \Delta E_k. This involves a single pass of second-order back-propagation (without weight updates) to collect the gradients for each neuron. The neurons from this rank-list (with the lowest values of \Delta E_k) are then pruned according to the stopping criterion decided. We note here that this algorithm is intentionally naive and is used for comparison only."}, {"section_index": "12", "section_name": "3.3.2 ALGORITHM II: ITERATIVE RE-RANKING", "section_text": "In this greedy variation of the algorithm (Algorithm 2), after each neuron removal, the remaining network undergoes a single forward and backward pass of second-order back-propagation (without weight updates) and the rank list is formed again. Hence, each removal involves a new pass through the network. This method is computationally more expensive but takes into account the dependencies the neurons might have on one another, which would lead to a change in error contribution every time a dependent neuron is removed. A sketch of this iterative re-ranking loop is given below.

Data: optimally trained network, training set. Result: a pruned network.
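The following Python sketch shows how the ranking criteria of Section 3.2 plug into the greedy loop of Algorithm 2; collect_gradients (one forward/backward pass returning each remaining neuron's output O_k and the gradients dE/dO and d2E/dO2) and prune are hypothetical helpers, not the authors' code.

import numpy as np

def delta_e(o, g1, g2, order=2):
    # Taylor estimates (3)/(4) of the error change when a neuron is pruned.
    return -o * g1 + (0.5 * o**2 * g2 if order == 2 else 0.0)

def iterative_rerank_prune(net, n_remove, order=2):
    for _ in range(n_remove):
        o, g1, g2 = net.collect_gradients()   # one pass, no weight updates
        scores = delta_e(o, g1, g2, order)
        k = int(np.argmin(scores))            # lowest estimated error change
        net.prune(k)                          # drop its incoming/outgoing weights
    return net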
"}, {"section_index": "13", "section_name": "4.1 EXAMPLE REGRESSION PROBLEM", "section_text": "This problem serves as a quick example to demonstrate many of the phenomena described in previous sections. We trained two networks to learn the cosine function, with one input and one output. This is a task which requires no more than 11 sigmoid neurons to solve entirely, and in this case we don't care about overfitting because the cosine function has a precise definition. Furthermore, the cosine function is a good toy example because it is a smooth continuous function and, as demonstrated by Nielsen (2015), if we were to tinker directly with the weights and bias parameters of the network, we could allocate individual units within the network to be responsible for constrained ranges of inputs, similar to a basis spline function with many control points. This would distribute the learned function approximation evenly across all hidden units, and thus we have presented the network with a problem in which it could productively use as many hidden units as we give it. In this case, a pruning algorithm would observe a fairly consistent increase in error after the removal of each successive unit. In practice however, regardless of the number of experimental trials, this is not what happens. The network will always use 10-11 hidden units and leave the rest to cancel each other's influence.

Figure 2: Degradation in squared error after pruning a two-layer network trained to compute the cosine function (left network: 2 layers, 10 neurons each, 1 output, logistic sigmoid activation, starting test accuracy: 0.9999993; right network: 2 layers, 50 neurons each, 1 output, logistic sigmoid activation, starting test accuracy: 0.9999996).

Figure 2 shows two graphs. Both graphs demonstrate the use of the iterative re-ranking algorithm and the comparative performance of the brute-force pruning method (in blue), the first order method (in green), and the second order method (in red). The graph on the left shows the performance of these algorithms starting from a network with two layers of 10 neurons (20 total), and the graph on the right shows a network with two layers of 50 neurons (100 total).

In the left graph, we see that the brute-force method shows a graceful degradation, and the error only begins to rise sharply after 50% of the total neurons have been removed. The error is basically constant up to that point. In the first and second order methods, we see evidence of poor decision making in the sense that both made mistakes early on, which disrupted the output function approximation. The first order method made a large error early on, though we see after a few more neurons were removed this error was corrected somewhat (though it only got worse from there). This is direct evidence of the lack of fault tolerance in a trained neural network. This phenomenon is even more starkly demonstrated in the second order method. After making a few poor neuron removal decisions in a row, the error signal rose sharply, and then went back to zero after the 6th neuron was removed. This is due to the fact that the neurons it chose to remove were trained to cancel each other's influence within a localized part of the network. After the entire group was eliminated, the approximation returned to normal. This can only happen if the output function approximation is not evenly distributed over the hidden units in a trained network.

This phenomenon is even more starkly demonstrated in the graph on the right. Here we see the first order method got 'lucky' in the beginning and made decent decisions up to about the 40th removed neuron.
The second order method had a small error in the beginning, which it recovered from gracefully, and proceeded to pass the 50 neuron point before finally beginning to unravel. The brute force method, in sharp contrast, shows little to no increase in error at all until 90% of the neurons in the network have been obliterated. Clearly first and second order methods have some value in that they do not make completely arbitrary choices, but the brute force method is far better at this task.

This also demonstrates the sharp dualism in neuron roles within a trained network. These networks were trained to near-perfect precision, and each pruning method was applied without any re-training of any kind. Clearly, in the case of the brute force or oracle method, up to 90% of the network can be completely extirpated before the output approximation even begins to show any signs of degradation. This would be impossible if the learning representation were evenly or equitably distributed. Note, for example, that the degradation point in both cases is approximately the same. This example is not a real-world application of course, but it brings into very clear focus the kind of phenomena we will discuss in the following sections."}, {"section_index": "14", "section_name": "4.2 RESULTS ON MNIST DATASET", "section_text": "For all the results presented in this section, the MNIST database of Handwritten Digits by LeCun & Cortes (2010) was used. It is worth noting that due to the time taken by the brute force algorithm, we used a 5,000 image subset of the MNIST database in which we have normalized the pixel values between 0 and 1.0, and compressed the image sizes to 20x20 images rather than 28x28, so the starting test accuracy reported here appears higher than those reported by LeCun et al. We do not believe that this affects the interpretation of the presented results because the basic learning problem does not change with a larger dataset or input dimension.

The network architecture in this case consisted of 1 layer, 100 neurons, 10 outputs, logistic sigmoid activations, and a starting test accuracy of 0.998."}, {"section_index": "15", "section_name": "4.3.1 SINGLE OVERALL RANKING ALGORITHM", "section_text": "We first present the results for a single-layer neural network in Figure 3, using the Single Overall Ranking algorithm (Algorithm 1) as proposed in Section 3. (We again note that this algorithm is intentionally naive and is used for comparison only. Its performance should be expected to be poor.) After training, each neuron is assigned its permanent ranking based on the three criteria discussed previously: a brute force 'ground truth' ranking, and two approximations of this ranking using first and second order Taylor estimations of the change in network output error resulting from the removal of each neuron.

An interesting observation here is that with only a single layer, no criterion for ranking the neurons in the network (brute force or the two Taylor Series variants) using Algorithm 1 emerges superior, indicating that the 1st and 2nd order Taylor Series methods are actually reasonable approximations of the brute force method under certain conditions.
4.3.2 ITERATIVE RE-RANKING ALGORITHM

In Figure 4 we present our results using Algorithm 2 (the iterative re-ranking algorithm), in which all remaining neurons are re-ranked after each successive neuron is switched off. We compute the same brute force rankings and Taylor series approximations of error deltas over the remaining active neurons in the network after each pruning decision. This is intended to account for the effects of cancelling interactions between neurons.

Figure 4: Degradation in squared error (left) and classification accuracy (right) after pruning a single-layer network using the iterative re-ranking algorithm (Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998)

There are 2 key observations here. Using the brute force ranking criterion, almost 60% of the neurons in the network can be pruned away without any major loss in performance. The other noteworthy observation here is that the 2nd order Taylor Series approximation of the error performs consistently better than its 1st order version in most situations, though Figure 21 is a poignant counter-example.

4.3.3 VISUALIZATION OF ERROR SURFACE & PRUNING DECISIONS

As explained in Section 3, these graphs are a visualization of the error surface of the network output with respect to the neurons chosen for removal using each of the 3 ranking criteria, represented in intervals of 10 neurons. In each graph, the error surface of the network output is displayed in log space (left) and in real space (right) with respect to each candidate neuron chosen for removal. We create these plots during the pruning exercise by picking a neuron to switch off, and then multiplying its output by a scalar gain value α which is adjusted from 0.0 to 10.0 with a step size of 0.001. When the value of α is 1.0, this represents the unperturbed neuron output learned during training. Between 0.0 and 1.0, we are graphing the literal effect of turning the neuron off (α = 0), and when α > 1.0 we are simulating a boosting of the neuron's influence in the network, i.e. inflating the value of its outgoing weight parameters.
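As an illustration, the gain sweep used to draw these error surfaces can be sketched as follows, reusing the hypothetical `forward(gains)` helper from above; the plotting details are our own assumptions rather than the original code.

```python
import numpy as np

def gain_sweep(forward, n_neurons, neuron, step=0.001):
    """Trace the network's squared error as one neuron's output gain is
    swept from 0 (off) through 1 (trained value) up to 10 (boosted)."""
    gains = np.ones(n_neurons)
    alphas = np.arange(0.0, 10.0 + step, step)
    errors = []
    for a in alphas:
        gains[neuron] = a                # scale this neuron's output by alpha
        errors.append(forward(gains))
    gains[neuron] = 1.0                  # restore the trained output
    return alphas, np.array(errors)
```

A curve that stays flat between α = 1 and α = 0 marks a good pruning candidate; a curve that rises steeply marks a neuron the network output is sensitive to.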
We graph the effect of boosting the neuron's output to demonstrate that for certain neurons in the network, even doubling, tripling, or quadrupling the scalar output of the neuron has no effect on the overall error of the network, indicating the remarkable degree to which the network has learned to ignore the value of certain parameters. In other cases, we can get a sense of the sensitivity of the network's output to the value of a given neuron when the curve rises steeply after the red 1.0 line. This indicates that the learned values of the parameters emanating from a given neuron are relatively important, and this is why we should ideally see sharper upticks in the curves for the later-removed neurons in the network, that is, when the neurons crucial to the learning representation start to be picked off. Some very interesting observations can be made in each of these graphs.

4.3.4 VISUALIZATION OF BRUTE FORCE PRUNING DECISIONS

Figure 5: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute force criterion; (Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998)

In Figure 5, we notice how low to the floor and flat most of the curves are. It is not until the 90th removed neuron that we see a higher curve with a more convex shape (clearly a more sensitive, influential piece of the network).

4.3.5 VISUALIZATION OF 1ST ORDER APPROXIMATION PRUNING DECISIONS

Remember that lower is better in terms of the height of the curve, and that minimal (or negative) horizontal change between the vertical red line at 1.0 (neuron on, α = 1.0) and 0.0 (neuron off, α = 0.0) is indicative of a good candidate neuron to prune, i.e. there will be minimal effect on the network output when the neuron is removed.
It can be seen in Figure 6 that most choices seem to have flat or negatively sloped curves, indicating that the first order approximation seems to be pretty good, but examining the brute force choices shows they could be better.

Figure 6: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the 1st order Taylor Series error approximation criterion; (Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998)

4.3.6 VISUALIZATION OF 2ND ORDER APPROXIMATION PRUNING DECISIONS

Figure 7: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the 2nd order Taylor Series error approximation criterion; (Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998)

The method in Figure 7 looks similar to the brute force method choices, though clearly not as good (they're more spread out). Notice the difference in convexity between the 2nd and 1st order method choices. It's clear that the first order method is fitting a line and the 2nd order method is fitting a parabola in their approximation.

4.4 TWO-LAYER NETWORK

The network architecture in this case consisted of 2 layers, 50 neurons per layer, 10 outputs, logistic sigmoid activations, and a starting test accuracy of 1.000.

4.4.1 SINGLE OVERALL RANKING ALGORITHM

Figure 8 shows the pruning results for Algorithm 1 on a 2-layer network. The ranking procedure is identical to the one used to generate Figure 3. (We again note that this algorithm is intentionally naive and is used for comparison only. Its performance should be expected to be poor.)

Unsurprisingly, a 2-layer network is harder to prune because a single overall ranking will never capture the interdependencies between neurons in different layers. It makes sense that this is worse than the performance on the 1-layer network, even if this method is already known to be bad, and we'd likely never use it in practice.

Figure 8: Degradation in squared error (left) and classification accuracy (right) after pruning a 2-layer network using the Single Overall Ranking algorithm; (Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.000)
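The line-versus-parabola distinction noted above can be made explicit. Writing the error as a function of a single neuron's gain α (as defined in Section 4.3.3) and expanding around the trained value α = 1, removal corresponds to evaluating at α = 0; the following is a sketch of how the two estimates arise under that assumption:

\[ \Delta E = E(0) - E(1) \approx -\left.\frac{\partial E}{\partial \alpha}\right|_{\alpha=1} \quad \text{(1st order: a line)} \]

\[ \Delta E \approx -\left.\frac{\partial E}{\partial \alpha}\right|_{\alpha=1} + \frac{1}{2}\left.\frac{\partial^2 E}{\partial \alpha^2}\right|_{\alpha=1} \quad \text{(2nd order: a parabola)} \]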
4.4.2 ITERATIVE RE-RANKING ALGORITHM

Figure 9 shows the results from using Algorithm 2 on a 2-layer network. We compute the same brute force rankings and Taylor series approximations of error deltas over the remaining active neurons in the network after each pruning decision used to generate Figure 4. Again, this is intended to account for the effects of cancelling interactions between neurons.

Figure 9: Degradation in squared error (left) and classification accuracy (right) after pruning a 2-layer network using the iterative re-ranking algorithm; (Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.000)

It is clear that it becomes harder to remove neurons 1-by-1 with a deeper network (which makes sense because the neurons have more interdependencies in a deeper network), but we see an overall better performance with the 2nd order method vs. the 1st order, except for the first 20% of the neurons (but this doesn't seem to make much difference for classification accuracy).

Perhaps a more important observation here is that even with a more complex network, it is possible to remove up to 40% of the neurons with no major loss in performance, which is clearly illustrated by the brute force curve. This shows the clear potential of an ideal pruning technique and also shows how inconsistent 1st and 2nd order Taylor Series approximations of the error can be as ranking criteria.

4.4.3 VISUALIZATION OF ERROR SURFACE & PRUNING DECISIONS

As seen in the case of the single-layer network, these graphs are a visualization of the error surface of the network output with respect to the neurons chosen for removal using each algorithm, represented in intervals of 10 neurons.

4.4.4 VISUALIZATION OF BRUTE FORCE PRUNING DECISIONS

Figure 10: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute force criterion; (Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.000)

In Figure 10, it is clear why these neurons got chosen: their graphs clearly show little change when a neuron is removed, are mostly near the floor, and show the convex behaviour of the error surface, which argues for the rationalization of using 2nd order methods to estimate the difference in error when they are turned off.
4.4.5 VISUALIZATION OF 1ST ORDER APPROXIMATION PRUNING DECISIONS

Figure 11: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the 1st order Taylor Series error approximation criterion; (Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.000)

Drawing a flat line at the point of each neuron's intersection with the red vertical line (no change in gain) shows that the 1st derivative method is actually accurate for the estimation of the change in error in these cases, but still ultimately leads to poor decisions.

4.4.6 VISUALIZATION OF 2ND ORDER APPROXIMATION PRUNING DECISIONS

Figure 12: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the 2nd order Taylor Series error approximation criterion; (Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.000)

Clearly these neurons are not overtly poor candidates for removal (the error doesn't change much between 1.0 and the zero-crossing on the left-hand side), but they could be better (as described above in the brute force criterion discussion).

4.5 PRUNING SUB-OPTIMALLY TRAINED NETWORKS

In our experiments thus far we have tacitly assumed that we start with a network which has learned an "optimal" representation of the training objective, i.e. it has been trained to the point where we accept its performance on the test set. Here we explore what happens when we prune with a sub-optimal starting network.

If the assumptions of this paper regarding the nature of neural network learning are correct, we expect that two processes are essentially at work during back-propagation training. First, we expect that the neurons which directly participate in the fundamental learning representation (even if redundantly) work together to reduce error on the training data. Second, we expect that neurons which do not directly participate in the learning representation work to cancel each other's negative influence. Furthermore, we expect that these two groups are essentially distinct, as evinced by the fact that multiple neurons can often be removed as a group with little to no effect on the network output. Some non-trivial portion of the training time, then, is spent doing work which has nothing intrinsically to do with the learning representation and essentially functions as noise cancellation.
If this is the case, when we attempt to prune a network which has not fully canceled the noisy influence of extraneous or redundant units, we might expect to see the error actually improve after removing a few bad apples. This is in fact what we observe, as demonstrated in the following experiments.

For each experiment in this section we trained with the full MNIST training set (LeCun & Cortes (2010)), uncompressed and without any data normalization. We trained three different networks to learn to distinguish a single handwritten digit from the rest of the data. The network architectures were each composed of 784 inputs, 1 hidden layer with 100 neurons, and 2 soft-max outputs; one to say yes, and the other to say no. These networks were trained to distinguish the digits 0, 1, and 2, and their respective starting accuracies were a sub-optimal 0.9757, 0.9881, and 0.9513. Finally, we only consider the iterative re-ranking algorithm, as the single overall ranking algorithm is clearly nonviable.

4.5.1 MNIST SINGLE DIGIT CLASSIFICATION: DIGIT 0

Figure 13: Degradation in squared error after pruning a single-layer network trained to do a one-versus-all classification of the digit 0 using the iterative re-ranking algorithm

Figure 13 shows the degradation in squared error after removing neurons from a network trained to distinguish the digit 0. What we observe is that the first and second order methods both fail in different ways, though clearly the second order method makes better decisions overall. The first order method explodes spectacularly in the first few iterations. The brute force method, in stark contrast, actually improves in the first few iterations, and remains essentially flat until around the 60% mark, at which point it begins to gradually increase and meet the other curves.

The behavior of the brute force method here demonstrates that the network was essentially working to cancel the effect of a few bad neurons when the training convergence criteria were met, i.e. the network was no longer able to make progress on the training set. After removing these neurons during pruning, the output improved. We can investigate this by looking at the error surface with respect to the neurons chosen for removal by each method in turn. Below in Figure 14 is the graph of the brute force method.

Figure 14: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute force iterative re-ranking removal criterion

Figure 14 shows an interesting phenomenon, which we will see in later experiments as well. The high blue curve corresponding to neuron 0 is negatively sloped in the beginning, and clearly after removing this neuron the output will improve. The rest of the curves, in correspondence with the squared error degradation curve above, are mostly flat and tightly layered together, indicating that they are good neurons to remove.

In Figure 15 below, we observe a stark contrast to this. The curves corresponding to neurons 0 and 10 are mostly flat, and fairly lower than the rest, though clearly a mistake was made early on and the rest of the curves are clearly bad choices.
In all of these cases, however, we see that the curves are easily approximated with a straight line, and so the first order method may have been fairly accurate in its predictions, even though it still made poor decisions.

Figure 15: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the first-order iterative re-ranking removal criterion

Figure 16: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the second-order iterative re-ranking removal criterion

Figure 15 is an example of how things can go south once a few bad mistakes are made at the outset. Figure 16 shows a much better set of choices made by the second order method, though clearly not as good as the brute force method. The log-space plots make it a bit easier to see the difference between the brute force and second order methods in Figures 14 and 16, respectively.

4.5.2 MNIST SINGLE DIGIT CLASSIFICATION: DIGIT 1

Examining Figure 17, we see a much starker example of the previous phenomenon, in which the brute force method continues to improve the performance of the network after removing 80% of the neurons in the network. The first and second order methods fail early and proceed in fits and starts (clearly demonstrating evidence of interrelated groups of noise-canceling neurons), and never fully recover. It should be noted that it would be impossible to see curves like this if neural networks
distributed the learning representation evenly or equitably over their hidden units.

Figure 17: Degradation in squared error after pruning a single-layer network trained to do a one-versus-all classification of the digit 1 using the iterative re-ranking algorithm

One of the most striking things about the blue curve in Figure 17 is the fact that the network never drops below its starting error until it crosses the 80% mark, indicating that only 20% of the neurons in this network are actually essential to learning the training objective. In this sense, we can only wonder how much of the training time was spent winnowing the error out of the remaining 80% of the network.

Figure 18: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute force iterative re-ranking removal criterion

In Figures 18, 19 and 20 we can examine the choices made by the respective methods. The brute force method serves as our example of a near-optimal pruning regimen, and the rest are first and second order approximations of this. Small differences, clearly, can lead to large effects on the network output, as shown in Figure 17.

Figure 19: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the first-order iterative re-ranking removal criterion

Figure 20: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the second-order iterative re-ranking removal criterion

4.5.3 MNIST SINGLE DIGIT CLASSIFICATION: DIGIT 2

Figure 21 is an interesting case because it shatters our confidence in the reliability of the second order method to make good pruning decisions, and further demonstrates the phenomenon of how much the error can improve if the right neurons are removed after training gets stuck. In this case, though still a poor performance overall, the first order method vastly outperforms the second order method.

Figure 21: Degradation in squared error after pruning a single-layer network trained to do a one-versus-all classification of the digit 2 using the iterative re-ranking algorithm

Figure 22 shows us a clear example of the first element to remove having a negative error slope, and improving the output as a result. The rest of the pruning decisions are reasonable. Comparing with the blue curve in Figure 21, we see the correspondence between the first pruning decision improving the output, and the remaining pruning decisions keeping the output fairly flat. Clearly, however, there isn't much room to get worse given our starting point with a sub-optimal network, and we see that the ending sum of squared errors is not much higher than the starting point.
At the same time we can still see the contrast in performance if we make optimal pruning decisions, and most of the neurons in this network were clearly doing nothing.

Figure 22: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute force iterative re-ranking removal criterion

Figure 23: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the first-order iterative re-ranking removal criterion

In Figure 23, we see a mixed bag in which the decisions are clearly sub-optimal, though much better than Figure 24, in which we can observe how a bad first decision essentially ruined the network for good. The jagged edges of the red curve in Figure 21 correspond with the positive and negative
slopes of the cluster of bad pruning decisions in Figure 24. Once again, these are not necessarily bad decisions, but the starting point is already bad and this cannot be recovered without re-training the network.

Figure 24: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the second-order iterative re-ranking removal criterion

4.5.4 ASIDE: IMPLICATIONS OF THIS EXPERIMENT

From the three examples above, we see that in each case, starting from a sub-optimal network, a brute force removal technique consistently improves performance for the first few pruning iterations, and the sum of squared errors does not degrade beyond the starting point until around 60-80% of the neurons have been removed. This is only possible if we have an essentially strict dichotomy between the roles of different neurons during training. If the network needs only 20-40% of the neurons it began with, the training process is essentially dominated by the task of canceling the residual noise of redundant neurons. Furthermore, the network can get stuck in training with redundant units and distort the final output. This is strong evidence of our thesis that the learning representation is neither equitably nor evenly distributed and that most of the neurons which do not directly participate in the learning representation can be removed without any retraining.

4.6 EXPERIMENTS ON TOY DATASETS

As can be seen from the experiments on MNIST, even though the 2nd-order approximation criterion is consistently better than 1st-order, its performance is not nearly as good as brute force based ranking, especially beyond the first layer. What is interesting to note is that from some other experiments conducted on toy datasets (predicting whether a given point would lie inside a given shape on the Cartesian plane), the performance of the 2nd-order method was found to be exceptionally good and produced results very close to the brute force method. The 1st-order method, as expected, performed poorly here as well. Some of these results are illustrated in Figure 25.
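As a concrete illustration of this kind of toy problem, a point-in-shape dataset can be generated along the following lines; the diamond shape, sampling range, and labels here are our own assumptions, not the exact setup used for Figure 25.

```python
import numpy as np

def make_diamond_dataset(n=10000, seed=0):
    """Label random points on the plane by whether they fall inside the
    diamond |x| + |y| <= 1 (a toy binary classification problem)."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(-2.0, 2.0, size=(n, 2))   # points on the Cartesian plane
    labels = (np.abs(points[:, 0]) + np.abs(points[:, 1]) <= 1.0).astype(int)
    return points, labels
```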
Figure 25: Degradation in squared error (left) and classification accuracy (right) after pruning a 2-layer network using the iterative re-ranking algorithm on a toy "diamond" shape dataset (top) and a toy "random shape" dataset (below); (Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.992 (diamond); 0.986 (random shape))

CONCLUSIONS & FUTURE WORK

In conclusion, we must first re-assert that we do not present this work as a bench-marking study of the algorithm we derived and tested. We have merely used this algorithm as a jumping off point to investigate the nature of learning representations in neural networks. What we discovered is that first and second order methods do not make particularly good pruning decisions, and can get hopelessly lost after making a bad pruning decision resulting in a network fault. Furthermore, the brute-force algorithm does surprisingly well, despite being computationally expensive. This method does so well, in fact, that we argue that further investigation is warranted to make this algorithm computationally tractable, though we do not speculate on how that should be done here.

We also observed strong evidence for the hypotheses of Mozer & Smolensky (1989a) regarding the "dualist" nature of hidden units, i.e. that learning representations are divided between units which either participate in the output approximation or learn to cancel each other's influence. This suggests that neural networks may in fact learn a minimal network implicitly, though we cannot say for sure that this is the case without further investigation. A necessary experiment to this end would be to compare the size of a network constructed using cascade correlation (Fahlman & Lebiere (1989)) to the results described herein.

We have presented a novel algorithm for pruning whole neurons from a trained neural network using a second-order Taylor series approximation of the change in error resulting from the removal of a given neuron as a pruning criterion. We compared this method to a first order method and a brute-force serial removal method which exhaustively found the next best single neuron to remove at each stage. Our algorithm relies on a combination of assumptions similar to the ones made by Mozer & Smolensky (1989a) and LeCun et al. (1989) in the formulation of the Skeletonization and Optimal Brain Damage algorithms.

First, we assumed that the error function with respect to each individual neuron can be approximated with a straight line or, more precisely, with a parabola. Second, for second derivative terms we consider only the diagonal elements of the Hessian matrix, i.e. we assume that each neuron-weight connection can be treated independently of the other elements in the network. Third, we assumed that pruning could be done in a serial fashion in which we find the single least productive element in the network, remove it, and move on. We found that all of these assumptions are deeply flawed in the sense that the true relevance of a neuron can only be partially approximated by a first or second order method, and only at certain stages of the pruning process.
For most problems, these methods can usually remove between 10-30% of the neurons in a trained network, but beyond this point their reliability breaks down. For certain problems, none of the described methods seem to perform very well, though for obvious reasons the brute-force method always exhibits the best results. The reason for this is that the error function with respect to each hidden unit is more complex than a simple second-order Taylor series can approximate. Furthermore, we have not directly taken into account the interdependence of elements within a network, though the work of Hassibi & Stork (1993) could provide some guidance in this regard. This is another critical issue to investigate in the future.

We have observed that pruning whole neurons from an optimally trained network without major loss in performance is not only possible but also enables compressing networks to 40-70% of their original size, which is of great importance in constrained memory environments like embedded devices. We cite the results of our experiments using the brute force criterion as evidence of this conclusion. However expensive, it would be extremely easy to parallelize this method, or potentially approximate it using a subset of the training data to decide which neurons to prune. This avoids the problem of trying to approximate the importance of a unit and potentially making a mistake.

It would also be interesting to see how these methods perform on deeper networks and on some other popular and real world datasets. In our case, on the MNIST dataset, we observed that it was more difficult to prune neurons from a deeper network than from one with a single layer. We should expect this trend to continue as networks get deeper and deeper, which also calls into further question the reliability of the described first and second order methods. We did investigate the order in which neurons were plucked from each layer of the networks, and we found that the brute force method primarily removes neurons from the deepest layer of the network first, but there was no obvious pattern in layer preference for the other two methods.

Re-training may help in this regard. We freely admit that our algorithm does not use re-training to recover from errors made in pruning decisions. We argue that evaluating a network pruning algorithm using re-training does not allow us to make fair comparisons between the kinds of decisions made by these algorithms. Neural networks are very good at recovering from the removal of individual elements with re-training, and so this compensates for sub-optimal pruning criteria.

Our experiments using the visualization of error surfaces and pruning decisions concretely establish the fact that not all neurons in a network contribute to its performance in the same way, and the observed complexity of these functions demonstrates the limitations of the approximations we used.

Finally, we encourage the readers of this work to take these results into consideration when making decisions as to which methods to use to improve network generalization or compress their models. It should be remembered that various heuristics may perform well in practice for reasons which are in fact orthogonal to the accepted justifications given by their proponents.

REFERENCES

Wolfgang Balzer, Masanobu Takahashi, Jun Ohta, and Kazuo Kyuma. Weight quantization in Boltzmann machines. Neural Networks, 4(3):405-409, 1991.

Eric B Baum and David Haussler. What size net gives valid generalization? Neural Computation, 1(1):151-160, 1989.

Scott E Fahlman and Christian Lebiere. The cascade-correlation learning architecture. 1989.

Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149v5, 2016.
Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann, 1993.

Markus Hoehfeld and Scott E Fahlman. Learning with limited numerical precision using the cascade-correlation algorithm. IEEE Transactions on Neural Networks, 3(4):602-611, 1992.

Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010.

Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 89, 1989.

Michael C Mozer and Paul Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Advances in Neural Information Processing Systems, 1989.

Michael A Nielsen. Neural Networks and Deep Learning. Determination Press, 2015.

Anders Oland and Bhiksha Raj. Reducing communication overhead in distributed learning by an order of magnitude (almost). In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2219-2223, 2015.

Rohit Prabhavalkar, Ouais Alsharif, Antoine Bruguier, and Ian McGraw. On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5970-5974. IEEE, 2016.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Name and network definitions:

\[ x_i^{(m)} = \sum_j w_{ji}^{(m+1)}\,o_j^{(m+1)}, \qquad o_i^{(m)} = \sigma\big(x_i^{(m)}\big) \]

Superscripts represent the index of the layer of the network in question, with 0 representing the output layer. E is the squared-error network cost function. o_i^{(m)} is the ith output in layer m, generated by the activation function σ, which in this paper is the standard logistic sigmoid. x_i^{(m)} is the input of the ith neuron in layer m, and w_{ji}^{(m+1)} is the weight connecting the output of the jth neuron in the m + 1 layer to the input of the ith neuron in the mth layer.

A.1 FIRST AND SECOND DERIVATIVES

The first and second derivatives of the cost function with respect to the outputs, where t_i is the target for output i:

\[ \frac{\partial E}{\partial o_i^{(0)}} = o_i^{(0)} - t_i, \qquad \frac{\partial^2 E}{\partial o_i^{(0)\,2}} = 1 \]

The first and second derivatives of the sigmoid function, in forms depending only on the output:

\[ \sigma'(x) = \sigma(x)\big(1 - \sigma(x)\big), \qquad \sigma''(x) = \sigma'(x)\big(1 - 2\sigma(x)\big) \]

Figure 26: A computational graph of a simple feed-forward network illustrating the naming of the different variables, where σ(·) is the nonlinearity, MSE is the mean-squared error cost function and E is the overall loss.
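As an aside, both sigmoid identities are easy to verify numerically; the following small finite-difference check is our own illustration, not part of the original derivation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Finite-difference check of sigma'(x) = s(1-s) and sigma''(x) = sigma'(x)(1-2s).
x, h = 0.3, 1e-4
s = sigmoid(x)
d1 = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)         # ~ sigma'(x)
d2 = (sigmoid(x + h) - 2 * s + sigmoid(x - h)) / h ** 2  # ~ sigma''(x)
assert np.isclose(d1, s * (1 - s), atol=1e-6)
assert np.isclose(d2, s * (1 - s) * (1 - 2 * s), atol=1e-5)
```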
The second derivative of the sigmoid is easily derived from the first derivative. Writing σ'(x) = f(x)g(x) with f(x) = σ(x) and g(x) = 1 − σ(x), the product rule gives

\[ \sigma''(x) = f'(x)g(x) + f(x)g'(x) = \sigma'(x)\big(1 - \sigma(x)\big) - \sigma(x)\sigma'(x) = \sigma'(x)\big(1 - 2\sigma(x)\big) \]

A.1.1 OUTPUT LAYER DERIVATIVES

Derivative of the error with respect to the ith neuron's input x_i^{(0)} in the output layer:

\[ \frac{\partial E}{\partial x_i^{(0)}} = \frac{\partial E}{\partial o_i^{(0)}}\,\frac{\partial o_i^{(0)}}{\partial x_i^{(0)}} = \frac{\partial E}{\partial o_i^{(0)}}\,\sigma'\big(x_i^{(0)}\big) \]

Second derivative of the error with respect to the ith neuron's input in the output layer:

\[ \frac{\partial^2 E}{\partial x_i^{(0)\,2}} = \frac{\partial}{\partial x_i^{(0)}}\left(\frac{\partial E}{\partial o_i^{(0)}}\,\sigma'\big(x_i^{(0)}\big)\right) = \sigma'\big(x_i^{(0)}\big)^2\,\frac{\partial^2 E}{\partial o_i^{(0)\,2}} + \sigma''\big(x_i^{(0)}\big)\,\frac{\partial E}{\partial o_i^{(0)}} \]

First derivative of the error with respect to a single input contribution c_{ji}^{(0)} = w_{ji}^{(1)} o_j^{(1)} from neuron j to neuron i: since x_i^{(0)} = Σ_j c_{ji}^{(0)},

\[ \frac{\partial E}{\partial c_{ji}^{(0)}} = \frac{\partial E}{\partial x_i^{(0)}} \]

Second derivative of the error with respect to a single input contribution c_{ji}^{(0)}:

\[ \frac{\partial^2 E}{\partial c_{ji}^{(0)\,2}} = \frac{\partial^2 E}{\partial x_i^{(0)\,2}} \]

A.1.2 HIDDEN LAYER DERIVATIVES

The first derivative of the error with respect to a neuron with output o_j^{(1)} in the first hidden layer, summing over all partial derivative contributions to the output layer:

\[ \frac{\partial E}{\partial o_j^{(1)}} = \sum_i w_{ji}^{(1)}\,\frac{\partial E}{\partial x_i^{(0)}} \]

Note that this equation does not depend on the specific form of ∂E/∂x_i^{(0)}, or on whether it involves a sigmoid or any other activation function. We can therefore replace the specific indexes with general ones and use this equation in the future:

\[ \frac{\partial E}{\partial o_j^{(m+1)}} = \sum_i w_{ji}^{(m+1)}\,\frac{\partial E}{\partial x_i^{(m)}} \]

The second derivative of the error with respect to a neuron with output o_j^{(1)} in the first hidden layer follows in the same way. Summing up, we obtain the more general expression

\[ \frac{\partial^2 E}{\partial o_j^{(m+1)\,2}} = \sum_i \big(w_{ji}^{(m+1)}\big)^2\,\frac{\partial^2 E}{\partial x_i^{(m)\,2}} \]

At this point we are beginning to see the recursion in the form of the 2nd derivative terms, which can be thought of analogously to the first derivative recursion which is central to the back-propagation algorithm. The formulation above, which makes specific reference to layer indexes, also works in the general case.

Consider the ith neuron in any layer m with output o_i^{(m)} and input x_i^{(m)}. The first and second derivatives of the error E with respect to this neuron's input are:

\[ \frac{\partial E}{\partial x_i^{(m)}} = \sigma'\big(x_i^{(m)}\big)\,\frac{\partial E}{\partial o_i^{(m)}}, \qquad \frac{\partial^2 E}{\partial x_i^{(m)\,2}} = \sigma'\big(x_i^{(m)}\big)^2\,\frac{\partial^2 E}{\partial o_i^{(m)\,2}} + \sigma''\big(x_i^{(m)}\big)\,\frac{\partial E}{\partial o_i^{(m)}} \]

Note that the form of these equations is the general form of what was derived for the output layer above. Both of the above first and second derivative terms are easily computable and can be stored as we propagate back from the output of the network to the input. With respect to the output layer, the first and second derivative terms have already been derived above. In the case of the m + 1 hidden layer during back propagation, there is a summation of terms calculated in the mth layer. For the first derivative we have

\[ \frac{\partial E}{\partial x_j^{(m+1)}} = \sigma'\big(x_j^{(m+1)}\big)\,\sum_i w_{ji}^{(m+1)}\,\frac{\partial E}{\partial x_i^{(m)}} \]

and for the second derivative

\[ \frac{\partial^2 E}{\partial x_j^{(m+1)\,2}} = \sigma'\big(x_j^{(m+1)}\big)^2 \sum_i \big(w_{ji}^{(m+1)}\big)^2\,\frac{\partial^2 E}{\partial x_i^{(m)\,2}} + \sigma''\big(x_j^{(m+1)}\big) \sum_i w_{ji}^{(m+1)}\,\frac{\partial E}{\partial x_i^{(m)}} \]

And this horrible mouthful of an equation gives you a general form for any neuron in the jth position of the m + 1 layer. Taking very careful note of the indexes, this can be more or less translated painlessly to code. You are welcome, world.
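Taking that invitation literally, a minimal sketch of such a translation is given below; the weight layout and function names are our own assumptions, and the second-derivative propagation uses the diagonal-Hessian approximation derived above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_first_and_second(W, xs, dE_dx0, d2E_dx0):
    """Propagate dE/dx and the diagonal d2E/dx2 back from the output layer.

    W[m] has shape (n_{m+1}, n_m), with W[m][j, i] the weight from output j of
    layer m+1 to input i of layer m; xs[m] are the pre-activations of layer m,
    with layer 0 the output layer. Returns per-layer lists of derivatives.
    """
    dE_dx, d2E_dx = [dE_dx0], [d2E_dx0]
    for m in range(len(W)):
        s = sigmoid(xs[m + 1])
        d1 = s * (1 - s)                    # sigma'(x) = sigma(x)(1 - sigma(x))
        d2 = d1 * (1 - 2 * s)               # sigma''(x) = sigma'(x)(1 - 2 sigma(x))
        dE_do = W[m] @ dE_dx[-1]            # sum_i w_ji dE/dx_i
        d2E_do = (W[m] ** 2) @ d2E_dx[-1]   # sum_i w_ji^2 d2E/dx_i^2 (diagonal)
        dE_dx.append(d1 * dE_do)
        d2E_dx.append(d1 ** 2 * d2E_do + d2 * dE_do)
    return dE_dx, d2E_dx
```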
A.1.3 SUMMARY OF HIDDEN LAYER DERIVATIVES

\[ \frac{\partial E}{\partial o_j^{(m+1)}} = \sum_i w_{ji}^{(m+1)}\,\frac{\partial E}{\partial x_i^{(m)}}, \qquad \frac{\partial^2 E}{\partial o_j^{(m+1)\,2}} = \sum_i \big(w_{ji}^{(m+1)}\big)^2\,\frac{\partial^2 E}{\partial x_i^{(m)\,2}} \]

\[ \frac{\partial E}{\partial x_i^{(m)}} = \sigma'\big(x_i^{(m)}\big)\,\frac{\partial E}{\partial o_i^{(m)}}, \qquad \frac{\partial^2 E}{\partial x_i^{(m)\,2}} = \sigma'\big(x_i^{(m)}\big)^2\,\frac{\partial^2 E}{\partial o_i^{(m)\,2}} + \sigma''\big(x_i^{(m)}\big)\,\frac{\partial E}{\partial o_i^{(m)}} \]
ryPx38qge

A HYBRID NETWORK: SCATTERING AND CONVNET

Edouard Oyallon
Departement Informatique
Ecole Normale Superieure
Paris, France

ABSTRACT

This paper shows how, by combining prior and supervised representations, one can create architectures that lead to nearly state-of-the-art results on standard benchmarks, which means they perform as well as a deep network learned from scratch. We use scattering as a generic and fixed initialization of the first layers of a deep network, and learn the remaining layers in a supervised manner. We numerically demonstrate that deep hybrid scattering networks generalize better on small datasets than supervised deep networks. Scattering networks could help current systems to save computation time, while guaranteeing the stability to geometric transformations and noise of the first internal layers. We also show that the learned operators explicitly build invariances to geometrical variabilities, such as local rotation and translation, by analyzing the third layer of our architecture. We demonstrate that it is possible to replace the scattering transform by a standard deep network, at the cost of having to learn more parameters and potentially adding instabilities. Finally, we release a new software, ScatWave, using GPUs for fast computations of a scattering network that is integrated in Torch. We evaluate our model on the CIFAR10, CIFAR100 and STL10 datasets.

1 INTRODUCTION

Deep architectures build generic and low-dimensional representations that lead to state-of-the-art results on tasks such as classification (He et al., 2015), games (Silver et al., 2016), or generative models (Radford et al., 2015). These architectures are designed as cascades of non-linear modules that are fully learned. This paper addresses several questions: is it necessary to learn each module? Can a scattering network replace the first layers? What are the potential benefits?

Hybrid architectures composed of a supervised representation learned on top of an unsupervised representation (Philbin et al., 2007) have been progressively abandoned for the end-to-end training approach (LeCun et al., 2010). Understanding the nature of the cascade of deep operators is difficult (Szegedy et al., 2013), since they are learned via back-propagation, and not layer-wise. However, the learned features appear to be transferable to other datasets and helpful for classification (Zeiler & Fergus, 2014), which implies that the learned representations have captured generic properties for image classification tasks.

A convnet is typically a cascade of convolutional layers and nonlinearities, followed by a final average pooling or a sequence of fully connected layers. They lead to state of the art results on CIFAR10 and CIFAR100 (Zagoruyko & Komodakis, 2016). Some related work to ours (Perronnin & Larlus, 2015) proposed to replace the first layers of convolution by a combination of SIFT and Fisher vectors, while learning on top of it a cascade of fully connected layers. This is a hybrid representation in the sense that it combines an unsupervised learned representation and a supervised learned MLP. The numerical results they obtained are competitive on ImageNet with the first AlexNet architecture (Krizhevsky et al., 2012), while saving computations.

Scattering representations (Mallat, 2012) are predefined and generic representations which only require the learning of a few hyper parameters. They consist of a cascade of wavelet transforms
tures (Bruna & Mallat]2013bf Sifre & Mallat]2013), small digits (Bruna & Mallat]2013b), sounds (Anden & Mallat2014) or complex image datasets with unsupervised representations (Oyallon & Mallat2015). Nevertheless, these representations do not adapt to the specific bias of each dataset. and there is a huge performance gap between supervised and unsupervised representations(Oyallon. & Mallat2015).\ntors, while learning on top of it a cascade of fully connected layers. This is a hybrid representation in the sense that it combines an unsupervised learned representation and a supervised learned MLP. The numerical results they obtained are competitive on ImageNet with the first AlexNet architecture. (Krizhevsky et al.2012), while saving computations.\nTraining a state of the art deep network requires a huge amount of labeled data. Several works. tried to tackle this difficulty by developing unsupervised algorithm applied to deepnetwork: for in-. stance evaluating on CIFAR10 an unsupervised generative adversarial method (GAN) pretrained on. a subset of Imagenet-1K (Radford et al.|2015). In a setting where few annotated data are available,. training a deep network is hard and requires a lot of regularization, yet a semisupervised learning. algorithm applied to a GAN can improve even more the accuracy on CIFAR10, as in Salimans et al. (2016). Yet, if few data are available, such as in medical imaging, training a deep network from. scratch is more complicated: one can only use imagenet pre-trained features (Carneiro et al.|[2015).\nSection2 describes our model, which is a cascade of a scattering network and a convnet. We ex-. plain how we build our scattering network, describe its stability properties and exhibit our learning. pipeline. Section 3|shows that our network provides competitive results on CIFAR10, CIFAR100 and STL10, while having theoretical guarantees for its representations, in both setting with limited data or not. The experiments can be reproduced using ScatWave['] an implementation of our algo-. rithm in Torch, which we make publicly available. More details about the software are available in. the Appendix A.\nWe construct an architecture that consists of two blocks: the first is based on the scattering trans form and involves no learning; the second is a classical convnet. In this section, we describe these. architectures and their properties..\nA scattering network belongs to the class of convolutional networks whose filters are predefined as. wavelets (Oyallon & Mallat]2015). The construction of this network has mathematical foundations. (Mallat]2012), meaning it is well understood, relies on few parameters and is stable, in contrast. deep networks. Stability properties are discussed in Subsection |2.1.2|and Apprendix B. Besides. most of the parameters of this representation does not need to be adapted to the bias of the dataset (Oyallon & Mallat2015), making it a suitable generic representation."}, {"section_index": "3", "section_name": "2.1.1 A CASCADE OF WAVELETS AND MODULUS", "section_text": "In this section, we briefly recall the definition of the scattering transform. It is the cascade of wavelet transforms, and modulus nonlinearity which is finally spatially averaged. Since a modulus is non- expansive, and a wavelet transform is a linear isometry, a scattering transform is also non-expansive. The local averaging of this representation thus builds a local invariance to translation. In this paper,. 
Consider a signal x(u), u ∈ R², and an integer J ∈ N, which is the spatial scale of our scattering transform. Let φ_J be an averaging with a spatial window of scale 2^J (for example, a Gaussian averaging). Applying a subsampled averaging A_J x(u) = x ∗ φ_J(2^J u) builds an approximate invariant to translations smaller than 2^J, but it also results in a loss of the high frequencies that are necessary to discriminate signals. We define S_0 x = A_J x as the order 0 scattering.

A solution to avoid this loss is provided by wavelets. A wavelet is an integrable and localized function in the Fourier and space domains, with a 0 average. A family of wavelets is obtained by dilating a complex mother wavelet ψ (for example, a Morlet wavelet) such that ψ_{j,θ}(u) = 2^{-2j} ψ(r_{-θ} u 2^{-j}), where r_{-θ} is the rotation by -θ, and j ≥ 0 is the scale of the wavelet. A given wavelet ψ_{j,θ} has thus its energy concentrated at scale j, in the angular sector θ. Let K ∈ N be an integer representing the number of angles of our operator. A wavelet transform W_1 is the convolution of a signal with the family of wavelets introduced above, with an appropriate downsampling. The wavelet is chosen to be selective in angle and localized in Fourier, thus the sampling is chosen such that (θ_1, j_1) → W_1 x(u, θ_1, j_1) is regular enough. Besides, the wavelet transform has been spatially oversampled by a factor 2. The wavelet parameters and this discretization were already chosen in (Oyallon & Mallat, 2015), where this representation is shown to be generic, so we have used the same hyper-parameters. In their case, {A_J x, W_1 x} is approximately an isometry on the set of signals with limited bandwidth, and this implies that the energy of the signal is preserved. This operator belongs to the category of multi-resolution analysis operators, each filter being excited by a specific scale and angle, but the output coefficients are not invariant to translation. We cannot apply A_J to W_1 x since it gives a trivial invariant, namely 0.

We build the first order scattering coefficients. Applying a point-wise modulus to W_1 x, followed by an averaging A_J, allows us to build an invariant. If the mother wavelet is analytic, then |W_1 x| is more regular (Bernstein et al., 2013), which implies that the support in Fourier of |W_1 x| is more likely to be contained in a lower frequency domain than that of W_1 x. Thus, A_J preserves the energy of |W_1 x|. In this case, it is possible to define S_1 x = A_J |W_1| x, which can also be written as S_1 x(u, θ_1, j_1) = |x ∗ ψ_{j_1,θ_1}| ∗ φ_J(2^J u); this is the order 1 scattering. It is consequently invariant to translation up to 2^J.

Once more, applying a second wavelet transform W_2 = W_1 on each channel permits the recovery of the high-frequency loss due to the averaging applied to the first order, leading to S_2 x = A_J |W_2| |W_1| x, which can also be written as S_2 x(u, θ_1, θ_2, j_1, j_2) = ||x ∗ ψ_{j_1,θ_1}| ∗ ψ_{j_2,θ_2}| ∗ φ_J(2^J u). We only compute increasing paths, i.e. j_1 < j_2, because non-increasing paths bear no energy (Bruna & Mallat, 2013b). We do not compute higher order scatterings, because their energy has been shown experimentally not to be meaningful (Bruna & Mallat, 2013b).
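For illustration, the cascade just described can be sketched in a few lines of numpy, with simplified Gabor filters standing in for Morlet wavelets and none of ScatWave's optimizations; the filter parameters below are illustrative assumptions, not the hyper-parameters of (Oyallon & Mallat, 2015).

```python
import numpy as np

def oriented_filter(N, j, theta, xi=3 * np.pi / 4, sigma=0.8):
    """A simplified complex oriented band-pass filter at scale 2^j and angle
    theta (a Gabor filter: a true Morlet would subtract its DC component)."""
    g = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] / 2.0 ** j
    wave = np.exp(1j * xi * (np.cos(theta) * g[0] + np.sin(theta) * g[1]))
    return np.exp(-(g[0] ** 2 + g[1] ** 2) / (2 * sigma ** 2)) * wave / 4.0 ** j

def scattering(x, J=2, L=8):
    """Order 0, 1 and 2 scattering coefficients of an N x N image, keeping
    only increasing paths j1 < j2, via naive FFT convolutions."""
    N = x.shape[0]
    conv = lambda a, h: np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(h)))
    grid = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    phi = np.exp(-(grid[0] ** 2 + grid[1] ** 2) / (2.0 * 4.0 ** J))
    pool = lambda a: np.real(conv(a, phi / phi.sum()))[::2 ** J, ::2 ** J]
    S, U = [pool(x)], {}
    for j1 in range(J):
        for t1 in range(L):
            U[j1, t1] = np.abs(conv(x, oriented_filter(N, j1, np.pi * t1 / L)))
            S.append(pool(U[j1, t1]))                        # first order
    for (j1, t1), u in U.items():
        for j2 in range(j1 + 1, J):
            for t2 in range(L):
                S.append(pool(np.abs(conv(u, oriented_filter(N, j2, np.pi * t2 / L)))))
    return np.stack(S)   # 1 + LJ + L^2 J(J-1)/2 channels (81 for J=2, L=8)
```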
2.1.2 STABILITY PROPERTIES

In this section, we develop mathematical properties that are obtained by wavelets. Covariance with a group of variability permits building a localized invariant via local averaging. The degree of invariance will be decided by a supervised algorithm, in order to be adapted to the bias of the classification problem. Here, the parameter J corresponds to a trade-off between invariance to translation and discrimination, used to adjust A_J, and it has to be learned from the data. By construction, as a cascade of convolutions, a scattering network is covariant with translations. Let r_θ.x = x(r_{-θ}u) be a signal rotated by θ. The representation is still covariant with rotation in the following sense:

\[ S_0(r_\theta.x)(u) = S_0 x(r_{-\theta}u) = r_\theta.(S_0 x)(u) \]
\[ S_1(r_\theta.x)(u, \theta_1) = S_1 x(r_{-\theta}u, \theta_1 - \theta) = r_\theta.(S_1 x)(u, \theta_1) \]
\[ S_2(r_\theta.x)(u, \theta_1, \theta_2) = S_2 x(r_{-\theta}u, \theta_1 - \theta, \theta_2 - \theta) = r_\theta.(S_2 x)(u, \theta_1, \theta_2) \]

However, for S_2 x, the natural set of coordinates that gives a rotational invariant for angles is in fact given by α = θ_2 - θ_1:

\[ S_2 x(u, \theta_1, \alpha) = S_2 x(u, \theta_1, \theta_1 + \alpha) \]

for which

\[ S_2(r_\theta.x)(u, \theta_1, \alpha) = S_2 x(r_{-\theta}u, \theta_1 - \theta, \alpha) = r_\theta.(S_2 x)(u, \theta_1, \alpha) \]

It is also important to note that a scattering transform is non-expansive, as a cascade of non-expansive operators, e.g. ‖Sx − Sy‖ ≤ ‖x − y‖. Thus, this representation is stable to additive noises, which correspond to the perturbations studied in Goodfellow et al. (2016); Moosavi-Dezfooli et al. (2015); Szegedy et al. (2013); Goodfellow et al. (2014). By adding a very small quantity to an image, the classification performed by a deep network can be fooled: an image x is well classified, but one can find ε with ‖ε‖ small such that x + ε is not, with the two images being visually indistinguishable.

In fact, this representation is covariant with the action of the roto-translation group, i.e. R² ⋊ SO₂ (Sifre & Mallat, 2013). In addition, it can be proven that a scattering network linearizes small deformations (Mallat, 2012), which means that a linear operator can build invariants to a subset of deformations. It also means that a deep network could reduce the perturbations due to this variability. Furthermore, this representation is complete, in the sense that it is possible to reconstruct a signal from its scattering coefficients (Bruna & Mallat, 2013a).

Appendix B mathematically quantifies the stability of a hybrid network and proves that the instability potentially arises from the cascaded learned deep network. Indeed, Szegedy et al. (2013) reports that the operators of the different layers of a deep network are not contractive. Corrections were added to fix this by training a deep network on the fooling examples (Goodfellow et al., 2014), but this requires additional computations. With wavelets as an initialization, instabilities cannot occur in the first layers, contrary to Szegedy et al. (2013), since the operator is non-expansive.

2.2 A HYBRID ARCHITECTURE

This section introduces our hybrid representation and explains its interest. We first justify why applying a deep network after scattering is natural. Scattering transforms have yielded excellent numerical results on datasets where the variabilities are completely known, such as MNIST or FERET. In these tasks, we only encounter the problem of sample variance, and handling the variance leads to solving the problem. However, in classification tasks on more complex image datasets, such variabilities are only partially known. Applying the scattering transform on datasets like CIFAR or Caltech leads to nearly state-of-the-art results in the unsupervised case (Oyallon & Mallat, 2015). But there is a huge gap in performance when comparing to supervised representations, which deep networks can fill in.

We now explain why the scattering transform is an appropriate initialization.
Recent works (Mallat, 2016; Bruna et al., 2013) have suggested that a deep network could build an approximation of the group of symmetries of a classification task and apply transformations along the orbits of this group. The objective of a supervised classifier is to reduce the variabilities due to those symmetries. To each layer corresponds an approximated Lie group of symmetry, and this approximation is progressive, in the sense that the dimension of this approximation increases with depth. For instance, the linear Lie group of symmetry of an image is the translation group, $\mathbb{R}^2$. If no non-linearity is applied, it is not possible to discover new linear groups of symmetry for natural images. In the case of a wavelet transform obtained by rotations of a mother wavelet, it is possible to recover a new subgroup of symmetry, the rotations $SO_2$, and the group of symmetry at this layer is the roto-translation group $\mathbb{R}^2 \rtimes SO_2$. Discovering the next groups of symmetry is however a difficult task; nonetheless, the roto-translation group seems to be a good initialization for the first layers. In this work, we investigate this hypothesis.

We thus build a standard deep convolutional network on top of the scattering transform. Its architecture is represented in Figure 1. Our network consists of a cascade of 2C convolutions with spatial kernel size 3 x 3. The first C convolutions use K0 input channels, except for the first layer, while the next C convolutions have K1 output channels. The number of input channels of the first layer is equal to the number of scattering features, whereas the last layer consists of an average pooling followed by a linear projection. We used a ReLU non-linearity, and no non-linear pooling is involved in our architecture. Observe that if the scattering is applied up to a scale J, then the signal is at spatial resolution J, i.e. its sampling is $2^J$, allowing faster computation. In the next section, we discuss the value of those parameters.

[Figure: $x \to |W_1| \to |W_2| \to A_J$, followed by a cascade of 3 x 3 convolutions with K0 and then K1 channels, ending with a pooling]

Figure 1: Architecture of our deep network. Observe that no downsampling is performed.

Notably, the first layer F1 of this deep convolutional network is structured by its input, the scattering representation. The nature of this operator and the features selected by this supervised algorithm will be discussed in the next sections.
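The following is a PyTorch transcription of this classifier (our actual implementation, ScatWave, is in Lua Torch, so this is a sketch rather than the released code): 2C convolutions of size 3 x 3 with K0 then K1 channels, batch normalization and ReLU, ending with an average pooling and a linear projection. The 243 input channels correspond to the CIFAR scattering output described in the next section; dropout is omitted for brevity.

```python
# Sketch of the convnet cascaded on top of the scattering output.
import torch
import torch.nn as nn

def scattering_cnn(in_channels=243, K0=128, K1=128, C=5, n_classes=10):
    layers, width = [], in_channels
    for i in range(2 * C):
        out = K0 if i < C else K1
        layers += [nn.Conv2d(width, out, kernel_size=3, padding=1),
                   nn.BatchNorm2d(out),
                   nn.ReLU(inplace=True)]
        width = out
    return nn.Sequential(*layers,
                         nn.AdaptiveAvgPool2d(1),   # spatial average pooling
                         nn.Flatten(),
                         nn.Linear(K1, n_classes))  # linear projection

net = scattering_cnn()
print(net(torch.randn(4, 243, 8, 8)).shape)         # torch.Size([4, 10])
```

With C = 5, this channel pattern (one convolution 243 to K0, C - 1 convolutions K0 to K0, one K0 to K1, C - 1 convolutions K1 to K1) matches the parameter count given in the next section.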
3 EXPERIMENTS

We compare our algorithm to supervised, unsupervised and semi-supervised deep networks and evaluate them on the CIFAR10, CIFAR100 and STL10 datasets. The CIFAR10 and CIFAR100 datasets consist of colored images of size 32 x 32, with 50000 images in the training set and 10000 in the test set. CIFAR10 and CIFAR100 have respectively 10 and 100 classes. The STL10 dataset consists of colored images of size 96 x 96, with only 5000 labeled images in the training set, divided equally into 10 classes, and 8000 images in the test set. The unlabeled images of this dataset were not used during our experiments. The three datasets are whitened as a preprocessing, following the standard procedure. Our software, ScatWave, is implemented in Torch and is publicly available online2.

3.1.1 METHODOLOGY

During all our experiments, we have trained our architecture with SGD with momentum 0.9 to minimize the standard cross-entropy loss. The batch size is 128. We have applied four types of regularization. First, 0.6 dropout is used after every two consecutive layers. Secondly, we have used a weight decay of $10^{-4}$. Furthermore, we have used batch normalization, which is supposed to lead to a better conditioning of the optimization (Ioffe & Szegedy, 2015). Finally, we have augmented the dataset by using random cropping and flipping. The initial learning rate is 0.25 and we divide it by two after every 30 epochs. The networks are trained for 300 epochs.

We now describe the selected hyperparameters of our architecture for the CIFAR and STL10 datasets respectively, which we have kept fixed for all experiments unless specifically stated otherwise. For the CIFAR datasets, using cross-validation, the invariance parameter is set to J = 2 and the number of angles used is L = 8 for the scattering transform. In this case, the output of the scattering network is a tensor of size 8 x 8 x 243 after reshaping, since $3(1 + LJ + L^2 J(J-1)/2) = 243$. For the deep network architecture, we chose to use 2C = 10 layers (i.e. C = 5) and K0 in {128, 512}, K1 = 128 channels. The number of parameters for CIFAR10 is $9(243 K_0 + (C-1)K_0^2 + K_0 K_1 + (C-1)K_1^2) + 10 K_1$, which is roughly equal to 1.6M and 12M parameters for K0 = 128 and K0 = 512 respectively. For the STL10 dataset, we chose K0 = K1 = 512 and C = 2: the deep network is shallower, to compensate the speed loss due to the use of larger images. In the following, the symbol % corresponds to an absolute percentage of accuracy.
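As a concrete summary of this recipe, the following PyTorch sketch reproduces the optimization schedule just described; the model and the single synthetic batch are placeholders, and dropout and the crop/flip augmentation are omitted for brevity.

```python
# Training schedule sketch: SGD, momentum 0.9, weight decay 1e-4, batch 128,
# initial learning rate 0.25 halved every 30 epochs, 300 epochs in total.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(), nn.Linear(243 * 8 * 8, 10))   # placeholder
batches = [(torch.randn(128, 243, 8, 8), torch.randint(0, 10, (128,)))]

opt = torch.optim.SGD(net.parameters(), lr=0.25, momentum=0.9,
                      weight_decay=1e-4)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=30, gamma=0.5)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(300):
    for x, y in batches:
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
    sched.step()          # divide the learning rate by two every 30 epochs
```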
3.1.2 NUMERICAL EXPERIMENTS ON THE ENTIRE DATASET

We report the classification accuracies in Tables 1 and 2 and discuss them below. We compare our architecture with the unsupervised scattering architecture of (Oyallon & Mallat, 2015). The roto-translation scattering is almost identical to the scattering, except that it recombines the channels along the rotation axis by applying a wavelet transform along angles, building more complex geometrical invariants. The classifier of that paper is an SVM with an RBF kernel, which can be interpreted as a two-layer deep neural network. For the sake of simplicity, we have trained on top of our scattering network a 3-layer fully connected network of size 2048, similarly to (Perronnin & Larlus, 2015). Without data augmentation, the accuracy of the network is respectively 3.7% and 8.9% below the roto-translation scattering on CIFAR10 and CIFAR100. However, applying data augmentation with translations of length less than $2^2$ permits recovering this loss in accuracy, resulting in accuracies of 83.0% and 56.7% (we let the network train for 400 epochs here) respectively on CIFAR10 and CIFAR100. One major difference with our approach is that the system of Oyallon & Mallat (2015) uses a large amount of oversampling. This suggests that, in order to learn the appropriate features, it is necessary to average out the small translation displacements: even if the representation is built to be invariant to translation, its construction relies, for fast computation, on an approximate (but justified) downsampling that leads to non-linear aliasing.

Applying a supervised convnet significantly improves the accuracy over the scattering with 3 fully connected layers, and leads to results comparable with other supervised deep representations: 91.4% on CIFAR10 and 69.5% on CIFAR100. We compare our work with the Highway network (Srivastava et al., 2015), which consists of a deep cascade of 19 linear and non-linear operators trained with an extensive data augmentation. Besides, since our convnet is kept as simple as possible, we also compare our architecture to the All-CNN (Springenberg et al., 2014). The latter performs slightly better on CIFAR10, but our hybrid network generalizes better on CIFAR100. This indicates that supervision is essential, but that a geometric initialization of the first layers of a deep network leads to a discriminative representation as well. It is also interesting to observe that we could not find any architecture that performs worse than ours on CIFAR10 but better on CIFAR100. Since the number of samples available per class is lower, this could indicate that learning is easier in this case with a scattering initialization.

2 https://github.com/edouardoyallon/scatwave

Table 1: Accuracy of scattering compared to similar architectures on CIFAR10

Architecture | Accuracy
Unsupervised architectures
Roto-translation scattering | 82.3
Scattering (ours) + 3 FC + no data augmentation | 78.6
Scattering (ours) + 3 FC | 83.0
Supervised hybrid architectures
Scattering (ours) + CNN, K = 128 | 89.4
Scattering (ours) + CNN, K = 512 | 91.4
Supervised architectures
Highway network | 92.4
All-CNN | 92.8
Wide ResNet | 96.2

Table 2: Accuracy of scattering compared to similar architectures on CIFAR100

Architecture | Accuracy
Unsupervised architectures
Roto-translation scattering | 56.8
Scattering (ours) + 3 FC + no data augmentation | 47.9
Scattering (ours) + 3 FC | 56.7
Supervised hybrid architectures
Scattering (ours) + CNN, K = 128 | 64.4
Scattering (ours) + CNN, K = 512 | 69.5
Supervised architectures
Highway network | 67.8
All-CNN | 66.3
Wide ResNet | 81.7

A Wide ResNet (Zagoruyko & Komodakis, 2016) outperforms our architecture by 4.8% on CIFAR10 and 12.2% on CIFAR100, but it requires more engineering to be designed and is deeper. It is important to recall that, contrary to the Wide ResNet, there are no instabilities due to geometric transformations (e.g. translations or deformations), and that the first two layers, which are responsible for a large fraction of the computation, are not learned, resulting in computational savings. Obviously, the scattering layers do not suffer from the vanishing or exploding gradient issues.

Supervised deep networks trained on small datasets easily overfit, for instance in the case of medical imaging where little data is available. Semi-supervised algorithms exhibit good performance (Salimans et al., 2016), but they require a large amount of unlabeled data to work. In this subsection, we show the benefit of using scattering in a framework where those data are not available. We demonstrate that scattering does prevent overfitting on small datasets, while keeping the same architecture and training methodology: this saves the time needed to design an architecture.

For this experiment, we draw several random subsets of CIFAR10 for training our network, and use the same splits in each experiment to train the supervised deep networks. Namely, we used a Network in Network (NiN) (Lin et al., 2013), a VGG-like network (Simonyan & Zisserman, 2014) and a Wide ResNet (Zagoruyko & Komodakis, 2016), which perform better than our network on the full dataset, with implementations available online3. The VGG did not converge in this situation with the given hyperparameters (such as the depth) and, for a simple and fair comparison, we decided not to adapt them. We however applied data augmentation to the inputs of the NiN, VGG and ResNet by translation and flipping. Table 3 reports the accuracy averaged over 5 different subsets, with the corresponding standard deviation. We compare these architectures with a semi-supervised model, a GAN (Salimans et al., 2016), that is trained on all the data of CIFAR10 while only a fraction is labeled. With 4000 and 8000 labeled samples, a Wide ResNet with 40 layers outperforms the supervised and unsupervised methods by at least 3%, which indicates that this architecture suffers less from overfitting than the others. With 2000 labeled samples or fewer, a hybrid network outperforms all the supervised architectures: the difference of accuracy between the hybrid architecture and the others becomes progressively more favorable to the hybrid network as the number of samples decreases. Nonetheless, a GAN performs better than a translation scattering with 3 fully connected layers; its performance is almost constant, equal to 80%, which shows that this algorithm can adapt itself to the bias of the dataset.

Table 3: Accuracy of a hybrid scattering in a limited sample situation on the CIFAR10 dataset. N.A. and N.C. stand respectively for Not Available and Not Converged.

Architecture | 1000 | 2000 | 4000 | 8000 | 50000
Supervised architectures
Scattering+CNN, K = 512 | 58.5 ± 1.2 | 69.4 ± 0.5 | 76.2 ± 0.4 | 81.8 ± 0.7 | 91.4
NiN | 54.8 ± 1.0 | 65.1 ± 0.7 | 71.2 ± 3.8 | 79.0 ± 3.8 | 91.9
VGG | N.C. | N.C. | N.C. | N.C. | 92.5
Wide ResNet | 58.0 ± 1.2 | 68.9 ± 1.4 | 79.1 ± 0.4 | 86.4 ± 0.3 | 96.4
Semi-supervised architecture
GAN | 79.2 ± 2.0 | 80.4 ± 2.1 | 82.4 ± 2.3 | 83.3 ± 1.8 | N.A.

In a second experiment, we apply our hybrid architecture to the STL10 dataset, which is a challenging dataset in the limited sample situation since only 500 samples per class are available. The averaged result is reported in Table 4. A supervised CNN (Swersky et al., 2013) whose hyperparameters have been automatically tuned achieves 70.1% accuracy. Using the unlabeled data improves the accuracy of a CNN by at least 5%. For an Exemplar CNN (Dosovitskiy et al., 2014) or an Unsupervised Discriminative CNN (Huang et al., 2016), the weights of the CNNs are learned without supervision from patches of images. Those techniques add several hyperparameters and require an additional engineering process. Applying a hybrid network is straightforward and surpasses both the supervised and the unsupervised state of the art, by 7.3% and 0.6% respectively.

In this section, we show the benefit of using a structured representation. For instance, the first layer of a convnet cascaded on top of a scattering network inherits the structure of the scattering coefficients. In all the following experiments, we use a hybrid network converged according to the procedure detailed in Subsection 3.1.1.

We numerically analyze the nature of the operations performed along angles by the first layer $F_1$ of our deep network on CIFAR10. Let us write $F_1 x = \{F_1^0 x, F_1^1 x, F_1^2 x\}$ for the components associated with the order 0, 1 and 2 scattering coefficients respectively, and let $k$, $1 \leq k \leq K_0$, index the output channels. In this case, $F_1^0$ is a convolutional operator that depends on the variables $(u, k)$, $F_1^1$ depends on $(u, \theta_1, k)$, and $F_1^2$ depends on $(u, \theta_1, \alpha, k)$. We would like to characterize the smoothness of these operators with respect to these variables, because our representation is covariant to rotations.

To this end, we consider the Fourier transform for each channel $k$. We denote by $\hat{F}_1^1, \hat{F}_1^2$ the Fourier transforms of these operators along the variables $\theta_1$ and $(\theta_1, \alpha)$ respectively, as sketched below.
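The following NumPy sketch makes the angular transform concrete: the angle $\theta_1$ is exposed as an explicit axis of the first-layer weights and a discrete Fourier transform is taken along it. The weight tensor here is a random placeholder and the shapes (L = 8 angles, J = 2 scales, 3 x 3 spatial support, no color channels) are illustrative only.

```python
# Angular Fourier analysis of (placeholder) first-layer weights.
import numpy as np

L, J, K0 = 8, 2, 128
F1_order1 = np.random.randn(K0, L, J, 3, 3)   # hypothetical F1 on order-1 channels

F1_hat = np.fft.fft(F1_order1, axis=1)        # Fourier transform along theta_1
energy = (np.abs(F1_hat) ** 2).mean(axis=(0, 2, 3, 4))
print(energy / energy.sum())
# For a trained layer, smooth angular operators concentrate this profile on the
# low frequencies; the random placeholder above gives a flat profile instead.
```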
In space, we applied a DCT transform restricted to the support of the operator, which is similar to a Fourier transform since this support is small. The operator is then expressed in this tensorial frequency domain. Since those operations are involutive (up to constants and reflections), one can easily recover the original operator.

Table 4: Accuracy of a hybrid scattering on the STL-10 dataset

Architecture | Accuracy
Supervised architectures
Scattering+CNN, K = 512 | 77.4 ± 0.4
CNN | 70.1
Semi-supervised and unsupervised architectures
Exemplar CNN | 75.4 ± 0.3
Unsup. Discr. CNN | 76.8 ± 0.3

Let us demonstrate how to sparsify this operator in its frequency basis. First, it is possible to threshold by $\epsilon$ the coefficients of the operators in the Fourier domain, i.e. we replace the operators $\hat{F}_1^1, \hat{F}_1^2$ by $1_{|\hat{F}_1^1| > \epsilon}\hat{F}_1^1$ and $1_{|\hat{F}_1^2| > \epsilon}\hat{F}_1^2$. Before thresholding, all the frequencies were excited. After this operation, approximately 10% of the coefficients are non-zero. We tested our network without any retraining and observed a negligible loss of accuracy. This proves that this basis permits a sparse approximation of our filters.

Furthermore, we decided to keep only the two first frequencies of $\hat{F}_1$ along the variable $\theta_1$, i.e. we replaced the operators $\hat{F}_1^1, \hat{F}_1^2$ by $1_{|\omega_{\theta_1}| \leq 1}\hat{F}_1^1$ and $1_{|\omega_{\theta_1}| \leq 1}\hat{F}_1^2$. This results in a loss of accuracy of roughly 5%. We then fix the layer $F_1$ and retrain the next layers, keeping the entire training procedure identical, again with an initial learning rate of 1. We obtain a classification accuracy which is 3% below that of the original architecture. This indicates that the network can almost recover the discriminative information from coefficients averaged along angles.

The first experiment shows that, in a natural basis, it is possible to sparsify the operator. The last experiment indicates that most of the operations performed by the first layer of our network are smooth, since they are localized in Fourier space. This is equivalent to first projecting each scattering coefficient via an angular averaging, and it suggests that the system autonomously builds an invariant to geometrical variabilities. While this does not prove that no more geometrical operators are applied in the next layers, it does show that it is possible to obtain a good accuracy (3% below the original result) with a representation after $F_1$ which is stable to local roto-translation and deformation variabilities, thanks to a roto-translation averaging.
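The two sparsification tests above can be sketched as follows, again on a placeholder weight tensor: (i) threshold the angular Fourier coefficients by $\epsilon$, (ii) keep only the angular frequencies $|\omega| \leq 1$ along $\theta_1$. The threshold value and the shapes are illustrative, not the ones of our trained network.

```python
# Fourier-domain sparsification of (placeholder) first-layer weights.
import numpy as np

L, J, K0 = 8, 2, 128
F1 = np.random.randn(K0, L, J, 3, 3)              # placeholder first-layer weights
F1_hat = np.fft.fft(F1, axis=1)                   # frequency basis along theta_1

eps = 0.5 * np.abs(F1_hat).mean()                 # illustrative threshold
F1_sparse = np.where(np.abs(F1_hat) > eps, F1_hat, 0)
print((F1_sparse != 0).mean())                    # fraction of surviving coefficients

mask = np.zeros(L)
mask[[0, 1, L - 1]] = 1                           # angular frequencies |omega| <= 1
F1_low = np.real(np.fft.ifft(F1_hat * mask[None, :, None, None, None], axis=1))
```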
In order to explore whether the scattering network itself could be learned, we consider a 5-layer convnet as a candidate to replace it on CIFAR10. Its architecture is described in Figure 2, and it has the same output size as a scattering network. It has two downsampling steps, in order to mimic the behavior of a scattering network. We keep our architecture identical, except that we replace the scattering part by this network. Then we retrain it, keeping the weights of all the other layers constant and equal to the optimal solution found with the scattering in the previous section. Instead of minimizing a loss between the output of a scattering network and this network, we target the best input for the fixed convnet given the classification task.

This architecture can achieve an accuracy 1% below the original pipeline, which is convincing. Using a shallower network seems to degrade the performance, but we did not investigate this question further. Besides, the learned network will not have any guaranteed stability properties.

[Figure: $x$ followed by five 3 x 3 convolutions with 128, 128, 128, 256 and 256 output channels]

Figure 2: Architecture of our deep network that mimics a scattering network.

4 CONCLUSION

We proposed a new deep hybrid architecture that combines scattering and convnets and is competitive with existing approaches. We demonstrated its good generalization performance on CIFAR10, CIFAR100, STL10 and subsets of CIFAR10. We showed that the cascaded convnet learns an invariant to roto-translations, and that it is possible to learn a deep network that mimics the scattering, at the cost of potentially creating instabilities. We also release fast software to compute a scattering transform on GPUs.

This is a preliminary work whose results must be extended to ImageNet. This paper was dedicated to incorporating geometry into deep networks, and we will show in future work how they refine their construction of class invariants.

ACKNOWLEDGMENTS

The author would like to thank Mathieu Andreux, Eugene Belilovsky, Carmine Cella, Bogdan Cirstea, Michael Eickenberg, Stephane Mallat and Sergey Zagoruyko for helpful discussions and support. Also, the author especially thanks Carmine Cella for nice suggestions of experiments. This work is funded by the ERC grant InvariantClass 320959 and via a grant for PhD students of the Conseil regional d'Ile-de-France (RDM-IdF).

Cascades of multi-resolution computations, such as those performed in a scattering transform, are delicate. The bottleneck of the scattering on CPU was either speed or memory. It is thus necessary to briefly explain our algorithm: we show that, by reorganizing the order of the computations, one can speed them up.

Computing a scattering transform at order 2 requires computing each path $||x \star \psi_{p_1}| \star \psi_{p_2}|$, where $p_1, p_2$ are the wavelet parameters, with increasing scales of the filters. Computing the paths can be viewed as walking a computational tree, where the leaves of the tree are the scattering coefficients before averaging, and the internal nodes are the moduli of the intermediary wavelet transforms. The way the tree is walked affects the computation time. In ScatNet (Andén et al., 2014), the traversal is done by first computing and storing every internal node; in a second step, the leaves are computed and stored. In terms of memory this is not optimal, since it requires storing all the intermediate computations. Instead, we use a depth-first (affix) traversal of the tree. It reduces the memory used to a minimum and allows the use of GPUs.
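The following Python sketch contrasts the two traversal orders. The breadth-first strategy computes and stores every internal node $|x \star \psi_{p_1}|$ before producing any leaf, whereas the depth-first walk below keeps a single branch alive at a time. Angles are omitted and `wavelet`, `modulus`, `average` are placeholders for the real operators.

```python
# Depth-first walk of the scattering computation tree (memory-friendly order).
def scattering_depth_first(x, J, wavelet, modulus, average):
    out = [average(x)]                              # order 0
    for j1 in range(J):
        u1 = modulus(wavelet(x, j1))                # one internal node at a time
        out.append(average(u1))                     # order 1 leaf
        for j2 in range(j1 + 1, J):                 # increasing paths: j1 < j2
            out.append(average(modulus(wavelet(u1, j2))))   # order 2 leaf
        # u1 can be freed here: no remaining path depends on it
    return out

ident = lambda z, *args: z
print(len(scattering_depth_first(0.0, 3, ident, ident, ident)))   # 7 coefficients
```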
ScatWave is a GPU implementation of the scattering networks in Torch, based on the observation above. ScatNetLight is a MATLAB version on CPUs, which uses multiple threads as much as possible. Table 5 reports the difference in computation time, for identical parameters and output representations (e.g. same sampling, same hyperparameters). The inputs are batches of 128 images, stored as a tensor whose first two dimensions are the size of the image and whose third dimension is the number of elements per pixel (e.g. the color channels). For the comparison, we used a machine with 24 cores and a Titan GPU. The speed-up is at least 15 in all cases, and up to 70; ScatWave uses the cuFFT library.

Table 5: Computation time (in seconds) for different input sizes, one implementation running with MATLAB on CPUs and the other with Torch on GPUs

Input size | J | ScatNetLight (in s) | ScatWave (in s)
32 x 32 x 3 x 128 | 2 | 2.5 | 0.15
32 x 32 x 3 x 128 | 4 | 13 | 0.49
32 x 32 x 3 x 128 | 5 | 38 | 1.1
128 x 128 x 3 x 128 | 2 | 16 | 1.0
128 x 128 x 3 x 128 | 4 | 52 | 2.3
128 x 128 x 3 x 128 | 5 | 120 | 3.7
256 x 256 x 3 x 128 | 2 | 160 | 2.2

APPENDIX B: A NOTE ON THE STABILITY OF A HYBRID DEEP NETWORK

In this section, we recall the notion of additive stability of a deep network and derive some simple properties showing that the instabilities are due to the cascaded deep network, and we give bounds to quantify them. Instabilities are due to perturbations of an initial sample that a deep network does not reduce correctly: the modification is visually not significant, or does not affect the label of the perturbed sample, yet the network incorrectly classifies the image. We consider a (trained) deep network $f$ with bounded outputs, for instance the probabilities obtained after a sigmoid function. We write $\mathrm{label}(x)$ for the label computed by this deep network on an input $x$. It is possible to define the contraction factor of a deep network $f$ via:

$$\Lambda(f) = \sup_{\mathrm{label}(x) \neq \mathrm{label}(y)} \frac{\|f(x) - f(y)\|}{\|x - y\|}.$$

Observe that $\mathrm{label}(x) \neq \mathrm{label}(y) \Rightarrow x \neq y$. A small value of $\Lambda(f)$ indicates a better stability. It is possible to introduce a local version of this definition that depends on the input sample (Moosavi-Dezfooli et al., 2015). It is consistent with the definitions of Goodfellow et al. (2016); Moosavi-Dezfooli et al. (2015); Szegedy et al. (2013); Goodfellow et al. (2014), in the sense that the numerator is bounded but the denominator might become arbitrarily small. Let us now consider a hybrid network: for an input $x$, the output is $f(Sx)$, where $Sx \in \mathbb{R}^n$ is its scattering transform. Let us write $\mathcal{S} = \{Sx, x\}$ for the range of the scattering transform, which corresponds to a strict embedding in the standard Euclidean space, i.e. $\mathcal{S} \subsetneq \mathbb{R}^n$.

Proposition 1. The following bounds hold:

$$\Lambda(f \circ S) \leq \sup_{\substack{(\tilde{x}, \tilde{y}) \in \mathcal{S}^2 \\ \mathrm{label}(\tilde{x}) \neq \mathrm{label}(\tilde{y})}} \frac{\|f(\tilde{x}) - f(\tilde{y})\|}{\|\tilde{x} - \tilde{y}\|} \leq \Lambda(f).$$

Proof. Let $x, y$ be two samples with different estimated labels; then $Sx \neq Sy$. Since the scattering transform is non-expansive, $\|Sx - Sy\| \leq \|x - y\|$, and in this case

$$\frac{\|f(Sx) - f(Sy)\|}{\|x - y\|} = \frac{\|f(Sx) - f(Sy)\|}{\|Sx - Sy\|} \, \frac{\|Sx - Sy\|}{\|x - y\|} \leq \frac{\|f(Sx) - f(Sy)\|}{\|Sx - Sy\|}.$$

Setting $\tilde{x} = Sx$, $\tilde{y} = Sy$ and taking the supremum ends the demonstration.

One sees that the amplitude of the resulting instabilities depends on the class of $f$. Furthermore, the inequalities might be strict and the bounds tighter, but there is no reason this should occur. However, it suggests that such instabilities could be removed if additional constraints were added during the training phase of $f$.

The deformation transformations are a class of instabilities as well. The cited works (Goodfellow et al., 2016; Moosavi-Dezfooli et al., 2015; Szegedy et al., 2013; Goodfellow et al., 2014) do not consider them; however, since the scattering transform linearizes them, one sees that it is possible for a deep network to explicitly build such an invariance. Actually, the averaging along rotation and translation variables observed in Subsection 3.2.1 seems to indicate that this is likely to occur.
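The quantity $\Lambda(f)$ of Proposition 1 can be estimated from below numerically by sampling pairs with different predicted labels, as in the following NumPy sketch. The two-class softmax model $f$ is a random stand-in for a trained network, so the printed value is purely illustrative.

```python
# Empirical lower bound on the contraction factor Lambda(f) of Proposition 1.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 16))

def f(x):                                   # bounded outputs (probabilities)
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

label = lambda x: int(np.argmax(f(x)))

X = rng.standard_normal((1000, 16))
ratios = [np.linalg.norm(f(x) - f(y)) / np.linalg.norm(x - y)
          for x, y in zip(X[::2], X[1::2]) if label(x) != label(y)]
print(max(ratios))                          # grows as pairs straddle the boundary
```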
REFERENCES

J. Andén, L. Sifre, S. Mallat, M. Kapoko, V. Lostanlen, and E. Oyallon. ScatNet. Computer software, available: http://www.di.ens.fr/data/software/scatnet/ [Accessed: December 10, 2013], 2014.

Swanhild Bernstein, Jean-Luc Bouchot, Martin Reinhardt, and Bettina Heise. Generalized analytic signals in image processing: comparison, theory and applications. In Quaternion and Clifford Fourier Transforms and Wavelets, pp. 221-246. Springer, 2013.

Joan Bruna and Stephane Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872-1886, 2013b.

Joan Bruna, Arthur Szlam, and Yann LeCun. Learning stable group invariant representations with convolutional networks. arXiv preprint arXiv:1301.3537, 2013.

Gustavo Carneiro, Jacinto Nascimento, and Andrew P. Bradley. Unregistered multiview mammogram analysis with pre-trained deep learning models. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 652-660. Springer, 2015.

Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 766-774, 2014.

Ian Goodfellow, Nicolas Papernot, and Patrick McDaniel. cleverhans v0.1: an adversarial machine learning library. arXiv preprint arXiv:1610.00768, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Yann LeCun, Koray Kavukcuoglu, Clement Farabet, et al. Convolutional networks and applications in vision. In ISCAS, pp. 253-256, 2010.

Stephane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331-1398, 2012.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. arXiv preprint arXiv:1511.04599, 2015.

James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. Object retrieval with large vocabularies and fast spatial matching. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8. IEEE, 2007.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

Kevin Swersky, Jasper Snoek, and Ryan P. Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pp. 2004-2012, 2013.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818-833. Springer, 2014.
rJbbOLcex | [{"section_index": "0", "section_name": "TOPICRNN: A RECURRENT NEURAL NETWORK WITH LONG-RANGE SEMANTIC DEPENDENCY", "section_text": "Adji B. Dieng\nColumbia University\nIn this paper, we propose TopicRNN, a recurrent neural network (RNN)-based. language model designed to directly capture the global semantic meaning relating. words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence - both semantic and. syntactic - but might face difficulty remembering long-range dependencies. Intu-. itively, these long-range dependencies are of semantic nature. In contrast, latent. topic models are able to capture the global semantic structure of a document but. do not account for word ordering. The proposed TopicRNN model integrates the. merits of RNNs and latent topic models: it captures local (syntactic) dependen. cies using an RNN and global (semantic) dependencies using latent topics. Unlike. previous work on contextual RNN language modeling, our model is learned end-. to-end. Empirical results on word prediction show that TopicRNN outperforms existing contextual RNN baselines. In addition, TopicRNN can be used as an un-. supervised feature extractor for documents. We do this for sentiment analysis on. the IMDB movie review dataset and report an error rate of 6.28%. This is com. parable to the state-of-the-art 5.91% resulting from a semi-supervised approach. Finally, TopicRNN also yields sensible topics, making it a useful alternative to. document models such as latent Dirichlet allocation.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "\"The U.S.presidential race isn't only drawing attention and controversy in the United States - it's. being closely watched across the globe. But what does the rest of the world think about a campaign that has already thrown up one surprise after another? CNN asked 10 journalists for their take on the race so far, and what their country might be hoping for in America's next -.\nThe missing word in the text above is easily predicted by any human to be either President or Commander in Chief or their synonyms. There have been various language models - from simple n- grams to the most recent RNN-based language models - that aim to solve this problem of predicting correctly the subsequent word in an observed sequence of words.\nA good language model should capture at least two important properties of natural language. The first one is correct syntax. In order to do prediction that enjoys this property, we often only need tc consider a few preceding words. Therefore, correct syntax is more of a local property. Word orde matters in this case. The second property is the semantic coherence of the prediction. To achieve\nChong Wang\nDeep Learning Technology Center Microsoft Research\njpaisley@columbia.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "When reading a document, short or long, humans have a mechanism that somehow allows them to remember the gist of what they have read so far. Consider the following example:\nthis, we often need to consider many preceding words to understand the global semantic meaning o. the sentence or document. The ordering of the words usually matters much less in this case\nBecause they only consider a fixed-size context window of preceding words, traditional n-gran and neural probabilistic language models (Bengio et al., 2003) have difficulties in capturing globa semantic information. 
To overcome this, RNN-based language models (Mikolov et al., 2010; 2011) use hidden states to \"remember' the history of a word sequence. However, none of these approaches. explicitly model the two main properties of language mentioned above, correct syntax and semantic coherence. Previous work by Chelba and Jelinek (2000) and Gao et al. (2004) exploit syntactic 01. semantic parsers to capture long-range dependencies in language.\nThe remainder of the paper is organized as follows: Section 2 provides background on RNN-based language models and probabilistic topic models. Section 3 describes the TopicRNN network ar- chitecture, its generative process and how to perform inference for it. Section 4 presents per-word perplexity results on the Penn TreeBank dataset and the classification error rate on the IMDB 100K dataset. Finally, we conclude and provide future research directions in Section 5."}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "We present the background necessary for building the TopicRNN model. We first review RNN-based language modeling, followed by a discussion on the construction of latent topic models\nLanguage modeling is fundamental to many applications. Examples include speech recognition anc machine translation. A language model is a probability distribution over a sequence of words in. a predefined vocabulary. More formally, let V be a vocabulary set and y1, ..., yr a sequence of T. words with each yt E V. A language model measures the likelihood of a sequence through a joint. probability distribution,\nT p(y1, , yr) = p(y1) II p(ytY1:t-1 t=2\nTraditional n-gram and feed-forward neural network language models (Bengio et al., 2003) typically. make Markov assumptions about the sequential dependencies between words, where the chain rule shown above limits conditioning to a fixed-size context window..\nRNN-based language models (Mikolov et al., 2011) sidestep this Markov assumption by defining. the conditional probability of each word yt given all the previous words y1:t-1 through a hidden\n1Ghosh et al. (2016) did not publish results on the PTB and we did not find the code online\nn this paper, we propose TopicRNN, a RNN-based language model that is designed to directly capture long-range semantic dependencies via latent topics. These topics provide context to the RNN. Contextual RNNs have received a lot of attention (Mikolov and Zweig, 2012; Mikolov et al. 2014; Ji et al., 2015; Lin et al., 2015; Ji et al., 2016; Ghosh et al., 2016). However, the models closest to ours are the contextual RNN model proposed by Mikolov and Zweig (2012) and its mos1 recent extension to the long-short term memory (LSTM) architecture (Ghosh et al., 2016). These models use pre-trained topic mode1 features as an additional input to the hidden states and/or the output of the RNN. In contrast, TopicRNN does not require pre-trained topic model features and can be learned in an end-to-end fashion. We introduce an automatic way for handling stop words that topic models usually have difficulty dealing with. Under a comparable model size set up, TopicRNN achieves better perplexity scores than the contextual RNN model of Mikolov and Zweig (2012) on the Penn TreeBank dataset '. Moreover, TopicRNN can be used as an unsupervised feature extractor for downstream applications. For example, we derive document features of the IMDB movie review dataset using TopicRNN for sentiment classification. We reported an error rate of 6.28%. 
This is close to the state-of-the-art 5.91% (Miyato et al., 2016) despite that we do not use the labels and adversarial training in the feature extraction stage.\nstate ht (typically via a softmax function):\nWhile in principle RNN-based models can \"remember' arbitrarily long histories if provided enough capacity, in practice such large-scale neural networks can easily encounter difficulties during opti- mization (Bengio et al., 1994; Pascanu et al., 2013; Sutskever, 2013) or overfitting issues (Srivastava et al., 2014). Finding better ways to model long-range dependencies in language modeling is there- fore an open research challenge. As motivated in the introduction, much of the long-range depen dency in language comes from semantic coherence, not from syntactic structure which is more of a local phenomenon. Therefore, models that can capture long-range semantic dependencies in lan- guage are complementary to RNNs. In the following section, we describe a family of such models called probabilistic topic models."}, {"section_index": "4", "section_name": "2.2 PROBABILISTIC TOPIC MODELS", "section_text": "Probabilistic topic models are a family of models that can be used to capture global semantic co herency (Blei and Lafferty, 2o09). They provide a powerful tool for summarizing, organizing, and navigating document collections. One basic goal of such models is to find groups of words that tenc to co-occur together in the same document. These groups of words are called topics and represent a probability distribution that puts most of its mass on this subset of the vocabulary. Documents are then represented as mixtures over these latent topics. Through posterior inference, the learned topics capture the semantic coherence of the words they cluster together (Mimno et al., 2011).\nThe simplest topic model is latent Dirichlet allocation (LDA) (Blei et al., 2003). It assumes K underlying topics = {1,..., k} , each of which is a distribution over a fixed vocabulary. The generative process of LDA is as follows:. First generate the K topics, k ~uid Dirichlet(). Then for each document containing words y1:T,. independently generate document-level variables and data:.\n1. Draw a document-specific topic proportion vector 0 ~ Dirichlet(a) 2. For the tth word in the document,. (a) Draw topic assignment zt ~ Discrete(0) (b) Draw word yt ~ Discrete(z+)\nMarginalizing each zt, we obtain the probability of y1:T via a matrix factorization followed by an integration over the latent variable 0,.\nIn LDA the prior distribution on the topic proportions is a Dirichlet distribution; it can be replaced by many other distributions. For example, the correlated topic model (Blei and Lafferty, 2006) uses a log-normal distribution. Most topic models are \"bag of words\"' models in that word order is ignored This makes it easier for topic models to capture global semantic information. However, this is alsc one of the reasons why topic models do not perform well on general-purpose language modeling applications such as word prediction. While bi-gram topic models have been proposed (Wallach, 2006), higher order models quickly become intractable.\nAnother issue encountered by topic models is that they do not model stop words well. This is because stop words usually do not carry semantic meaning; their appearance is mainly to make the sentence more readable according to the grammar of the language. 
They also appear frequently in\np(yt|Y1:t-1) = p(yt|ht), ht =f(ht-1,xt)\nThe function f() can either be a standard RNN cell or a more complex cell such as GRU (Cho et al., 2014) or LSTM (Hochreiter and Schmidhuber, 1997). The input and target words are related via the relation xt = yt-1. These RNN-based language models have been quite successful (Mikolov et al., 2011: Chelba et al., 2013; Jozefowicz et al., 2016).\nT T p(0) IIp(zt|9)p(yt|zt,B)d0 = p(0) I(B0)y d0. p(y1:T[B) = t=1 Zt t=1\nalmost every document and can co-occur with almost any word?. In practice, these stop words are chosen using tf-idf (Blei and Lafferty, 2009).\nWe next describe the proposed TopicRNN model. In TopicRNN, latent topic models are used tc capture global semantic dependencies so that the RNN can focus its modeling capacity on the loca dynamics of the sequences. With this joint modeling, we hope to achieve better overall performance on downstream applications.\nThe model. TopicRNN is a generative model. For a document containing the words y1:T\nx1 X2 x3 x4 X5 x6 U U U U U U Xc (bag-of-words) X (full document). W W W W stop words excluded. h2 h3 W h6 stop words included 71 n5 eX RNN W Y (target document) (b)\nFigure 1: (a) The unrolled TopicRNN architecture: x1, ..., x6 are words in the document, ht is the. state of the RNN at time step t, x = yi-1, l1,..., l6 are stop word indicators, and 0 is the latent. representation of the input document and is unshaded by convention. (b) The TopicRNN model. architecture in its compact form: l is a binary vector that indicates whether each word in the input. document is a stop word or not. Here red indicates stop words and blue indicates content words.\n1. Draw a topic vector3 0 ~ N(0, I) 2. Given word y1:t-1, for the tth word yt in the document,. (a) Compute hidden state ht = fw(xt, ht-1), where we let xt = yt-1 (b) Draw stop word indicator lt ~ Bernoulli(o(T' ht)), with o the sigmoid function (c) Draw word yt ~ p(yt|ht, 0,lt, B), where.\np(yt =i|ht,0,lt,B) x exp(v ht + (1- lt)b0)\nThe stop word indicator lt controls how the topic vector 0 affects the output. If lt = 1 (indicating yt is a stop word), the topic vector 0 has no contribution to the output. Otherwise, we add a bias tc favor those words that are more likely to appear when mixing with 0, as measured by the dot produc between 0 and the latent word vector b, for the ith vocabulary word. As we can see, the long range semantic information captured by 0 directly affects the output through an additive procedure Unlike Mikolov and Zweig (2012), the contextual information is not passed to the hidden layer of the RNN. The main reason behind our choice of using the topic vector as bias instead of passing it intc the hidden states of the RNN is because it enables us to have a clear separation of the contributions of global semantics and those of local dynamics. The global semantics come from the topics whicl are meaningful when stop words are excluded. However these stop words are needed for the loca dynamics of the language model. We hence achieve this separation of global vs local via a binary decision model for the stop words. It is unclear how to achieve this if we pass the topics to the\nhidden states of the RNN. This is because the hidden states of the RNN will account for all worc (including stop words) whereas the topics exclude stop words.\nWe show the unrolled graphical representation of TopicRNN in Figure 1(a). We denote all model parameters as O = {T, V, B, W, Wc} (see Appendix A.1 for more details). 
Parameter Wc is for the inference network, which we will introduce below. The observations are the word sequences y1:T. and stop word indicators l1:T.4 The log marginal likelihood of the sequence y1:T is.\nT p(0)I logp(y1:T, l1:T[ht) = log p(yt|ht,lt,0)p(lt|ht)d0 t=1\nL(y1:T,l1:r|q(0),O) = Eq(0) logp(yt|ht, lt, 0) + logp(lt|ht) + logp(0) - log q(0\nwhere g(-) denotes the feed-forward neural network. The weight matrices W1, W2 and biases a1 a, are shared across documents. Each document has its own (X) and o(X) resulting in a unique distribution q(0|Xc) for each document. The output of the inference network is a distribution on 0 which we regard as the summarization of the semantic information, similar to the topic proportions in latent topic models. We show the role of the inference network in Figure 1(b). During training the parameters of the inference network and the model are jointly learned and updated via truncated backpropagation through time using the Adam algorithm (Kingma and Ba, 2014). We use stochastic samples from q(0|Xc) and the reparameterization trick towards this end (Kingma and Welling, 2013; Rezende et al., 2014).\nGenerating sequential text and computing perplexity. Suppose we are given a word sequence Y1:t-1, from which we have an initial estimation of q(0|Xc). To generate the next word yt, we. compute the probability distribution of yt given y1:t-1 in an online fashion. We choose 0 to be a. point estimate 0, the mean of its current distribution q(0|Xc). Marginalizing over the stop word. indicator l+ which is unknown prior to observing yt, the approximate distribution of yt is\n(yt[Y1:t-1) ~ p(yt|ht,0,lt)p(lt|ht)\nModel Complexity. TopicRNN has a complexity of O(H H + H (C + K) + Wc), where. H is the size of the hidden layer of the RNN, C is the vocabulary size, K is the dimension of the. topic vector, and Wc is the number of parameters of the inference network. The contextual RNN of Mikolov and Zweig (2012) accounts for O(H H + H (C + K)), not including the pre-training. process, which might require more parameters than the additional W. in our complexity..\nModel inference. Direct optimization of Equation 2 is intractable so we use variational inference for approximating this marginal (Jordan et al., 1999). Let q(0) be the variational distribution on the marginalized variable 0. We construct the variational objective function, also called the evidence lower bound (ELBO). as follows:\nq(0|Xc,Wc) = N(0;(Xc),diag(o-(Xc))) (Xc) = Wig(Xc) + a1, logo(Xc) = W2g(Xc) + a2,\nThe predicted word yt is a sample from this predictive distribution. We update q(0|Xc) by including. yt to Xc if yt is not a stop word. However, updating q(0|Xc) after each word prediction is expensive.. so we use a sliding window as was done in Mikolov and Zweig (2012). To compute the perplexity we use the approximate predictive distribution above..\n4Stop words can be determined using one of the several lists available online. 
For example, ht tp : / / www"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "Table 1: Five Topics from the TopicRNN Model with 100 Neurons and 50 Topics on the PTB Data (The word s&p below shows as sp in the data.).\nLaw Company Parties Trading Cars law spending democratic stock gm lawyers sales republicans s&p auto judge advertising gop price ford rights employees republican investor jaguar attorney state senate standard car court taxes oakland chairman cars fiscal highway general investors headquarters common appropriation democrats retirement british mr budget bill holders executives insurance ad district merrill model Inferred Topic Distribution from TopicGRU Inferred Topic Distribution from TopicGRU Inferred Topic Distribution from TopicGRU .25 0.14 0.14 0.20 0.12 0.12 0.10 0.15 0.10 0.08 0.08 0.06 0.10 0.06 0.04 0.0 0.05 0.02 ...I 0.0 0.00 1C\nFigure 2: Inferred distributions using TopicGRU on three different documents. The content o. these documents is added on the appendix. This shows that some of the topics are being picked up. depending on the input document."}, {"section_index": "6", "section_name": "4.1 WORD PREDICTION", "section_text": "We first tested TopicRNN on the word prediction task using the Penn Treebank (PTB) portion of the Wall Street Journal. We use the standard split, where sections 0-20 (930K tokens) are used for training, sections 21-22 (74K tokens) for validation, and sections 23-24 (82K tokens) for test- ing (Mikolov et al., 2010). We use a vocabulary of size 10K that includes the special token unk for rare words and eos that indicates the end of a sentence. TopicRNN takes documents as inputs We split the PTB data into blocks of 10 sentences to constitute documents as done by (Mikolov anc Zweig, 2012). The inference network takes as input the bag-of-words representation of the input document. For that reason, the vocabulary size of the inference network is reduced to 9551 after excluding 449 pre-defined stop words.\nIn order to compare with previous work on contextual RNNs we trained TopicRNN using different network sizes. We performed word prediction using a recurrent neural network with 10 neurons,.\n5Our code will be made publicly available for reproducibility\nWe assess the performance of our proposed TopicRNN model on word prediction and sentiment. analysis. For word prediction we use the Penn TreeBank dataset, a standard benchmark for as- sessing new language models (Marcus et al., 1993). For sentiment analysis we use the IMDB 100k. dataset (Maas et al., 2011), also a common benchmark dataset for this application6. We use RNN,. LSTM, and GRU cells in our experiments leading to TopicRNN, TopicLSTM, and TopicGRU.\nTable 2: TopicRNN and its counterparts exhibit lower perplexity scores across different network sizes than reported in Mikolov and Zweig (2012). Table 2a shows per-word perplexity scores for 10. neurons. Table 2b and Table 2c correspond to per-word perplexity scores for 100 and 300 neurons. respectively. These results prove TopicRNN has more generalization capabilities: for example we only need a TopicGRU with 100 neurons to achieve a better perplexity than stacking 2 LSTMs with 200 neurons each: 112.4 vs 115.9)\n100 neurons and 300 neurons. For these experiments, we used a multilayer perceptron with 2 hidden layers and 200 hidden units per layer for the inference network. The number of topics was tuned depending on the size of the RNN. For 10 neurons we used 18 topics. 
For 100 and 300 neurons we found 50 topics to be optimal. We used the validation set to tune the hyperparameters of the model. We used a maximum of 15 epochs for the experiments and performed early stopping using the validation set. For comparison purposes we did not apply dropout and used 1 layer for the RNN and its counterparts in all the word prediction experiments as reported in Table 2. One epoch for 10 neurons takes 2.5 minutes. For 100 neurons, one epoch is completed in less than 4 minutes. Finally for 300 neurons one epoch takes less than 6 minutes. These experiments were ran on Microsofi Azure NC12 that has 12 cores, 2 Tesla K80 GPUs, and 112 GB memory. First, we show five randomly drawn topics in Table 1. These results correspond to a network with 100 neurons. We also illustrate some inferred topic distributions for several documents from TopicGRU in Figure 2 Similar to standard topic models, these distributions are also relatively peaky.\nNext, we compare the performance of TopicRNN to our baseline contextual RNN using perplexity Perplexity can be thought of as a measure of surprise for a language model. It is defined as the exponential of the average negative log likelihood. Table 2 summarizes the results for different network sizes. We learn three things from these tables. First, the perplexity is reduced the larger the network size. Second, RNNs with context features perform better than RNNs without context features. Third, we see that TopicRNN gives lower perplexity than the previous baseline result reported by Mikolov and Zweig (2012). Note that to compute these perplexity scores for word prediction we use a sliding window to compute 0 as we move along the sequences. The topic vector 0 that is used from the current batch of words is estimated from the previous batch of words. This enables fair comparison to previously reported results (Mikolov and Zweig, 2012).7\n7we adjusted the scores in Table 2 from what was previously reported after correcting a bug in the compu tation of the ELBO.\nAnother aspect of the TopicRNN model we studied is its capacity to generate coherent text. To do. this, we randomly drew a document from the test set and used this document as seed input to the inference network to compute 0. Our expectation is that the topics contained in this seed document. are reflected in the generated text. Table 3 shows generated text from models learned on the PTB. and IMDB datasets. See Appendix A.3 for more examples..\nthey believe that they had senior damages to guarantee and frustration of unk stations eos the rush to minimum effect in composite trading the compound base inflated rate before the common charter 's report eos wells fargo inc. unk of state control funds without openly scheduling the university 's exchange rate has been downgraded it 's unk said eos the united cancer & began critical increasing rate of N N at N N to N N are less for the country to trade rate for more than three months $ N workers were mixed eos\nlee is head to be watched unk month she eos but the acting surprisingly nothing is very good eos i cant believe that he can unk to a role eos may appear of for the stupid killer really to help with unk unk unk if you wan na go to it fell to the plot clearly eos it gets clear of this movie 70 are so bad mexico direction regarding those films eos then go as unk 's walk and after unk to see him try to unk before that unk with this film\nTable 4: Classification error rate on IMDB 1o0k dataset. 
TopicRNN provides the state of the a error rate on this dataset.\nModel Reported Error rate. BoW (bnc) (Maas et al., 2011) 12.20% BoW (b tc) (Maas et al., 2011) 11.77% LDA (Maas et al., 2011) 32.58% Full + BoW (Maas et al., 2011) 11.67% Full + Unlabelled + BoW (Maas et al., 2011) 11.11% WRRBM (Dahl et al., 2012) 12.58% WRRBM + BoW (bnc) (Dahl et al., 2012) 10.77% MNB-uni (Wang & Manning, 2012) 16.45% MNB-bi (Wang & Manning, 2012) 13.41% SVM-uni (Wang & Manning, 2012) 13.05% SVM-bi (Wang & Manning, 2012) 10.84% NBSVM-uni (Wang & Manning, 2012) 11.71% seq2-bown-CNN (Johnson & Zhang, 2014) 14.70% NBSVM-bi (Wang & Manning, 2012) 8.78% Paragraph Vector (Le & Mikolov, 2014) 7.42% SA-LSTM with joint training (Dai & Le, 2015) 14.70% LSTM with tuning and dropout (Dai & Le, 2015) 13.50% LSTM initialized with word2vec embeddings (Dai & Le, 2015) 10.00% SA-LSTM with linear gain (Dai & Le, 2015) 9.17% LM-TM (Dai & Le, 2015) 7.64% SA-LSTM (Dai & Le, 2015) 7.24% Virtual Adversarial (Miyato et al. 2016) 5.91 % TopicRNN 6.28%"}, {"section_index": "7", "section_name": "4.2 SENTIMENT ANALYSIS", "section_text": "We performed sentiment analysis using TopicRNN as a feature extractor on the IMDB 100K dataset. This data consists of 100,O00 movie reviews from the Internet Movie Database (IMDB) website. The data is split into 75% for training and 25% for testing. Among the 75K training reviews, 50K are. unlabelled and 25K are labelled as carrying either a positive or a negative sentiment. All 25K test. reviews are labelled. We trained TopicRNN on 65K random training reviews and used the remaining. 10K reviews for validation. To learn a classifier, we passed the 25K labelled training reviews through. the learned TopicRNN model. We then concatenated the output of the inference network and the last state of the RNN for each of these 25K reviews to compute the feature vectors. We then usec. these feature vectors to train a neural network with one hidden layer, 50 hidden units, and a sigmoid. activation function to predict sentiment, exactly as done in Le and Mikolov (2014)..\nTo train the TopicRNN model, we used a vocabulary of size 5,000 and mapped all other words to the unk token. We took out 439 stop words to create the input of the inference network. We used 500 units and 2 layers for the inference network, and used 2 layers and 300 units per-layer for the"}, {"section_index": "8", "section_name": "Table 3: Generated text using the TopicRNN model on the PTB (top) and IMDB (bottom).", "section_text": "RNN. We chose a step size of 5 and defined 200 topics. We did not use any regularization such as dropout. We trained the model for 13 epochs and used the validation set to tune the hyperparameters of the model and track perplexity for early stopping. This experiment took close to 78 hours on a MacBook pro quad-core with 16GHz of RAM. See Appendix A.4 for the visualization of some of the topics learned from this data.\nTable 4 summarizes sentiment classification results from TopicRNN and other methods. Our error. rate is 6.28%.8 This is close to the state-of-the-art 5.91% (Miyato et al., 2016) despite that we do. not use the labels and adversarial training in the feature extraction stage. Our approach is most. similar to Le and Mikolov (2014), where the features were extracted in a unsupervised way and then a one-layer neural net was trained for classification..\nFigure 3 shows the ability of TopicRNN to cluster documents using the feature vectors as createc during the sentiment analysis task. 
Reviews with positive sentiment are coloured in green while reviews carrying negative sentiment are shown in red. This shows that TopicRNN can be used as an unsupervised feature extractor for downstream applications. Table 3 shows generated text from models learned on the PTB and IMDB datasets. See Appendix A.3 for more examples. The overall generated text from IMDB encodes a negative sentiment."}, {"section_index": "9", "section_name": "DISCUSSION AND FUTURE WORK", "section_text": "In this paper we introduced TopicRNN, a RNN-based language model that combines RNNs an latent topics to capture local (syntactic) and global (semantic) dependencies between words. Th global dependencies as captured by the latent topics serve as contextual bias to an RNN-based lar guage model. This contextual information is learned jointly with the RNN parameters by maxi mizing the evidence lower bound of variational inference. TopicRNN yields competitive per-wor perplexity on the Penn Treebank dataset compared to previous contextual RNN models. We hav reported a competitive classification error rate for sentiment analysis on the IMDB 100K datase1 We have also illustrated the capacity of TopicRNN to generate sensible topics and text. In future work, we will study the performance of TopicRNN when stop words are dynamicall discovered during training. We will also extend TopicRNN to other applications where capturin context is important such as in dialog modeling. If successful, this will allow us to have a model tha\n8The experiments were solely based on TopicRNN. Experiments using TopicGRU/TopicLSTM are bein carried out and will be added as an extended version of this paper.\nFigure 3: Clusters of a sample of 10000 movie reviews from the IMDB 100K dataset using Top. icRNN as feature extractor. We used K-Means to cluster the feature vectors. We then used PCA to reduce the dimension to two for visualization purposes. red is a negative review and green is a. positive review.\nREFERENCES Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent i difficult. IEEE transactions on neural networks, 5(2):157-166, 1994. Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model. journa of machine learning research, 3(Feb):1137-1155, 2003. D. Blei and J. Lafferty. Correlated topic models. Advances in neural information processing systems 18:147, 2006. D. M. Blei and J. D. Lafferty. Topic models. Text mining: classification, clustering, and applications 10(71):34, 2009. D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993-1022, 2003. C. Chelba and F. Jelinek. Structured language modeling. Computer Speech & Language, 14(4) 283-332, 2000. C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One bil lion word benchmark for measuring progress in statistical language modeling. arXiv preprin arXiv:1312.3005, 2013. K. Cho, B. Van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio Learning phrase representations using rnn encoder-decoder for statistical machine translation arXiv preprint arXiv:1406.1078, 2014. A. M. Dai and Q. V. Le. Semi-supervised sequence learning. In Advances in Neural Informatio. Processing Systems, pages 3079-3087, 2015. J. Gao, J.-Y. Nie, G. Wu, and G. Cao. Dependence language model for information retrieval. 
In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 170-177. ACM, 2004.
S. Ghosh, O. Vinyals, B. Strope, S. Roy, T. Dean, and L. Heck. Contextual LSTM (CLSTM) models for large scale NLP tasks. arXiv preprint arXiv:1602.06291, 2016.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Y. Ji, T. Cohn, L. Kong, C. Dyer, and J. Eisenstein. Document context language models. arXiv preprint arXiv:1511.03962, 2015.
Y. Ji, G. Haffari, and J. Eisenstein. A latent variable recurrent neural network for discourse relation language models. arXiv preprint arXiv:1603.01913, 2016.
M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
R. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In ICML, volume 14, pages 1188-1196, 2014.
R. Lin, S. Liu, M. Yang, M. Li, M. Zhou, and S. Li. Hierarchical recurrent neural network for document modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 899-907, 2015.
"}, {"section_index": "10", "section_name": "A APPENDIX", "section_text": "Table 5: Dimensions of the parameters of the model
Parameter: U | Γ | W | V | B | Θ | W1 | W2
Dimension: C × H | H | H × H | H × C | K × C | K | E | E
A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 142-150. Association for Computational Linguistics, 2011.
M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
Y. Miao, L. Yu, and P. Blunsom. Neural variational inference for text processing. arXiv preprint arXiv:1511.06038, 2015.
T. Mikolov and G. Zweig. Context dependent recurrent neural network language model. In SLT, pages 234-239, 2012.
T. Mikolov, M. Karafiat, L. Burget, J. Cernocky, and S. Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, page 3, 2010.
T. Mikolov, S. Kombrink, L. Burget, J. Cernocky, and S. Khudanpur.
Extensions of recurrent neural network language model. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5528-5531. IEEE, 2011.
T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753, 2014.
D. Mimno, H. M. Wallach, E. Talley, M. Leenders, and A. McCallum. Optimizing semantic coherence in topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 262-272. Association for Computational Linguistics, 2011.
T. Miyato, A. M. Dai, and I. Goodfellow. Adversarial training methods for semi-supervised text classification. stat, 1050:7, 2016.
R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318, 2013.
D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.
I. Sutskever. Training recurrent neural networks. PhD thesis, University of Toronto, 2013.
H. M. Wallach. Topic modeling: beyond bag-of-words. In Proceedings of the 23rd International Conference on Machine Learning, pages 977-984. ACM, 2006.
We use the following notation: C is the vocabulary size (including stop words), H is the number of hidden units of the RNN, K is the number of topics, and E is the dimension of the inference network hidden layer. Table 5 gives the dimension of each of the parameters of the TopicRNN model (ignoring the biases).
Figure on the right: in hartford conn. the charter oak bridge will soon be replaced the <unk> <unk> from its <unk> <unk> to a park <unk> are possible citizens in peninsula ohio upset over changes to a bridge negotiated a deal the bottom half of the <unk> will be type f while the top half will have the old bridge 's <unk> pattern similarly highway engineers agreed to keep the old <unk> on the key bridge in washington d.c. as long as they could install a crash barrier between the sidewalk and the road <unk> <unk> drink carrier competes with <unk> <unk> <unk> just got easier or so claims <unk> corp the maker of the <unk> the chicago company 's beverage carrier meant to replace <unk> <unk> at <unk> stands and fast-food outlets resembles the plastic <unk> used on <unk> of beer only the <unk> hang from a <unk> of <unk> the new carrier can <unk> as many as four <unk> at once inventor <unk> marvin says his design virtually <unk>
Text1: but the refcorp bond fund might have been unk and unk of the point rate eos house in
national unk wall restraint in the property pension fund sold willing to zenith was guaranteed by $ N million at short-term rates maturities around unk products eos deposit posted yields slightly
Text2: it had happened by the treasury 's clinical fund month were under national disappear institutions but secretary nicholas instruments succeed eos and investors age far compound average new york stock exchange bonds typically sold $ N shares in the N but paying yields further an average rate of long-term funds.
We illustrate below some generated text resulting from training TopicRNN on the IMDB dataset. The settings are the same as for the sentiment analysis experiment:
the film 's greatest unk unk and it will likely very nice movies to go to unk why various david proves eos the story were always well scary friend high can be a very strange unk unk is in love with it lacks even perfect for unk for some of the worst movies come on a unk gave a rock unk eos whatever let 's possible eos that kyle can 't different reasons about the unk and was not what you 're not a fan of unk unk us rock which unk still in unk 's music unk one as.
"}, {"section_index": "11", "section_name": "A.4 TOPICS FROM IMDB :", "section_text": "Below we show some topics resulting from the sentiment analysis on the IMDB dataset. The total number of topics is 200. Note that all the topics revolve around movies, which is expected since all reviews are about movies.
Table 6: Some Topics from the TopicRNN Model on the IMDB Data
pitt tarantino producing ken hudson campbell campbell dramas dragged africa cameron popcorn opera spots vicious cards practice carrey robinson circumstances dollar francisco unbearable ninja kong flight burton cage los catches cruise hills awake kubrick freeman revolution nonsensical intimate useless rolled friday murphy refuses cringe lie costs easier expression 2002 cheese lynch alongside repeated kurosawa struck scorcese"}]
Sy4tzwqxe
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Probabilistic modeling provides a principled approach for reasoning under uncertainty, and has been increasingly dominant in modern machine learning, where highly complex, structured probabilistic models are often the essential components for solving complex problems with increasingly large datasets. A key challenge, however, is to develop computationally efficient Bayesian inference methods to approximate, or draw samples from, the posterior distributions. Variational inference (VI) provides a powerful tool for scaling Bayesian inference to complex models and big data. The basic idea of VI is to approximate the true distribution with a simpler distribution by minimizing the KL divergence, transforming the inference problem into an optimization problem, which is often then solved efficiently using stochastic optimization techniques (e.g., Hoffman et al., 2013; Kingma & Welling, 2013). However, the practical design and application of VI are still largely restricted by the requirement of using simple approximation families, as we explain in the sequel.
Let p(z) be a distribution of interest, such as the posterior distribution in Bayesian inference. VI approximates p(z) with a simpler distribution q*(z) found in a set Q = {q_η(z)} of distributions indexed by parameter η, by minimizing the KL divergence objective:
$$\min_{\eta}\Big\{\,\mathrm{KL}(q_\eta \,\|\, p) = \mathbb{E}_{z\sim q_\eta}\big[\log\big(q_\eta(z)/p(z)\big)\big]\Big\}, \qquad (1)$$
where we can get the exact result p = q* if Q is chosen to be broad enough to actually include p. In practice, however, Q should be chosen carefully to make the optimization in (1) computationally tractable; this casts two constraints on Q:
1. A minimum requirement is that we should be able to sample from q_η efficiently, which allows us to make estimates and predictions based on q_η in place of the more intractable p. The samples from q_η can also be used to approximate the expectation E_{q_η} in (1) during optimization. This means that there should exist some computable function f(η; ξ), called the inference network, which takes a random seed ξ, whose distribution is denoted by q_0, and outputs a random variable z = f(η; ξ) whose distribution is q_η.
2. We should also be able to calculate the density q_η(z) or its derivative in order to optimize the KL divergence in (1). This, however, casts a much more restrictive condition, since it requires us to use only simple inference networks f(η; ξ) and input distributions q_0 to ensure a tractable form for the density q_η of the output z = f(η; ξ).
In fact, it is this requirement of calculating q_η(z) that has been the major constraint for the design of state-of-the-art variational inference methods.
Figure 1: Wild variational inference allows us to train general stochastic neural inference networks to learn to draw (approximate) samples from the target distributions, without restriction on the computational tractability of the density function of the neural inference networks. (Panels: given distribution, inference network, samples.)
The traditional VI methods are often limited to using simple mean-field or Gaussian-based distributions as q_η and do not perform well for approximating complex target distributions.
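To make the two constraints concrete, here is a toy illustration (not from the paper's code; all names are placeholders) of why classical VI is tied to simple families: a Gaussian inference network z = f(η; ξ) = μ + σξ satisfies both requirements at once, whereas a deep nonlinear f keeps requirement 1 but loses requirement 2:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, log_sigma = 1.5, -0.5                    # eta = (mu, log_sigma)

def f(xi):                                   # requirement 1: cheap sampling
    return mu + np.exp(log_sigma) * xi

def log_q(z):                                # requirement 2: tractable density
    s = np.exp(log_sigma)
    return -0.5 * ((z - mu) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))

z = f(rng.standard_normal(10000))            # samples from q_eta
# For a deep, nonlinear f, sampling stays this easy, but log_q becomes
# intractable; that gap is exactly what wild variational inference targets.
```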
There is a line of recent work on variational inference with rich approximation families (e.g., Rezende & Mohamed, 2015b; Tran et al., 2015; Ranganath et al., 2015, to name only a few), all based on handcrafting special inference networks to ensure the computational tractability of q_η(z) while simultaneously obtaining high approximation accuracy. These approaches require substantial mathematical insight and research effort, and can be difficult to understand or use for practitioners without a strong research background in VI. Methods that allow us to use arbitrary inference networks without substantial constraints can significantly simplify the design and applications of VI methods, allowing practical users to focus more on choosing proposals that work best with their specific tasks.
We use the term wild variational inference to refer to variants of variational methods working with general inference networks f(η; ξ) without tractability constraints on the output density q_η(z); this should be distinguished from black-box variational inference (Ranganath et al., 2014), which refers to methods that work for generic target distributions p(z) without significant model-by-model consideration (but still require calculating the proposal density q_η(z)). Essentially, wild variational inference makes it possible to "learn to draw samples", constructing black-box neural samplers for given distributions. This enables more adaptive and automatic design of efficient Bayesian inference procedures, replacing hand-designed inference algorithms with more efficient ones that can improve their efficiency adaptively over time based on past tasks they performed.
In this work, we discuss two methods for wild variational inference, both based on recent works that combine kernel techniques with Stein's method (e.g., Liu & Wang, 2016; Liu et al., 2016). The first method, also discussed in Wang & Liu (2016), is based on iteratively adjusting parameter η to make the random output z = f(η; ξ) mimic a Stein variational gradient direction (SVGD) (Liu & Wang, 2016) that optimally decreases its KL divergence with the target distribution. The second method is based on minimizing a kernelized Stein discrepancy, which, unlike the KL divergence, does not require calculating the density q_η(z) for the optimization thanks to its special form.
Another critical problem is to design good network architectures well suited for Bayesian inference. Ideally, the network design should leverage the information of the target distribution p(z) in a convenient way. One useful perspective is that we can view the existing MC/MCMC methods as (hand-designed) stochastic neural networks which can be used to construct native inference networks for given target distributions. On the other hand, using existing MC/MCMC methods as inference networks also allows us to adaptively adjust the hyper-parameters of these algorithms; this enables amortized inference, which leverages the experience on past tasks to accelerate the Bayesian computation, providing a powerful approach for designing efficient algorithms in settings where a large number of similar tasks are needed.
As an example, we leverage stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011) as the inference network, which can be treated as a special deep residual network (He et al., 2016) in which the important gradient information ∇_z log p(z) is fed into each layer to allow efficient approximation of the target distribution p(z). In our case, the network parameters η are the step sizes of SGLD, and our method provides a way to adaptively improve the step sizes, providing speed-up on future tasks with similar structures.
We show that the adaptively estimated step sizes significantly outperform hand-designed schemes such as Adagrad.
There is a large literature on traditional adaptive MCMC methods (e.g., Andrieu & Thoms, 2008; Roberts & Rosenthal, 2009), which can be used to adaptively adjust the proposal distribution of MCMC by exploiting special theoretical properties of MCMC (e.g., by minimizing the auto-correlation). Our method is simpler, more generic, and works efficiently in practice thanks to the use of gradient-based back-propagation. Finally, connections between stochastic gradient descent and variational inference have been discussed and exploited in Mandt et al. (2016); Maclaurin et al. (2015).
The auxiliary variational inference methods (e.g., Agakov & Barber, 2004) provide an alternative way when the variational distribution q_η(z) can be represented as a hidden variable model. In particular, Salimans et al. (2015) used the auxiliary variational approach to leverage MCMC as a variational approximation. These approaches, however, still require writing down the likelihood function on the augmented space, and need to introduce an additional inference network related to the auxiliary variables.
Outline. Section 2 introduces background on Stein discrepancy and Stein variational gradient descent. Section 3 discusses two methods for wild variational inference. Section 4 discusses using stochastic gradient Langevin dynamics (SGLD) as the inference network. Empirical results are shown in Section 5.
Stein's identity. Stein's identity plays a fundamental role in our framework. Let p(z) be a positive differentiable density on R^d, and let φ(z) = [φ_1(z), ..., φ_d(z)]^T be a differentiable vector-valued function. Define the divergence ∇_z · φ := Σ_{l=1}^d ∂φ_l/∂z_l. Stein's identity is
$$\mathbb{E}_{z\sim p}\big[\langle \nabla_z \log p(z),\ \phi(z)\rangle + \nabla_z\cdot\phi(z)\big] \;=\; \int \nabla_z\cdot\big(p(z)\,\phi(z)\big)\,\mathrm{d}z \;=\; 0, \qquad (2)$$
which holds once p(z)φ(z) vanishes on the boundary of the domain, by integration by parts or Stokes' theorem. It is useful to rewrite Stein's identity in a more compact way:
$$\mathbb{E}_{z\sim p}\big[\mathcal{T}_p\phi(z)\big]=0, \qquad \text{with}\quad \mathcal{T}_p\phi \;\overset{\mathrm{def}}{=}\; \langle \nabla_z\log p,\ \phi\rangle + \nabla_z\cdot\phi, \qquad (3)$$
where T_p is called a Stein operator, which acts on a function φ and returns a function T_p φ(z) with zero mean under z ~ p. A key computational advantage of Stein's identity and the Stein operator is that they depend on p only through the derivative of the log-density, ∇_z log p(z), which does not depend on the cumbersome normalization constant of p: when p(z) = p̄(z)/Z, we have ∇_z log p(z) = ∇_z log p̄(z), independent of the normalization constant Z. This property makes Stein's identity a powerful practical tool for handling the unnormalized distributions that appear widely in machine learning and statistics.
Stein Discrepancy. Although Stein's identity ensures that T_p φ has zero expectation under p, its expectation is generally non-zero under a different distribution q. Instead, for p ≠ q, there must exist a φ which distinguishes p and q in the sense that E_{z~q}[T_p φ(z)] ≠ 0. Stein discrepancy leverages this fact to measure the difference between p and q by considering the "maximum violation of Stein's identity" for φ in a certain function set F:
$$\mathbb{D}(q\,\|\,p)=\max_{\phi\in\mathcal{F}}\ \big\{\mathbb{E}_{z\sim q}[\mathcal{T}_p\phi(z)]\big\}, \qquad (4)$$
where F is the set of functions that we optimize over, which decides both the discriminative power and the computational tractability of the Stein discrepancy. Kernelized Stein discrepancy (KSD) is a special Stein discrepancy that takes F to be the unit ball of a vector-valued reproducing kernel Hilbert space (RKHS), that is,
$$\mathbb{D}(q\,\|\,p)=\max_{\phi\in\mathcal{H}^d}\ \big\{\mathbb{E}_{z\sim q}[\mathcal{T}_p\phi(z)]\quad \text{s.t.}\quad \|\phi\|_{\mathcal{H}^d}\le 1\big\}. \qquad (5)$$
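A quick numerical check of Stein's identity (2)-(3), included here as an illustration rather than anything from the paper's code. For a standard Gaussian p = N(0, I) we have ∇_z log p(z) = -z, so the Stein operator applied to any smooth φ should have (approximately) zero empirical mean under samples from p:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 200000
z = rng.standard_normal((n, d))          # samples from p = N(0, I)

def phi(z):                              # an arbitrary smooth test function
    return np.sin(z)                     # phi_l(z) = sin(z_l), elementwise

def div_phi(z):                          # divergence: sum_l d phi_l / d z_l
    return np.cos(z).sum(axis=1)

score = -z                               # grad_z log p(z) for N(0, I)
stein = (score * phi(z)).sum(axis=1) + div_phi(z)   # T_p phi(z) per sample
print(stein.mean())                      # close to 0, up to Monte Carlo error
```

Replacing the samples z with draws from any q ≠ p makes this mean drift away from zero, which is exactly the violation that the Stein discrepancy (4) maximizes over φ.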
This optimization admits a simple closed form (Liu et al., 2016; Chwialkowski et al., 2016):
$$\mathbb{D}^2(q\,\|\,p)=\mathbb{E}_{z,z'\sim q}\big[\kappa_p(z,z')\big], \qquad (6)$$
where κ_p(z, z') is a positive definite kernel obtained by applying the Stein operator on a kernel k(z, z') twice, once for each argument:
$$\kappa_p(z,z') = s_p(z)^{\top} s_p(z')\,k(z,z') + s_p(z)^{\top}\nabla_{z'}k(z,z') + s_p(z')^{\top}\nabla_{z}k(z,z') + \nabla_z\cdot\big(\nabla_{z'}k(z,z')\big), \qquad (7\text{-}8)$$
with s_p(z) := ∇_z log p(z) the score function of p. The form of KSD in (6) allows us to estimate the discrepancy between a set of samples {z_i} (e.g., drawn from q) and a distribution p specified by ∇_z log p(z):
$$\hat{\mathbb{D}}_u^2(\{z_i\}\,\|\,p)=\frac{1}{n(n-1)}\sum_{i\neq j}\kappa_p(z_i,z_j), \qquad \hat{\mathbb{D}}_v^2(\{z_i\}\,\|\,p)=\frac{1}{n^2}\sum_{i,j}\kappa_p(z_i,z_j), \qquad (9)$$
where D̂²_u(q || p) provides an unbiased estimator (hence called a U-statistic) of D²(q || p), and D̂²_v(q || p), called a V-statistic, provides a biased estimator but is guaranteed to be non-negative: D̂²_v({z_i} || p) ≥ 0.
Stein Variational Gradient Descent (SVGD). The Stein operator and Stein discrepancy have a close connection with the KL divergence, which is exploited in Liu & Wang (2016) to provide a general-purpose deterministic approximate sampling method. Assume that {z_i}_{i=1}^n is a sample (or a set of particles) drawn from q, and we want to update {z_i}_{i=1}^n to make it "move closer" to the target distribution p to improve the approximation quality. We consider updates of the form
$$z_i \leftarrow z_i + \epsilon\,\phi^*(z_i), \qquad \forall i=1,\ldots,n, \qquad (10)$$
where φ* is a perturbation direction, or velocity field, chosen to maximally decrease the KL divergence between the distribution of the updated particles and the target distribution, in the sense that
$$\phi^* = \arg\max_{\phi\in\mathcal{F}}\ \Big\{-\frac{\mathrm{d}}{\mathrm{d}\epsilon}\,\mathrm{KL}\big(q_{[\epsilon\phi]}\,\|\,p\big)\Big|_{\epsilon=0}\Big\}, \qquad \mathcal{F}=\{\phi\in\mathcal{H}^d : \|\phi\|_{\mathcal{H}^d}\le 1\}, \qquad (11)$$
where q_{[εφ]} denotes the density of the updated particle z' = z + εφ(z) when the density of the original particle z is q, and F is the set of perturbation directions that we optimize over. A key observation (Liu & Wang, 2016) is that the optimization in (11) is in fact equivalent to the optimization for KSD in (4); we have
$$-\frac{\mathrm{d}}{\mathrm{d}\epsilon}\,\mathrm{KL}\big(q_{[\epsilon\phi]}\,\|\,p\big)\Big|_{\epsilon=0} \;=\; \mathbb{E}_{z\sim q}\big[\mathcal{T}_p\phi(z)\big], \qquad (12)$$
that is, the Stein operator transforms the perturbation on the random variable (the particles) into the change of the KL divergence. Taking F to be the unit ball of H^d as in (5), the optimal solution φ* of (11) equals that of (6), which is shown to be (e.g., Liu et al., 2016)
$$\phi^*(z') \;\propto\; \mathbb{E}_{z\sim q}\big[\nabla_z\log p(z)\,k(z,z') + \nabla_z k(z,z')\big].$$
Since the direct parametric optimization of the KL divergence (1) requires calculating q_η(z), there are two essential ways to avoid calculating q_η(z): either using alternative (approximate) optimization approaches, or using different divergence objective functions. We discuss two possible approaches in this work: one based on "amortizing SVGD" (Wang & Liu, 2016), which trains the inference network f(η; ξ) so that its output mimics the SVGD dynamics in order to decrease the KL divergence; another based on minimizing the KSD objective (9), which does not require evaluating q_η(z) thanks to its special form.
"}, {"section_index": "2", "section_name": "3.1 AMORTIZED SVGD", "section_text": "SVGD provides an optimal updating direction to iteratively move a set of particles {z_i} towards the target distribution p(z). We can leverage it to train an inference network f(η; ξ) by iteratively adjusting η so that the output of f(η; ξ) changes along the Stein variational gradient direction in order to maximally decrease its KL divergence with the target distribution. By doing this, we "amortize" SVGD into a neural network, which allows us to leverage past experience to adaptively improve the computational efficiency and generalize to new tasks with similar structures. Amortized SVGD is also presented in Wang & Liu (2016); here we present some additional discussion. Concretely, at each iteration we adjust η to match the network outputs to the SVGD-updated particles:
$$\eta \leftarrow \arg\min_{\eta}\ \sum_{i=1}^{n}\big\|f(\eta;\xi_i) - z_i - \epsilon\,\Delta z_i\big\|_2^2, \qquad (14)$$
where Δz_i is the Stein variational gradient defined in (13) below.
Algorithm 1
for iteration t do
1. Draw random {ξ_i}_{i=1}^n, calculate z_i = f(η; ξ_i) and the Stein variational gradient Δz_i in (13).
2. Update parameter η using (14) or (15) for amortized SVGD, or (17) for KSD minimization.
end for
By approximating the expectation under q with the empirical mean of the current particles {z_i}_{i=1}^n, SVGD admits a simple form of update that iteratively moves the particles towards the target distribution:
$$z_i \leftarrow z_i + \epsilon\,\Delta z_i, \quad \forall i=1,\ldots,n, \qquad \Delta z_i = \hat{\mathbb{E}}_{z\in\{z_j\}_{j=1}^n}\big[\nabla_z\log p(z)\,k(z,z_i) + \nabla_z k(z,z_i)\big]. \qquad (13)$$
It is easy to see from (13) that Δz_i reduces to the typical gradient ∇_z log p(z_i) when there is only a single particle (n = 1) and ∇_z k(z, z_i) = 0 when z = z_i, in which case SVGD reduces to standard gradient ascent for maximizing log p(z) (i.e., maximum a posteriori (MAP)).
To be specific, assume {ξ_i} are drawn from q_0 and z_i = f(η; ξ_i) are the corresponding random outputs based on the current estimate of η. We want to adjust η so that z_i changes along the Stein variational gradient direction Δz_i in (13) so as to maximally decrease the KL divergence with the target distribution. This can be done by updating η via (14).
Essentially, this projects the non-parametric perturbation direction Δz_i onto the change of the finite-dimensional network parameter η. If we take the step size ε to be small, then the updated η by (14) should be very close to the old value, and a single step of gradient descent on (14) can provide a good approximation, giving the update
$$\eta \leftarrow \eta + \epsilon \sum_{i=1}^n \partial_\eta f(\eta;\xi_i)\,\Delta z_i, \qquad (15)$$
which can be intuitively interpreted as a form of chain rule that back-propagates the SVGD gradient to the network parameter η. In fact, when we have only one particle, (15) reduces to standard gradient ascent for max_η log p(f(η; ξ)), in which f_η is trained to "learn to optimize" (e.g., Andrychowicz et al., 2016) instead of "learn to sample" from p(z). Importantly, when we have more than one particle, the repulsive term ∇_z k(z, z_i) in Δz_i becomes active and enforces an amount of diversity on the network output that is consistent with the variation in p(z). The full algorithm is summarized in Algorithm 1.
Amortized SVGD can be treated as minimizing the KL divergence using a rather special algorithm: it leverages the non-parametric SVGD, which can be treated as approximately solving the infinite-dimensional optimization min_q KL(q || p) without explicitly assuming a parametric form for q, and iteratively projects the non-parametric update back to the finite-dimensional parameter space of η. It is an interesting direction to extend this idea to "amortize" other MC/MCMC-based inference algorithms. For example, given an MCMC method with transition probability T(z'|z) whose stationary distribution is p(z), we may adjust η to make the network output move towards the updated values drawn from the transition probability T(z'|z). The advantage of using SVGD is that it provides a deterministic gradient direction which we can back-propagate through conveniently, and it is particle efficient in that it reduces to "learning to optimize" with a single particle. We have been using the simple L2 loss in (14) mainly for convenience; it is possible to use other two-sample discrepancy measures, such as maximum mean discrepancy.
"}, {"section_index": "3", "section_name": "3.2 KSD VARIATIONAL INFERENCE", "section_text": "Amortized SVGD attempts to minimize the KL divergence objective, but cannot be interpreted as a typical finite-dimensional optimization over the parameter η. Here we provide an alternative method based on directly minimizing the kernelized Stein discrepancy (KSD) objective, for which, thanks to its special form, typical gradient-based optimization can be performed without needing to estimate q_η(z) explicitly.
To be specific, take q_η to be the density of the random output z = f(η; ξ) when ξ ~ q_0, and we want to find η to minimize D(q_η || p). Assuming {ξ_i} are i.i.d. drawn from q_0, we can approximate D²(q_η || p) unbiasedly with a U-statistic:
$$\mathbb{D}^2(q_\eta\,\|\,p)\;\approx\;\frac{1}{n(n-1)}\sum_{i\neq j}\kappa_p(z_i,z_j), \qquad (16)$$
for which standard gradient descent can be derived for optimizing η:
$$\eta \leftarrow \eta - \frac{2\epsilon}{n(n-1)}\sum_{i\neq j}\partial_\eta f(\eta;\xi_i)\,\nabla_{z_i}\kappa_p(z_i,z_j), \qquad \text{where}\quad z_i=f(\eta;\xi_i). \qquad (17)$$
This enables a wild variational inference method based on directly minimizing (16) over η with standard (stochastic) gradient descent.
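The following is a sketch of one iteration of Algorithm 1 using the amortized-SVGD update (15), for a toy standard-Gaussian target. The network `f_eta`, the RBF bandwidth, and the optimizer are illustrative choices, not the authors' released implementation; the surrogate loss is constructed so that its gradient with respect to η matches (15):

```python
import torch
import torch.nn as nn

def svgd_direction(z, score, h=1.0):
    # Delta z_i = mean_j [ k(z_j, z_i) score(z_j) + grad_{z_j} k(z_j, z_i) ], eq. (13)
    d2 = torch.cdist(z, z) ** 2
    k = torch.exp(-d2 / (2 * h ** 2))                       # k[j, i] = k(z_j, z_i)
    grad_k = (z.unsqueeze(0) - z.unsqueeze(1)) * k.unsqueeze(-1) / h ** 2
    return (k.t() @ score + grad_k.sum(dim=0)) / z.shape[0]

d = 2
f_eta = nn.Sequential(nn.Linear(d, 50), nn.ReLU(), nn.Linear(50, d))
opt = torch.optim.Adam(f_eta.parameters(), lr=1e-3)

def score(z):                      # grad_z log p(z) for the toy target p = N(0, I)
    return -z

for t in range(1000):
    xi = torch.randn(100, d)       # step 1: random seeds xi_i ~ q0
    z = f_eta(xi)                  # z_i = f(eta; xi_i)
    dz = svgd_direction(z.detach(), score(z.detach()))
    # Surrogate whose gradient w.r.t. eta equals -sum_i (d f / d eta) Delta z_i,
    # so one optimizer step follows the chain-rule update (15).  (step 2)
    loss = -(z * dz).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```

The KSD-minimization variant would instead differentiate the U-statistic (16) directly, which, as discussed below, is most convenient with automatic differentiation because κ_p involves the score of p.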
See Algorithm 1. Note that (17) is similar to (15) in form, but replaces Δz_i with a direction proportional to -Σ_{j: j≠i} ∇_{z_i} κ_p(z_i, z_j). It is also possible to use the V-statistic in (9), but we find that the U-statistic performs much better in practice, possibly because of its unbiasedness property.
Minimizing KSD can be viewed as minimizing a contrastive divergence objective function. To see this, recall that q_{[εφ*]} denotes the density of z' = z + εφ*(z) when z ~ q. Combining (11) and (6), we can show that
$$\mathbb{D}^2(q\,\|\,p) \;=\; -\frac{\mathrm{d}}{\mathrm{d}\epsilon}\,\mathrm{KL}\big(q_{[\epsilon\phi^*]}\,\|\,p\big)\Big|_{\epsilon=0} \;\approx\; \frac{1}{\epsilon}\Big(\mathrm{KL}(q\,\|\,p)-\mathrm{KL}\big(q_{[\epsilon\phi^*]}\,\|\,p\big)\Big).$$
That is, KSD measures the amount of decrease of the KL divergence when we update the particles along the optimal SVGD perturbation direction given by (11). If q = p, then the decrease of the KL divergence equals zero. This idea is also similar to the contrastive divergence used for learning restricted Boltzmann machines (RBM) (Hinton, 2002) (which, however, optimizes p with a fixed q). It is possible to extend this approach by replacing z' = z + εφ(z) with other transforms, such as those given by the transition probability of a Markov chain whose stationary distribution is p. In fact, according to the so-called generator method for constructing Stein operators (Barbour, 1988), any generator of a Markov process defines a Stein operator that can be used to define a corresponding Stein discrepancy.
This idea is also similar to a very recent work by Ranganath et al. (2016), which is based on directly minimizing the variational form of the Stein discrepancy in (4); Ranganath et al. (2016) assume F consists of a neural network φ_τ(z) parametrized by τ, and find η by solving the following min-max problem:
$$\min_{\eta}\ \max_{\tau}\ \mathbb{E}_{z\sim q_\eta}\big[\mathcal{T}_p\phi_\tau(z)\big].$$
In contrast, our method leverages the closed-form solution by taking F to be an RKHS, and hence obtains an explicit optimization problem instead of a min-max problem, which can be computationally more expensive or have difficulty achieving convergence.
Because κ_p(x, x') (defined in (8)) depends on the derivative ∇_x log p(x) of the target distribution, the gradient in (17) depends on the Hessian matrix ∇²_x log p(x) and is hence less convenient to implement than amortized SVGD (the method by Ranganath et al. (2016) has the same problem). However, this problem can be alleviated using automatic differentiation tools, which can directly take the derivative of the objective in (16) without manually deriving its derivatives.
"}, {"section_index": "4", "section_name": "4 LANGEVIN INFERENCE NETWORK", "section_text": "With wild variational inference, we can choose more complex inference network structures to obtain better approximation accuracy. Ideally, the best network structure should leverage the special properties of the target distribution p(z) in a convenient way. One way to achieve this is by viewing existing MC/MCMC methods as inference networks with hand-designed (and hence potentially suboptimal) parameters but good architectures that take the information of the target distribution p(z) into account. By applying wild variational inference to networks constructed from existing MCMC methods, we effectively provide a hyper-parameter optimization for these existing methods.
This allows us to fully optimize the potential of existing Bayesian inference methods, significantly improving the results at less computational cost, and decreasing the need for hyper-parameter tuning by human experts. This is particularly useful when we need to solve a large number of similar tasks, where the computational cost spent on optimizing the hyper-parameters can significantly improve the performance on future tasks.
Stochastic Gradient Langevin Dynamics. We first take the original stochastic gradient Langevin dynamics (SGLD) algorithm (Welling & Teh, 2011) as an example. SGLD starts from a random initialization z^0 and performs iterative updates of the form
$$z^{t+1} \leftarrow z^{t} + \eta_t \odot \nabla_z\log p(z^t;\,\mathcal{M}^t) + \sqrt{2\eta_t}\odot\xi^t, \qquad \forall t=1,\ldots,T, \qquad (18)$$
where ∇_z log p(z^t; M^t) denotes an approximation of ∇_z log p(z^t) based on, e.g., a random mini-batch M^t of observed data at the t-th iteration, ξ^t is a standard Gaussian random vector of the same size as z, and η_t denotes a (vector) step size at the t-th iteration; here "⊙" denotes the element-wise product. When running SGLD for T iterations, we can treat z^T as the output of a T-layer neural network (see the code sketch after the step-size list below)
Figure 2: Results on a 1D Gaussian mixture when training the step sizes of SGLD with T = 20 iterations. The target distribution p(x) is shown by the red dashed line. (a) The distribution of the initialization z^0 of SGLD (green line), visualized by a kernel density estimator. (b)-(e) The distribution of the final output z^T (green line) given by different types of step sizes, visualized by a kernel density estimator. (Panels: (a) Initialization, (b) Amortized SVGD, (c) KSD Minimization, (d) Constant Step Size, (e) Power Decay Step Size.)
parametrized by the collection of step sizes η = {η_t}_{t=1}^{T}, whose random inputs include the random initialization z^0, the mini-batches M^t, and the Gaussian noise ξ^t at each iteration t. We can see that this defines a rather complex network structure with several different types of random inputs (z^0, M^t, and ξ^t). This makes it intractable to explicitly calculate the density of z^T, and traditional variational inference methods cannot be applied directly. But wild variational inference can still allow us to adaptively improve the optimal step sizes η in this case.
General Langevin Networks. Based on the original formula of SGLD, we propose a more general Langevin network structure, in which each layer has the form
$$z^{t+1} \leftarrow A_t z^t + h\big(B_t B_t^{\top}\,\nabla_z\log p(z^t;\,\mathcal{M}^t) + B_t\,\xi^t + D_t\big), \qquad \forall t=1,\ldots,T, \qquad (19)$$
where A_t, B_t and D_t are network parameters at the t-th iteration (whose size is d × d, and d is the size of z^t), and h(·) denotes a smooth element-wise nonlinearity; here ξ^t is again a standard Gaussian random vector of the same size as z. With this more complex network, we can use fewer layers to construct more powerful black-box samplers.
"}, {"section_index": "5", "section_name": "5.1 SGLD INFERENCE NETWORK", "section_text": "We first test our algorithms with the SGLD inference network in (18), on both a toy Gaussian mixture model and a Bayesian logistic regression example. We find that we can adaptively learn step sizes that significantly outperform the existing hand-designed step-size schemes, and hence save computational cost in the testing phase. In particular, we compare with the following step-size schemes, for all of which we report the best results (testing accuracy in Figure 3(a); testing likelihood in Figure 3(b)) among a range of hyper-parameters:
1. Constant Step Size. We select the best constant step size in {1, 2, 2², ..., 2²⁹} × 10⁻⁶.
2. Power Decay Step Size. We consider ε_t = 10^a (b + t)^{-γ}, where γ = 0.55, a ∈ {-6, -5, ..., 1, 2}, b ∈ {0, 1, ..., 9}.
3. Adagrad, RMSprop, Adadelta, all with the master step size selected in {1, 2, 2², ..., 2²⁹} × 10⁻⁶ and the other parameters set to their default values.
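Below is a sketch of the SGLD inference network in (18), assuming a full-batch, differentiable score function and a fixed number of unrolled layers T; class and argument names are placeholders, not the authors' code. The only trainable parameters are the per-iteration step sizes η_t, so back-propagating through the unrolled chain trains the sampler:

```python
import math
import torch

class SGLDNet(torch.nn.Module):
    def __init__(self, T, dim, init_step=1e-3):
        super().__init__()
        # one step-size vector per layer/iteration, kept positive via exp
        self.log_steps = torch.nn.Parameter(
            torch.full((T, dim), math.log(init_step)))

    def forward(self, z0, grad_log_p):
        z = z0
        for t in range(self.log_steps.shape[0]):
            eta = self.log_steps[t].exp()                 # eta_t > 0
            noise = torch.randn_like(z)                   # xi_t ~ N(0, I)
            z = z + eta * grad_log_p(z) + (2 * eta).sqrt() * noise
        return z                                          # z^T, eq. (18)

# usage with a toy target p = N(0, I) and a far-away initialization:
net = SGLDNet(T=20, dim=1)
z_T = net(torch.randn(128, 1) - 10.0, lambda z: -z)
```

Training `net.log_steps` with either the amortized-SVGD surrogate or the KSD objective from Section 3 is what the experiments below evaluate; for mini-batch targets one would swap `grad_log_p` for a stochastic estimate computed on M^t.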
Gaussian Mixture. We start with a simple 1D Gaussian mixture example shown in Figure 2, where the target distribution p(x) is shown by the red dashed curve. We use amortized SVGD and KSD to optimize the step-size parameters of the Langevin inference network in (18) with T = 20 layers (i.e., SGLD with T = 20 iterations), with an initial z^0 drawn from a q_0 far away from the target distribution (see the green curve in Figure 2(a)); this makes it critical to choose proper step sizes to achieve a close approximation within T = 20 iterations. We find that amortized SVGD and KSD allow us to achieve good performance with 20 steps of SGLD updates (Figure 2(b)-(c)), while the results of the best constant step size and power-decay step size are much worse (Figure 2(d)-(e)).
[Figure 3: two panels, (a) test accuracy and (b) test log-likelihood versus the number of steps, comparing Amortized SVGD, KSD U-statistic, Adadelta, Constant Rate, Power Decay Rate, RMSprop, Adagrad, and fully converged SGLD/SVGD.]
Bayesian Logistic Regression. We consider Bayesian logistic regression for binary classification, using the same setting as Gershman et al. (2012), which assigns the regression weights w a Gaussian prior p_0(w | α) = N(w; 0, α⁻¹) and p_0(α) = Gamma(α; 1, 0.01). The inference is applied on the posterior of z = [w, log α]. We test this model on the binary Covertype dataset with 581,012 data points and 54 features.
To demonstrate that our estimated learning rates can work well on new datasets never seen by the algorithm, we partition the dataset into mini-datasets of size 50,000, and use 80% of them for training and 20% for testing. We adapt our amortized SVGD/KSD to train on the whole population of the training mini-datasets by randomly selecting a mini-dataset at each iteration of Algorithm 1, and evaluate the performance of the estimated step sizes on the remaining 20% testing mini-datasets.
Figure 3 reports the testing accuracy and likelihood on the 20% testing mini-datasets when we train the Langevin network with T = 10, 50, 100 layers, respectively. We find that our methods outperform all the hand-designed learning rates, and allow us to get performance closer to the fully converged SGLD and SVGD with a small number T of iterations.
Figure 3: The testing accuracy (a) and testing likelihood (b) when training the Langevin inference network with T ∈ {10, 50, 100} layers, respectively. The results reported here are the performance of the final result z^T output by the last layer of the network. We find that both amortized SVGD and KSD minimization (with U-statistics) outperform all the hand-designed learning rates. Results are averaged over 100 random trials.
Figure 4 shows the testing accuracy and testing likelihood of all the intermediate results when training the Langevin network with T = 100 layers. It is interesting to observe that amortized SVGD and KSD learn rather different behaviors: KSD tends to increase the performance quickly in the first few iterations but saturates quickly, while amortized SVGD tends to increase slowly in the beginning and boosts performance quickly in the last few iterations. Note that both algorithms are set up
to optimize the performance of the last layer, while needing to decide how to make progress on the intermediate layers to achieve the best final performance.
[Figure 4: two panels, (a) test accuracy and (b) test log-likelihood versus intermediate steps, comparing the same methods as Figure 3.]
Figure 4: The testing accuracy (a) and testing likelihood (b) of the outputs of the intermediate layers when training the Langevin network with T = 100 layers. Note that both amortized SVGD and KSD minimization target the performance of the last layer, but need to optimize the progress of the intermediate steps in order to achieve the best final results.
"}, {"section_index": "6", "section_name": "5.2 GENERAL LANGEVIN INFERENCE NETWORK", "section_text": "We further test our algorithms with the general Langevin inference network. We first construct a single-layer general Langevin network to approximate the posterior of the Bayesian logistic regression parameters, and achieve 74.58% average accuracy and 0.5216 average testing log-likelihood over 100 repeated experiments. This result shows that the proposed general Langevin inference network is quite competitive and worth exploring. Moreover, we use it as a black-box sampler to approximate more complicated Gaussian mixture distributions.
Gaussian Mixture. We consider 10-component Gaussian mixture models with the mean and covariance matrix of each component randomly drawn from a uniform distribution, and we test our methods on models of different dimensions.
We construct 6 layers of general Langevin networks as a black-box sampler, and use our two proposed methods to train the black-box sampler to approximate the target distribution. Figure 5 shows our results on the 50-dimensional Gaussian mixture case, and Figure 6 shows results for Gaussian mixtures of different dimensions.
Figure 5: Comparison between our methods and NUTS on a 50-dimensional Gaussian mixture. (a)-(c) show the mean square errors when using different numbers of particles to estimate the expectation E(h(x)) for h(x) = x, x², and cos(ωx + b); for cos(ωx + b), we randomly draw ω ~ N(0, 1) and b ~ Uniform([0, 2π]) and report the average MSE over 10 random draws of ω and b.
Figure 6: Comparison between our methods and NUTS for Gaussian mixtures of different dimensions. (a)-(c) show the mean square errors when using different numbers of particles to estimate the expectation E(h(x)) for h(x) = x, x², and cos(ωx + b); for cos(ωx + b), we randomly draw ω ~ N(0, 1) and b ~ Uniform([0, 2π]) and report the average MSE over 10 random draws of ω and b.
From the figures we can see that our proposed sampling structure is quite competitive with the NUTS sampler (Hoffman & Gelman, 2014), and that both variational inference methods can train a good black-box sampler.
[Figure 5 panels: (a) E(cos(ωx + b)), (b) E(x²), (c) E(x); log10 MSE versus the number of particles, comparing Langevin VGD, NUTS, and KSD U-statistic.]
[Figure 6 panels: (a) E(cos(ωx + b)), (b) E(x²), (c) E(x); log10 MSE versus dimension, comparing Langevin VGD, NUTS, and KSD U-statistic.]
We consider two methods for wild variational inference that allow us to train general inference networks with intractable density functions, and apply them to adaptively estimate step sizes of stochastic gradient Langevin dynamics. More studies are needed to develop better methods, more applications, and theoretical understanding of wild variational inference, and we hope that the two methods we discussed in the paper can motivate more ideas and studies in the field.
REFERENCES
Agakov, Felix V and Barber, David. An auxiliary variational method. In International Conference on Neural Information Processing, pp. 561-566. Springer, 2004.
Andrieu, Christophe and Thoms, Johannes. A tutorial on adaptive MCMC. Statistics and Computing, 18(4):343-373, 2008.
Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.
Barbour, Andrew D. Stein's method and Poisson process convergence. Journal of Applied Probability, pp. 175-184, 1988.
Chwialkowski, Kacper, Strathmann, Heiko, and Gretton, Arthur. A kernel test of goodness of fit. In Proceedings of the International Conference on Machine Learning (ICML), 2016.
Gershman, Samuel, Hoffman, Matt, and Blei, David. Nonparametric variational inference. In Proceedings of the International Conference on Machine Learning (ICML), 2012.
Gershman, Samuel J and Goodman, Noah D. Amortized inference in probabilistic reasoning. In Proceedings of the 36th Annual Conference of the Cognitive Science Society, 2014.
Gorham, Jack and Mackey, Lester. Measuring sample quality with Stein's method. In Advances in Neural Information Processing Systems (NIPS), pp. 226-234, 2015.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In CVPR, 2016.
Hinton, Geoffrey E. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
Hoffman, Matthew D, Blei, David M, Wang, Chong, and Paisley, John. Stochastic variational inference. JMLR, 2013.
Liu, Qiang, Lee, Jason D, and Jordan, Michael I. A kernelized Stein discrepancy for goodness-of-fit tests. In Proceedings of the International Conference on Machine Learning (ICML), 2016.
Mandt, Stephan, Hoffman, Matthew D, and Blei, David M. A variational analysis of stochastic gradient algorithms. arXiv preprint arXiv:1602.02666, 2016.
Paige, Brooks and Wood, Frank. Inference networks for sequential Monte Carlo in graphical models. arXiv preprint arXiv:1602.06701, 2016.
Hoffman, Matthew D and Gelman, Andrew.
The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1):1593-1623, 2014.
Kingma, Diederik P and Welling, Max. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2013.
Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan P. Early stopping is nonparametric variational inference. arXiv preprint arXiv:1504.01344, 2015.
Ranganath, Rajesh, Gerrish, Sean, and Blei, David M. Black box variational inference. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
Ranganath, Rajesh, Tran, Dustin, and Blei, David M. Hierarchical variational models. arXiv preprint arXiv:1511.02386, 2015.
Ranganath, R., Altosaar, J., Tran, D., and Blei, D. M. Operator variational inference. 2016.
Rezende, Danilo Jimenez and Mohamed, Shakir. Variational inference with normalizing flows. In Proceedings of the International Conference on Machine Learning (ICML), 2015a.
Rezende, Danilo Jimenez and Mohamed, Shakir. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015b.
Roberts, Gareth O and Rosenthal, Jeffrey S. Examples of adaptive MCMC. Journal of Computational and Graphical Statistics, 18(2):349-367, 2009.
Salimans, Tim et al. Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning, 2015.
Wang, Dilin and Liu, Qiang. Learning to draw samples: With application to amortized MLE for generative adversarial learning. Submitted to ICLR 2017, 2016.
Welling, Max and Teh, Yee W. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the International Conference on Machine Learning (ICML), 2011."}]
rJsiFTYex
[{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "When exploring a new problem, having a simple yet competitive off-the-shelf baseline is fundamental to new research. For instance, Caruana et al. (2008) showed random forests to be a strong baseline for many high-dimensional supervised learning tasks. For computer vision, off-the-shelf convolutional neural networks (CNNs) have earned their reputation as a strong baseline (Sharif Razavian et al., 2014) and basic building block for more complex models like visual question answering (Xiong et al., 2016). For natural language processing (NLP) and other sequential modeling tasks, recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, with a linear projection layer at the end have begun to attain a similar status. However, the standard LSTM is in many ways lacking as a baseline. Zaremba (2015), Gal (2015), and others show that large improvements are possible using a forget bias, inverted dropout regularization, or bidirectionality. We add three major additions with similar improvements to off-the-shelf LSTMs: Monte Carlo model averaging, embed average pooling, and residual connections. We analyze these and other more common improvements.
LSTM networks are among the most commonly used models for tasks involving variable-length sequences of data, such as text classification. The basic LSTM layer consists of six equations:
$$i_t = \tanh(W_i x_t + R_i h_{t-1} + b_i) \qquad (1)$$
$$j_t = \sigma(W_j x_t + R_j h_{t-1} + b_j) \qquad (2)$$
$$f_t = \sigma(W_f x_t + R_f h_{t-1} + b_f) \qquad (3)$$
$$o_t = \tanh(W_o x_t + R_o h_{t-1} + b_o) \qquad (4)$$
$$c_t = i_t \odot j_t + f_t \odot c_{t-1} \qquad (5)$$
$$h_t = o_t \odot \tanh(c_t) \qquad (6)$$
"}, {"section_index": "1", "section_name": "A WAY OUT OF THE ODYSSEY: ANALYZING AND COMBINING RECENT INSIGHTS FOR LSTMS", "section_text": "Sabeek Pradhan
Stanford University
Palo Alto, California
sabeekp@cs.stanford.edu
LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of augmentations and modifications to LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, average pooling, and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high quality baseline model.
[Figure 1 panels: (a) Monte Carlo for SST fine-grained error; (b) Monte Carlo for IMDB binary error. Each panel plots the Monte Carlo error against the number of Monte Carlo samples, with the inverted dropout error as a baseline.]
Figure 1: A comparison of the performance of Monte Carlo averaging, over sample size, to regular single-sample inverted dropout at test-time.
where σ is the sigmoid function, ⊙ is element-wise multiplication, and v_t is the value of variable v at timestep t. Each layer receives x_t from the layer that came before it and h_{t-1} and c_{t-1} from the previous timestep, and it outputs h_t to the layer that comes after it and h_t and c_t to the next timestep. The c and h values jointly constitute the recurrent state of the LSTM that is passed from one timestep to the next.
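For concreteness, here is a direct transcription of equations (1)-(6) as printed above; this is a sketch with randomly initialized weights (the W, R, and b names follow the equations, while the dimensions are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

d_in, d_h = 4, 8
W = {g: rng.standard_normal((d_h, d_in)) * 0.1 for g in "ijfo"}
R = {g: rng.standard_normal((d_h, d_h)) * 0.1 for g in "ijfo"}
b = {g: np.zeros(d_h) for g in "ijfo"}

def lstm_step(x, h, c):
    i = np.tanh(W["i"] @ x + R["i"] @ h + b["i"])   # (1)
    j = sigma(W["j"] @ x + R["j"] @ h + b["j"])     # (2)
    f = sigma(W["f"] @ x + R["f"] @ h + b["f"])     # (3)
    o = np.tanh(W["o"] @ x + R["o"] @ h + b["o"])   # (4)
    c = i * j + f * c                               # (5)
    h = o * np.tanh(c)                              # (6)
    return h, c

h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):            # a length-5 input sequence
    h, c = lstm_step(x, h, c)
```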
Since the h value completely updates at each timestep while the c value maintains part of its own value through multiplication by the forget gate f, h and c complement each other very well, with h forming a "fast" state that can quickly adapt to new information and c forming a "slow" state that allows information to be retained over longer periods of time (Zaremba, 2015). While various papers have tried to systematically experiment with the six core equations constituting an LSTM (Greff et al., 2015; Zaremba, 2015), in general the basic LSTM equations have proven extremely resilient and, if not optimal, at least a local maximum.
"}, {"section_index": "2", "section_name": "3 MONTE CARLO MODEL AVERAGING", "section_text": "It is common practice when applying dropout in neural networks to scale the weights up at train time (inverted dropout). This ensures that the expected magnitude of the inputs to any given layer is equivalent between train and test, allowing for an efficient computation of test-time predictions. However, for a model trained with dropout, test-time predictions generated without dropout merely approximate the ensemble of smaller models that dropout is meant to provide. A higher-fidelity method requires that test-time dropout be conducted in a manner consistent with how the model was trained. To achieve this, we sample k neural nets with dropout applied for each test example and average the predictions. With sufficiently large k this Monte Carlo average should approach the true model average (Srivastava et al., 2014). We show in Figure 1 that this technique can yield more accurate predictions on test-time data than the standard practice. This is demonstrated over a number of datasets, suggesting its applicability to many types of sequential architectures. While running multiple Monte Carlo samples is more computationally expensive, the overall increase is minimal as the process is only run on test-time forward passes and is highly parallelizable. We show that higher performance can be achieved with relatively few Monte Carlo samples, and that this number of samples is similar across different NLP datasets and tasks.
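The procedure is short enough to sketch directly. This assumes a generic classifier `model` whose dropout layers can be re-enabled at evaluation time (in PyTorch, `train()` toggles dropout sampling back on); the function names are illustrative, not from the paper:

```python
import torch

def mc_predict(model, x, k=60):
    """Average k stochastic forward passes with dropout kept active."""
    model.train()          # re-enable dropout sampling; no optimizer step is taken
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(k)])
    model.eval()
    return probs.mean(dim=0)              # post-softmax averaging

def mc_vote(model, x, k=60):
    """Majority vote over the k sampled models' argmax class predictions."""
    model.train()
    with torch.no_grad():
        votes = torch.stack([model(x).argmax(dim=-1) for _ in range(k)])
    model.eval()
    return votes.mode(dim=0).values       # most frequent class per example
```

The two functions correspond to the two averaging variants discussed next: averaging the post-softmax probabilities versus tallying a majority vote over sampled predictions.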
We encountered one ambiguity of Monte Carlo model averaging that to our knowledge remains unaddressed in prior literature: there is relatively little exploration as to where and how the model averaging is most appropriately handled. We investigated averaging over the output of the final recurrent layer (just before the projection layer), over the output of the projection layer (the pre-softmax unnormalized logits), and over the post-softmax normalized probabilities, which is the approach taken by Gal (2015) for language modeling. We saw no discernible difference in performance between averaging the pre-projection and post-projection outputs. Averaging over the post-softmax probabilities showed marginal improvements over these two methods, but interestingly only for bidirectional models. We also explored using majority voting among the sampled models. This involves tallying the maximum post-softmax probabilities and selecting the class that received the most votes. This method differs from averaging the post-softmax probabilities in the same way max-margin differs from maximum likelihood estimation (MLE), de-emphasizing the points well inside the decision boundary or the models that predicted a class with extremely high probability. With sufficiently large k, this voting method seemed to work best of the averaging methods we tried, and thus all of our displayed models use this technique. However, for classification problems with more classes, more Monte Carlo samples might be necessary to guarantee a meaningful plurality of class predictions. We conclude that the majority-vote Monte Carlo averaging method is preferable in the case where the ratio of Monte Carlo samples to the number of classification labels (k/output_size) is large.
The Monte Carlo model averaging experiments, shown in Figure 1, were conducted as follows. We drew k = 400 separate test samples for each example, differentiated by their dropout masks. For each sample size p (whose values, plotted on the x-axis, were in the range from 2 to 200 with step size 2) we selected p of our k samples randomly without replacement and performed the relevant Monte Carlo averaging technique for that task, as discussed above. We did this m = 20 times for each point to establish the mean and variance for that number of Monte Carlo iterations/samples p. The variance is used to visualize the 90% confidence interval in blue, while the red line denotes the test accuracy computed using the traditional approximation method (inverted dropout at train time, and no dropout at test time).
[Diagram: word vectors w_1, ..., w_N feed both an RNN and, in parallel, a temporal average (1/N) Σ w_i that feeds an MLP; the two outputs meet at the softmax layer.]
Figure 2: An illustration of the embed average pooling extension to a standard RNN model. The output of the multilayer perceptron is concatenated to the final hidden state output by the RNN.
"}, {"section_index": "3", "section_name": "4 EMBED AVERAGE POOLING", "section_text": "Reliably retaining long-range information is a well-documented weakness of LSTM networks (Karpathy et al., 2015). This is especially the case for very long sequences like the IMDB sentiment dataset (Maas et al., 2011), where deep sequential models fail to capture uni- and bi-gram occurrences over long sequences. This is likely why n-gram based models, such as a bi-gram NBSVM (Wang and Manning, 2012), outperform RNN models on such datasets. It was shown by Iyyer et al. (2015) and others that for general NLP classification tasks, the use of a deep, unordered composition (or bag-of-words) of a sequence can yield strong results. Their solution, the deep averaging network (DAN), combines the observed effectiveness of depth with the unreasonable effectiveness of unordered representations of long sequences.
We suspect that the primary advantage of DANs is their ability to keep track of information that would have otherwise been forgotten by a sequential model, such as information early in the sequence for a unidirectional RNN or information in the middle of the sequence for a bidirectional RNN. Our embed average pooling supplements the bidirectional RNN with the information from a DAN at a relatively negligible computational cost.
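A minimal sketch of the embed average pooling module just described. The class and variable names are placeholders, and the ReLU activation inside the MLP is an assumption (the text does not specify the MLP's nonlinearity):

```python
import torch
import torch.nn as nn

class EmbedAveragePooling(nn.Module):
    def __init__(self, embed_dim, hidden_dim=300, out_dim=300):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, out_dim))

    def forward(self, word_vectors, rnn_final_state):
        # word_vectors: (batch, seq_len, embed_dim). Average temporally first,
        # then apply the MLP (the ordering the text reports works best).
        pooled = self.mlp(word_vectors.mean(dim=1))
        return torch.cat([rnn_final_state, pooled], dim=-1)

# Usage: the combined vector then feeds the projection/softmax layer.
pool = EmbedAveragePooling(embed_dim=300)
combined = pool(torch.randn(8, 120, 300), torch.randn(8, 360))
```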
(a) Res-V1: An illustration of vertical residual connections. (b) Res-V2: An illustration of vertical and lateral residual connections.
Figure 3: An illustration of vertical (Res-V) and lateral (Res-L) residual connections added to a 3-layer RNN. A model with only vertical residuals is denoted "Res-V1", whereas a model with vertical and lateral residuals is denoted "Res-V2".
As shown in Figure 2, embed average pooling works by averaging the sequence of word vectors and passing this average through an MLP. The averaging is similar to an average pooling layer in a CNN (hence the name), but with the averaging being done temporally rather than spatially. The output of this MLP is concatenated to the final output of the RNN, and the combined vector is then passed into the projection and softmax layer. We apply the same dropout mask to the word vectors when passing them to the RNN as when averaging them, and we apply a different dropout mask on the output of the MLP. We experimented with applying the MLP before rather than after averaging the word vectors but found the latter to be most effective.
"}, {"section_index": "4", "section_name": "5 RESIDUAL CONNECTIONS", "section_text": "For feed-forward convolutional neural networks used in computer vision tasks, residual networks, or ResNets, have obtained state-of-the-art results (He et al., 2015). Rather than having each layer learn a wholly new representation of the data, as is customary for neural networks, ResNets have each layer (or group of layers) learn a residual which is added to the layer's input and then passed on to the next layer. More formally, if the input to a layer (or group of layers) is x and the output of that layer (or group of layers) is F(x), then the input to the next layer (or group of layers) is x + F(x), whereas it would be F(x) in a conventional neural network. This architecture allows the training of far deeper models. He et al. (2015) trained convolutional neural networks as deep as 151 layers, compared to 16 layers used in VGGNets (Simonyan and Zisserman, 2014) or 22 layers used in GoogLeNet (Szegedy et al., 2015), and won the 2015 ImageNet Challenge. Since then, various papers have tried to build upon the ResNet paradigm (Huang et al., 2016; Szegedy et al., 2016), and various others have tried to provide convincing theoretical reasons for ResNet's success (Liao and Poggio, 2016; Veit et al., 2016).
We explored many different ways to incorporate residual connections in an RNN. The two most successful ones, which we call Res-V1 and Res-V2, are depicted in Figure 3. Res-V1 incorporates only vertical residuals, while Res-V2 incorporates both vertical and lateral residuals. With vertical residual connections, the input to a layer is added to its output and then passed to the next layer, as is done in feed-forward ResNets. Thus, whereas the input to a layer is normally the h_t from the previous layer, with vertical residuals the input becomes the h_t + x_t from the previous layer. This maintains many of the attractive properties of ResNets (e.g., unimpeded gradient flow across layers, adding/averaging the contributions of each layer) and thus lends itself naturally to deeper networks. However, it can interact unpredictably with the LSTM architecture, as the "fast" state of the LSTM no longer reflects the network's full representation of the data at that point. To mitigate this unpredictability, Res-V2 also includes lateral residual connections. With lateral residual connections, the input to a layer is added to its output and then passed to the next timestep as the fast state of the LSTM. It is equivalent to replacing equation 6 with h_t = o_t ⊙ tanh(c_t) + x_t.
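As a sketch of the lateral residual variant (the class name is hypothetical, and PyTorch's `nn.LSTMCell` stands in for the paper's own LSTM implementation), the modification amounts to adding the layer's input after the standard cell computes h_t:

```python
import torch
import torch.nn as nn

class LateralResidualLSTMCell(nn.Module):
    def __init__(self, size):
        super().__init__()
        # input and hidden sizes must match for the h + x sum to be defined
        self.cell = nn.LSTMCell(size, size)

    def forward(self, x, state):
        h, c = self.cell(x, state)   # h = o * tanh(c), the standard equation 6
        h = h + x                    # lateral residual: h_t <- o_t * tanh(c_t) + x_t
        return h, c
```

Passing this h both upward as the next layer's input and back as the next timestep's fast state gives the Res-V2 behavior described next; Res-V1 would instead keep the standard cell and only add x to the output fed to the next layer.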
[Figure 3 schematic: two 3-layer LSTM stacks over inputs x_{t−1}, x_t, x_{t+1}, each topped by a softmax layer. (a) Res-V1: vertical residual connections only. (b) Res-V2: vertical and lateral residual connections.]

Figure 3: An illustration of vertical (ResV) and lateral (ResL) residual connections added to a 3-layer RNN. A model with only vertical residuals is denoted Res-V1, whereas a model with vertical and lateral residuals is denoted Res-V2.

"}, {"section_index": "4", "section_name": "5 RESIDUAL CONNECTIONS", "section_text": "For feed-forward convolutional neural networks used in computer vision tasks, residual networks, or ResNets, have obtained state of the art results (He et al., 2015). Rather than having each layer learn a wholly new representation of the data, as is customary for neural networks, ResNets have each layer (or group of layers) learn a residual which is added to the layer's input and then passed on to the next layer. More formally, if the input to a layer (or group of layers) is x and the output of that layer (or group of layers) is F(x), then the input to the next layer (or group of layers) is x + F(x), whereas it would be F(x) in a conventional neural network. This architecture allows the training of far deeper models. He et al. (2015) trained convolutional neural networks as deep as 151 layers, compared to 16 layers used in VGGNets (Simonyan and Zisserman, 2014) or 22 layers used in GoogLeNet (Szegedy et al., 2015), and won the 2015 ImageNet Challenge. Since then, various papers have tried to build upon the ResNet paradigm (Huang et al., 2016; Szegedy et al., 2016), and various others have tried to create convincing theoretical reasons for ResNet's success (Liao and Poggio, 2016; Veit et al., 2016).

We explored many different ways to incorporate residual connections in an RNN. The two most successful ones, which we call Res-V1 and Res-V2, are depicted in Figure 3. Res-V1 incorporates only vertical residuals, while Res-V2 incorporates both vertical and lateral residuals. With vertical residual connections, the input to a layer is added to its output and then passed to the next layer, as is done in feed-forward ResNets. Thus, whereas the input to a layer is normally the h_t from the previous layer, with vertical residuals the input becomes the h_t + x_t from the previous layer. This maintains many of the attractive properties of ResNets (e.g. unimpeded gradient flow across layers, adding/averaging the contributions of each layer) and thus lends itself naturally to deeper networks. However, it can interact unpredictably with the LSTM architecture, as the "fast" state of the LSTM no longer reflects the network's full representation of the data at that point. To mitigate this unpredictability, Res-V2 also includes lateral residual connections. With lateral residual connections, the input to a layer is added to its output and then passed to the next timestep as the fast state of the LSTM. It is equivalent to replacing equation 6 with h_t = o_t ⊙ tanh(c_t) + x_t. Thus, applying both vertical and lateral residuals ensures that the same value is passed both to the next layer as input and to the next timestep as the "fast" state.

In addition to these two, we explored various other, ultimately less successful, ways of adding residual connections to an LSTM, the primary one being horizontal residual connections. In this architecture, rather than adding the input from the previous layer to a layer's output, we added the fast state from the previous timestep. The hope was that adding residual connections across timesteps would allow information to flow more effectively across timesteps and thus improve the performance of RNNs that are deep across timesteps, much as ResNets do for networks that are deep across layers. Thus, we believed horizontal residual connections could solve the problem of LSTMs not learning long-term dependencies, the same problem we also hoped to mitigate with embed average pooling. Unfortunately, horizontal residuals failed, possibly because they blurred the distinction between the LSTM's "fast" state and "slow" state and thus prevented the LSTM from quickly adapting to new data. Alternate combinations of horizontal, vertical, and lateral residual connections were also experimented with but yielded poor results.
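The lateral-residual modification amounts to one changed line in the LSTM step. Below is a minimal sketch assuming a stacked-gate weight layout and the forget bias of 1.0 described in Section 6.2; the function names and shapes are our own illustrative choices, and the lateral residual requires the input and hidden dimensions to match, as the text implies.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step_res(x, h_prev, c_prev, W, b):
    """One LSTM step with a lateral residual connection (a sketch).
    W maps [x; h_prev] to the four stacked gate pre-activations."""
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f + 1.0) * c_prev + sigmoid(i) * np.tanh(g)  # forget bias of 1.0
    h = sigmoid(o) * np.tanh(c) + x  # lateral residual: h_t = o_t * tanh(c_t) + x_t
    return h, c                      # h is passed both upward and to timestep t+1

# Toy usage with matching input/hidden size n.
n = 8
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4 * n, 2 * n)), np.zeros(4 * n)
h, c = lstm_step_res(rng.normal(size=n), np.zeros(n), np.zeros(n), W, b)
```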
We chose two commonly used benchmark datasets for our experiments: the Stanford Sentiment Treebank (SST) (Socher et al., 2013) and the IMDB sentiment dataset (Maas et al., 2011). This allowed us to compare the performance of our models to existing work and review the flexibility of our proposed model extensions across fairly disparate types of classification datasets. SST contains relatively well curated, short-sequence sentences, in contrast to IMDB's comparatively colloquial and lengthy sequences (some up to 2,000 tokens). To further differentiate the classification tasks, we chose to experiment with fine-grained, five-class sentiment on SST, while IMDB only offered binary labels. For IMDB, we randomly split the training set of 25,000 examples into training and validation sets containing 22,500 and 2,500 examples respectively, as done in Maas et al. (2011).

"}, {"section_index": "5", "section_name": "6.2 METHODOLOGY", "section_text": "Our objective is to show a series of compounding extensions to the standard LSTM baseline that enhance accuracy. To ensure scientific reliability, the addition of each feature is the only change from the previous model (see Figures 4 and 5). The baseline model is a 2-layer stacked LSTM with hidden size 170 for SST and 120 for IMDB, as used in Tai et al. (2015). All models in this paper used publicly available 300-dimensional word vectors, pre-trained using GloVe on 840 billion tokens of Common Crawl data (Pennington et al., 2014), and both the word vectors and the subsequent weight matrices were trained using Adam with a learning rate of 10^-4.

The first set of basic feature additions were adding a forget bias and using dropout. Adding a bias of 1.0 to the forget gate (i.e. adding 1.0 to the inside of the sigmoid function in equation 3) improves results across NLP tasks, especially for learning long-range dependencies (Zaremba, 2015). Dropout (Srivastava et al., 2014) is a highly effective regularizer for deep models. For SST and IMDB we used grid search to select dropout probabilities of 0.5 and 0.7 respectively, applied to the input of each layer, including the projection/softmax layer. While forget bias appears to hurt performance in Figure 5, the combination of dropout and forget bias yielded better results in all cases than dropout without forget bias. Our last two basic optimizations were increasing the hidden sizes and then adding shared-weight bidirectionality to the RNN. The hidden sizes for SST and IMDB were increased to 800 and 360 respectively; we found significantly diminishing returns to performance from increases beyond this. We chose shared-weight bidirectionality to ensure the model size did not increase any further. Specifically, the forward and backward weights are shared, and the input to the projection/softmax layer is a concatenation of the forward and backward passes' final hidden states.

All of our subsequent proposed model extensions are described at length in their own sections. For both datasets, we used 60 Monte Carlo samples, and the embed average pooling MLP had one hidden layer and both a hidden dimension and an output dimension of 300. Note that although the MLP weights increased the size of their respective models, this increase is negligible (equivalent to increasing the hidden size for SST from 800 to 804 or the hidden size of IMDB from 360 to 369), and we found that such a size increase had no discernible effect on accuracy when done without the embed average pooling.
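A sketch of the shared-weight bidirectional read-out described above, with a generic `step` cell (e.g. the residual LSTM step sketched earlier) standing in for the LSTM; the function name and signature are our own assumptions:

```python
import numpy as np

def bidirectional_shared_readout(xs, step, h0, c0):
    """Run the same cell over the sequence forwards and backwards and
    concatenate the two final hidden states for the projection/softmax
    layer. Reusing `step` in both directions keeps the parameter count
    of the unidirectional model."""
    def final_state(seq):
        h, c = h0, c0
        for x in seq:
            h, c = step(x, h, c)
        return h
    return np.concatenate([final_state(xs), final_state(xs[::-1])])
```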
"}, {"section_index": "6", "section_name": "6.3 RESULTS", "section_text": "Since each of our proposed modifications operate independently, they are well suited to use in combination as well as in isolation. In Figures 4 and 5 we compound these features on top of the more traditional enhancements. Due to the expensiveness of bidirectional models, Figure 4 also shows these compounding features on SST with and without bidirectionality. The validation accuracy distributions show that each augmentation usually provides some small but noticeable improvement on the previous model, as measured by consistent improvements in mean and median accuracy.

[Figure 4: two panels of box-plots, "SST: Full Compounding Model Features" and "SST: Compounding Model Features", showing SST validation accuracy (roughly 0.45-0.53) as features are compounded from the baseline 2-LSTM through dropout, forget bias, hidden size, bidirectionality, Monte Carlo, embed averaging, and vertical/lateral residuals.]

Figure 4: These box-plots show the performance of compounding model features on fine-grained SST validation accuracy. The red points, red lines, blue boxes, whiskers and plus-shaped points indicate the mean, median, quartiles, range, and outliers, respectively.

We originally suspected that MC would provide marginal yet consistent improvements across datasets, while embed average pooling would especially excel for long sequences like in IMDB, where n-gram based models and deep unordered compositions have benefited from their ability to retain information from disparate parts of the text. The former hypothesis was largely confirmed. However, while embed average pooling was generally performance-enhancing, the performance boost it yielded for IMDB was not significantly larger than the one it yielded for SST, though that may have been because the other enhancements already encompassed most of the advantages provided by deep unordered compositions.

The only evident exceptions to the positive trend are the variations of residual connections. Which of Res-V1 (vertical only) and Res-V2 (vertical and lateral) outperformed the other depended on the dataset and whether the network was bidirectional. The Res-V2 architecture dominated in Figures 4b and 5, while the Res-V1 (only vertical residuals) architecture is most performant in Figure 4a. This suggests that for short sequences, bidirectionality and lateral residuals conflict.

[Figure 5: box-plots titled "IMDB: Compounding Model Features", showing binary IMDB validation accuracy (roughly 0.870-0.910) as the same features are compounded from the baseline 2-LSTM.]

Figure 5: These box-plots show the performance of compounding model features on binary IMDB validation accuracy.

[Figure 6: a "Model Comparison" plot of SST validation accuracy (roughly 0.46-0.51) against layer depth (1 to 8 layers) for Vanilla, Res-V1, and Res-V2 models.]

Figure 6: Comparing the effects of layer depth between Vanilla RNNs, Res-V1 and Res-V2 models on fine-grained sentiment classification (SST). As we increase the layers, we decrease the hidden size to maintain equivalent model sizes. The points indicate average validation accuracy, while the shaded regions indicate 90% confidence intervals.

Further analysis of the effect of residual connections and model depth can be found in Figure 6. In that figure, the number of parameters, and hence model size, are kept uniform by modifying the hidden size as the layer depth changed. The hidden sizes used for 1, 2, 4, 6, and 8 layer models were 250, 170, 120, 100, and 85 respectively, maintaining ~550,000 total parameters for all models. As the graph demonstrates, normal LSTMs ("Vanilla") perform drastically worse as they become deeper and narrower, while Res-V1 and Res-V2 both see their performance stay much steadier or even briefly rise. While depth wound up being far from a panacea for the datasets we experimented on, the ability of an LSTM with residual connections to maintain its performance as it gets deeper holds promise for other domains where the extra expressive power provided by depth might prove more crucial.
Table 1: Test performance on the Stanford Sentiment Treebank (SST) sentiment classification task.

Model                                          # Params (M)   Train Time / Epoch (sec)   Test Acc (%)
RNTN (Socher et al., 2013)                     -              -                          45.7
CNN-MC (Kim, 2014)                             -              -                          47.4
DRNN (Irsoy and Cardie, 2014)                  -              -                          49.8
CT-LSTM (Tai et al., 2015)                     0.317          -                          51.0
DMN (Kumar et al., 2016)                       -              -                          52.1
NTI-SLSTM-LSTM (Munkhdalai and Yu, 2016)       -              -                          53.1
Baseline 2-LSTM                                0.553          ~2,100                     46.4
Large 2-LSTM                                   8.650          ~3,150                     48.7
Bi-2-LSTM                                      8.650          ~6,100                     50.9
Bi-2-LSTM+MC+Pooling+ResV                      8.740          ~8,050                     52.2
2-LSTM+MC+Pooling+ResV+ResL                    8.740          ~4,800                     51.6

Table 2: Test performance on the IMDB sentiment classification task.

Model                                                        # Params (M)   Train Time / Epoch (sec)   Test Acc (%)
SVM-bi (Wang and Manning, 2012)                              -              -                          89.2
DAN-RAND (Iyyer et al., 2015)                                -              -                          88.8
DAN (Iyyer et al., 2015)                                     -              -                          89.4
NBSVM-bi (Wang and Manning, 2012)                            -              -                          91.2
NBSVM-tri, RNN, Sentence-Vec Ensemble (Mesnil et al., 2014)  -              -                          92.6
Baseline 2-LSTM                                              0.318          ~1,800                     85.3
Large 2-LSTM                                                 2.00           ~2,500                     87.6
Bi-2-LSTM                                                    2.00           ~5,100                     88.9
Bi-2-LSTM+MC+Pooling+ResV+ResL                               2.08           ~5,500                     90.1

Selecting the best results for each model, we see results competitive with state-of-the-art performance for both IMDB¹ and SST, even though many state-of-the-art models use either parse-tree information (Tai et al., 2015), multiple passes through the data (Kumar et al., 2016) or tremendous train- and test-time computational and memory expenses (Le and Mikolov, 2014). To our knowledge, our models constitute the best performance of purely sequential, single-pass, and computationally feasible models, precisely the desired features of a solid out-of-the-box baseline. Furthermore, for SST, the compounding enhancement model without bidirectionality, the final model shown in Figure 4b, greatly exceeded the performance of the large bidirectional model (51.6% vs 50.9%), with significantly less training time (Table 1). This suggests our enhancements could provide a similarly reasonable and efficient alternative to shared-weight bidirectionality for other such datasets.

¹ For IMDB, we benchmark only against results obtained from training exclusively on the labeled training set. Thus, we omit results from unsupervised models that leveraged the additional 50,000 unlabeled examples, such as Miyato et al. (2016).

"}, {"section_index": "7", "section_name": "7 CONCLUSION", "section_text": "We explore several easy-to-implement enhancements to the basic LSTM network that positively impact performance. These include both fairly well established extensions (biasing the forget gate, dropout, increasing the model size, bidirectionality) and several more novel ones (Monte Carlo model averaging, embed average pooling, residual connections). We find that these enhancements improve the performance of the LSTM in classification tasks, both in conjunction and in isolation, with an accuracy close to state of the art despite being more lightweight and using less information than the current state-of-the-art models. Our results suggest that these extensions should be incorporated into LSTM baselines.

"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Rich Caruana, Nikos Karampatziakis, and Ainur Yessenalina. An empirical evaluation of supervised learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 96-103. ACM, 2008.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

Andrej Karpathy, Justin Johnson, and Fei-Fei Li. Visualizing and understanding recurrent networks. CoRR, abs/1506.02078, 2015.

Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, volume 14, pages 1188-1196, 2014.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015.

Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016.

Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In ICML, 2016.

Tsendsuren Munkhdalai and Hong Yu. Neural tree indexers for text understanding. CoRR, abs/1607.04492, 2016. URL http://arxiv.org/abs/1607.04492.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556.
Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.

Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.

Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), volume 1631, page 1642. Citeseer, 2013.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.

Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. CoRR, abs/1602.07261, 2016. URL http://arxiv.org/abs/1602.07261.

Andreas Veit, Michael J. Wilber, and Serge J. Belongie. Residual networks are exponential ensembles of relatively shallow networks. CoRR, abs/1605.06431, 2016. URL http://arxiv.org/abs/1605.06431.

Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, pages 90-94. Association for Computational Linguistics, 2012.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016."}]
r1Usiwcex
[{"section_index": "0", "section_name": "COUNTERPOINT BY CONVOLUTION", "section_text": "Cheng-Zhi Anna Huang*
adarob@google.com

Tim Cooijmans†
MILA, Université de Montréal
tim.cooijmans@umontreal.ca
aaron.courville@umontreal.ca

* Work done while at Google Brain. † Work done while at Google Brain.

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Machine learning models of music typically break down the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. We explore the use of blocked Gibbs sampling as an analogue to the human approach, and introduce Coconet, a convolutional neural network in the NADE family of generative models (Uria et al., 2016). Despite ostensibly sampling from the same distribution as the NADE ancestral sampling procedure, we find that a blocked Gibbs approach significantly improves sample quality. We provide evidence that this is due to some conditional distributions being poorly modeled. Moreover, we show that even the cheap approximate blocked Gibbs procedure from Yao et al. (2014) yields better samples than ancestral sampling. We demonstrate the versatility of our method on unconditioned polyphonic music generation.

Figure 1: Ancestral inpainting of a corrupted Bach chorale by Coconet. Colors are used to distinguish the four voices. Grayscale heatmaps show predictions p(x_i | x_C). The pitch sampled in the current step is indicated by a rectangular outline. The original Bach chorale is shown in the bottom right. Step 0 shows the corrupted Bach chorale. Step 64 shows the result. [Panels: Step 0, Step 1, Step 4, Step 16, Step 64, Ground Truth.]

Machine learning can be used to create compelling art. This was shown recently by DeepDream (Mordvintsev et al., 2015), an optimization process that created psychedelic transformations of images. A similar idea underlies a variety of style transfer algorithms (Gatys et al., 2015), which impose textures and colors from one image onto another. More recently, the multistyle pastiche generator (Dumoulin et al., 2016) exposes adjustable knobs that allow users of the system fine-grained control over style transfers. Neural doodle (Champandard, 2016) further closes the feedback loop between algorithm and artist.

We wish to bring similar artistic tools to the domain of music. Whereas previous work in music has relied mainly on sequence models such as Hidden Markov Models (HMMs, Baum & Petrie (1966)) and Recurrent Neural Networks (RNNs, Rumelhart et al. (1988)), we instead employ convolutional neural networks due to their emphasis on capturing local structure and their invariance properties. Moreover, convolutional neural networks have shown to be extremely versatile once trained, as shown by a variety of creative uses in the literature (Mordvintsev et al., 2015; Gatys et al., 2015; Almahairi et al., 2015; Lamb et al., 2016).

We introduce Coconet, a deep convolutional model trained to reconstruct partial scores. Once trained, Coconet provides direct access to all conditionals of the form p(x_i | x_C) where x_C is a fragment of a musical score x and i ∉ C is in its complement. Figure 1 shows an example of such conditionals used in completing a partial score.

Coconet is an instance of deep orderless NADE (Uria et al., 2014), and thus learns an ensemble of factorizations of the joint p(x). However, the sampling procedure for orderless NADE is not orderless. Sampling from an orderless NADE involves (randomly) choosing an ordering, and sampling ancestrally according to the chosen ordering. We have found that this produces poor results for the highly structured and complex domain of musical counterpoint.
Instead, we propose to use blocked Gibbs sampling, essentially improving sample quality through rewriting. An instance of this was previously explored by Yao et al. (2014), who employed a NADE in the transition operator for a Markov Chain, yielding a Generative Stochastic Network (GSN). The transition consists of a corruption process that masks out a subset x_{¬C} of variables, followed by a process that independently resamples variables x_i, i ∉ C, according to the distribution p_θ(x_i | x_C) emitted by the NADE. Crucially, the effects of independent sampling are amortized by annealing the probability with which variables are masked out. Whereas Yao et al. (2014) treat their procedure as a cheap approximation to ancestral sampling, we find that it produces superior samples.

We show the versatility of our method on unconditioned polyphonic music generation.

Section 2 discusses previous work in the area of automatic musical composition. The details of our model and training procedure are laid out in Section 3. In Section 4 we show that our approach is equivalent to that of deep and orderless NADE (Uria et al., 2014). We discuss sampling from our model in Section 5. Results of quantitative and qualitative evaluations are reported in Section 6. Finally, Section 7 concludes.

Sequence models such as HMMs and RNNs are a natural choice for modeling music. However, one of the challenges in adapting such models to music is that music generally consists of multiple interdependent streams of events. This can be most clearly seen in the notion of counterpoint, which refers to the relationships between the movement of individual instruments in a musical work. Compare this to typical sequence domains such as speech and language, which involve modeling a single stream of events: a single speaker or a single stream of words.

Successful application of sequence models to music hence requires serializing or otherwise re-representing the music to fit the sequence paradigm. For instance, Liang (2016) serializes four-part Bach chorales by interleaving the parts, while Allan & Williams (2005) construct a chord vocabulary. Boulanger-Lewandowski et al. (2012) adopt a piano roll representation, which is a binary matrix x such that x_{it} is hot if some instrument is playing pitch i at time t. To model the joint probability distribution of the multi-hot pitch vector x_t, they employ a Restricted Boltzmann Machine (RBM; Smolensky (1986); Hinton et al. (2006)) or Neural Autoregressive Distribution Estimator (Uria et al., 2016) at each time step.

Moreover, the behavior of human composers does not fit the chronological mold assumed by previous authors. A human composer might start his work with a coarse chord progression and iteratively refine it, revisiting choices previously made. Sampling according to x_t ∼ p(x_t | x_{<t}), as is common, cannot account for the kinds of timeless dependencies that composers employ. Hadjeres et al. (2016) sidestep the choice of causal factorization and instead employ an undirected Markov model to learn pairwise relationships between neighboring notes up to a specified number of steps away in a score. Sampling involves Markov Chain Monte Carlo (MCMC) using the model as a Metropolis-Hastings (MH) objective. The model permits constraints on the state space to support tasks such as melody harmonization. However, the Markov assumption severely limits the expressivity of the model.

We opt instead for a convolutional approach that avoids many of these issues and naturally captures both relationships across time and interactions between instruments.
We approach the task of music composition with a deep convolutional neural network (Krizhevsky et al., 2012). This choice is motivated by the locality of contrapuntal rules and their near-invariance to translation, both in time and in the frequency spectrum.

We represent the music as a stack of piano rolls encoded in a binary three-tensor x ∈ {0, 1}^{I×T×P}. Here I denotes the number of instruments, T the number of time steps, P the number of pitches, and x_{i,t,p} = 1 iff the i-th instrument plays pitch p at time t. We will assume each instrument plays exactly one pitch at a time, that is, Σ_p x_{i,t,p} = 1 for all i, t.

For the present work we will restrict ourselves to the study of four-part Bach chorales as used in prior work (Allan & Williams, 2005; Boulanger-Lewandowski et al., 2012; Goel et al., 2014; Liang, 2016; Hadjeres et al., 2016). Hence we assume I = 4 throughout. We discretize pitch according to equal temperament, but constrain ourselves to only the range that appears in our training data (MIDI pitches 36 through 88). Time is discretized at the level of 16th notes for similar reasons. To curb memory requirements, we enforce T = 64 by randomly cropping the training examples.

Given a training example x ∼ p(x), we present the model with the values of only a strict subset of its elements x_C = {x_{(i,t)} | (i, t) ∈ C} and ask it to reconstruct its complement x_{¬C}. The input to the model is obtained by masking the piano rolls x to obtain the context x_C and concatenating this with the corresponding mask:

h⁰_{i,t,p} = x_{i,t,p} 1[(i, t) ∈ C],        h⁰_{I+i,t,p} = 1[(i, t) ∈ C],

where the first dimension ranges over channels and the time and pitch dimensions are convolved over.

With the exception of the first and final layers, all of our convolutions preserve the size of the input. That is, we use "same" padding throughout and all activations h^l, 1 ≤ l < L, have 128 channels. The network consists of 64 layers with 3×3 filters on each layer. After each convolution we apply batch normalization (Ioffe & Szegedy, 2015) (denoted by BN(·)) with statistics tied across time and pitch. After every second convolution, we introduce a skip connection from the hidden state two levels below to reap the benefits of residual learning (He et al., 2015):

a^l = BN(W^l ∗ h^{l−1}; γ^l, β^l)
h^l = ReLU(a^l + h^{l−2})    for 3 < l < L − 1 and l ≡ 0 (mod 2)
h^L = a^L.

Finally, we obtain predictions for the pitch at each instrument/time pair:

p_θ(x_{i,t,p} | x_C, C) = exp(h^L_{i,t,p}) / Σ_{p′} exp(h^L_{i,t,p′}).

The loss function is given by

L(x; C, θ) = − Σ_{(i,t)∉C} log p_θ(x_{i,t} | x_C, C)
           = − Σ_{(i,t)∉C} Σ_p x_{i,t,p} log p_θ(x_{i,t,p} | x_C, C).

To optimize the expected loss E_{x∼p(x)} E_{C∼p(C)} L(x; C, θ) with respect to θ, we sample piano rolls x from the training set and contexts C ∼ p(C) and optimize by stochastic gradient descent with step size determined by Adam (Kingma & Ba, 2014).
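The input construction for h⁰ above is simple to sketch in NumPy. The function name and the toy data are our own illustrative assumptions; the actual model then convolves over the time and pitch axes of this tensor.

```python
import numpy as np

def make_input(x, mask):
    """Build the network input h0 (a sketch): the masked piano roll
    stacked channel-wise with the mask itself.

    x:    binary piano roll of shape (I, T, P)
    mask: binary context indicator of shape (I, T), 1 where (i, t) is observed
    """
    observed = x * mask[:, :, None]                            # zero out masked-out notes
    mask_channels = np.broadcast_to(mask[:, :, None], x.shape).astype(x.dtype)
    return np.concatenate([observed, mask_channels], axis=0)   # shape (2I, T, P)

# Toy usage: I = 4 voices, T = 64 sixteenth-note steps, P = 53 pitches.
I, T, P = 4, 64, 53
x = np.zeros((I, T, P)); x[:, :, 20] = 1.0                     # every voice holds one pitch
mask = (np.random.default_rng(0).random((I, T)) > 0.5).astype(float)
print(make_input(x, mask).shape)                               # (8, 64, 53)
```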
Our approach is an instance of orderless and deep Neural Autoregressive Distribution Estimators (Uria et al., 2016). NADE models a d-variate distribution p(x) through a factorization

p_θ(x) = Π_d p_θ(x_{o_d} | x_{o_{<d}}),

where o is a permutation, and the parameters θ are shared among the conditionals. NADE can be trained for all orderings o simultaneously using the orderless NADE (Uria et al., 2014) training procedure. This procedure relies on the observation that, thanks to parameter sharing, computing p_θ(x_{o_{d′}} | x_{o_{<d}}) for all d′ ≥ d is no more expensive than computing it only for d′ = d. Hence for a given o and d we can simultaneously obtain partial losses for all orderings that agree with o up to d:

L_NADE(x; o_{<d}, o_d, θ) = − Σ_{o_d} log p_θ(x_{o_d} | x_{o_{<d}}, o_{<d}, o_d).

In our notation, the corresponding loss is

L_COCONET(x; C, θ) = − Σ_{(i,t)∉C} log p_θ(x_{i,t} | x_C, C).

For any one sample (x, C), this loss consists of |¬C| terms of the form log p_θ(x_{i,t} | x_C, C). We let p(C) be uniform in the size of the mask |C| and reweight the sample losses according to

L̃(x; C, θ) = (1 / |¬C|) L(x; C, θ).

This correction, due to Uria et al. (2014), ensures consistent estimation of the negative log-likelihood of the joint p_θ(x).

However, we might wish to increase the difficulty by choosing p(C) so as to frequently mask out large contiguous regions, as otherwise the model might learn only superficial local relationships. This is discussed in Pathak et al. (2016) for the case of images, where a model might learn only that pixels are similar to their neighbors. Similar low-level relationships hold in our case, as our piano roll representation is binary and very sparse. For instance, if we mask out only a single sixteenth step in the middle of a long-held note, reconstructing the masked-out step does not require any deep understanding of music. To this end we also consider choosing the context C by independent Bernoulli samples, such that each variable has a low probability of being included in the context.

"}, {"section_index": "4", "section_name": "5 SAMPLING", "section_text": "We can sample from the model using the NADE ancestral ordering procedure. However, we find that this yields poor samples, and we propose instead to use Gibbs sampling.

"}, {"section_index": "5", "section_name": "5.1 NADE SAMPLING", "section_text": "To sample according to NADE, we start with an empty (zero everywhere) piano roll x⁰ and context C⁰ and populate them iteratively by the following process. We feed the piano roll x^s and context C^s into the model to obtain a set of categorical distributions p_θ(x_{i,t} | x^s_{C^s}, C^s) for (i, t) ∉ C^s. As the x_{i,t} are not conditionally independent, we cannot simply sample from these distributions independently. However, if we sample from one of them, we can compute new conditional distributions for the rest. Hence we randomly choose one (i, t) ∉ C^s to sample from, and replace x^s_{i,t} with the sampled one-hot realization. Augment the context with C^{s+1} = C^s ∪ {(i, t)} and repeat until the piano roll is populated. This procedure is easily generalized to tasks such as melody harmonization and partial score completion by starting with a nonempty piano roll.

Unfortunately, samples thus generated are of low quality, which we surmise is due to accumulation of errors. While the model provides conditionals p_θ(x_{i,t} | x_C, C) for all (i, t) ∉ C, some of these conditionals may be better modeled than others. We suspect in particular those conditionals used early on in the procedure, for which the context C consists of very few variables. Moreover, although the model is trained to be order-agnostic, different orderings invoke different distributions, which is another indication that some conditionals are poorly learned. We test this hypothesis in Section 6.2.

"}, {"section_index": "6", "section_name": "5.2 GIBBS SAMPLING", "section_text": "To remedy this, we allow the model to revisit its choices: we repeatedly mask out some part of the piano roll and then repopulate it. This is a form of blocked Gibbs sampling (Liu, 1994). Blocked sampling is crucial for mixing, as the high temporal resolution of our representation causes strong correlations between consecutive notes. For instance, without blocked sampling, it would take many steps to snap out of a long-held note. Similar observations hold for the Ising model from statistical mechanics, leading to the development of the Swendsen-Wang algorithm (Swendsen & Wang, 1987), in which large clusters of variables are resampled at once.

We consider two strategies for resampling a given block of variables: ancestral sampling and independent sampling. Ancestral sampling invokes the orderless NADE sampling procedure described in Section 5.1 on the masked-out portion of the piano roll. Independent sampling simply treats the masked-out variables x_{¬C} as independent given the context x_C.
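The independent variant of one blocked Gibbs sweep is cheap to sketch. Below, `predict_probs` is a hypothetical stand-in for the trained Coconet (mapping a masked piano roll and mask to per-position pitch distributions of shape (I, T, P)); the loop structure and names are our own assumptions.

```python
import numpy as np

def blocked_gibbs_independent(x, predict_probs, n_steps, block_prob, rng):
    """Independent blocked Gibbs sampling (a sketch): repeatedly mask
    out a random block and resample each masked position independently
    from p(x_{i,t} | x_C, C)."""
    I, T, P = x.shape
    for _ in range(n_steps):
        mask = (rng.random((I, T)) > block_prob).astype(float)   # 1 = kept as context
        probs = predict_probs(x * mask[:, :, None], mask)        # one model evaluation
        for i in range(I):
            for t in range(T):
                if mask[i, t] == 0:                              # resample masked positions
                    x[i, t] = np.eye(P)[rng.choice(P, p=probs[i, t])]
    return x

# Dummy stand-in for the trained model: uniform over pitches.
uniform = lambda roll, mask: np.full(roll.shape, 1.0 / roll.shape[2])
rng = np.random.default_rng(0)
sample = blocked_gibbs_independent(np.zeros((4, 32, 53)), uniform, 10, 0.5, rng)
```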
Using independent blocked Gibbs to sample from a NADE model has been studied by Yao et al. (2014), who propose to use an annealed masking probability given by

α_n = max(α_min, α_max − (n / (ηN)) (α_max − α_min))

for some minimum and maximum probabilities α_min, α_max, number of Gibbs steps N, and fraction η of time spent before settling onto the minimum probability α_min. This scheme ensures the Gibbs process with independent resampling produces samples from the model distribution p_θ(x). Initially, when the masking probability is high, the chain mixes fast but samples are poor due to independent sampling. As the masking probability reduces, fewer variables are sampled at a time, until finally variables are sampled one at a time and conditioned on all the others.
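The schedule itself is a one-liner; the default constants below are placeholders of our own choosing, not values from the paper.

```python
def masking_probability(n, num_steps, alpha_min=0.05, alpha_max=0.9, eta=0.75):
    """Annealed masking probability of Yao et al. (2014), as
    reconstructed above (a sketch; the constants are placeholders):
    alpha_n = max(alpha_min, alpha_max - n * (alpha_max - alpha_min) / (eta * N))."""
    return max(alpha_min, alpha_max - n * (alpha_max - alpha_min) / (eta * num_steps))

# Early in the chain large blocks are resampled; later, almost none.
print([round(masking_probability(n, 100), 2) for n in (0, 25, 50, 75, 99)])
```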
Yao et al. (2014) treat independent blocked Gibbs as a cheap approximation to ancestral sampling. Indeed, per Gibbs step, independent sampling requires only a single model evaluation, whereas ancestral sampling requires as many model evaluations as there are variables to sample. Moreover, we find that independent blocked Gibbs sampling in fact yields better samples than the NADE procedure from Section 5.1. Samples can be heard here: https://soundcloud.com/czhuang/sets/coconet-nade and https://soundcloud.com/

We evaluate our approach on a corpus of four-part Bach chorales. The literature features many variants of this dataset (Allan & Williams, 2005; Boulanger-Lewandowski et al., 2012; Liang, 2016; Hadjeres et al., 2016), and we follow the unfortunate tradition of introducing our own adaptation. Although this complicates comparisons against earlier work, we feel justified in doing so as our approach requires instruments to be separated, and other authors' eighth-note temporal resolution is too coarse to accurately convey counterpoint.

We rebuilt our dataset from the Bach chorale MusicXML scores readily available through music21 (Cuthbert & Ariza, 2010), which was also the basis for the dataset used in Liang (2016). The scores included 357 four-part Bach chorales. We excluded scores that included note durations less than sixteenth notes, resulting in 354 pieces. These pieces were split into train/valid/test in 60/20/20% ratios.

However, evaluation of generative models is hard (Theis et al., 2015). The gold standard for evaluation is qualitative comparison by humans, and we therefore report human evaluation results.

To estimate the log-likelihood of a datapoint x, we follow the orderless NADE approach. That is, we sample a random ordering (i₁, t₁), (i₂, t₂), . . . , (i_{IT}, t_{IT}) and compute the notewise log-likelihood

log p_θ(x) ≈ (1 / IT) Σ_{d=1}^{IT} log p_θ(x_{i_d,t_d} | x_{C_{d−1}}),

where C_d = ∪_{c=1}^{d} {(i_c, t_c)}. Note that we randomly crop each datapoint to be T time steps long before processing it, as this facilitates batch processing.

We repeat this procedure k times and average across all point estimates. The numbers for our models in Table 1 were obtained with k = 5.

The process for computing the notewise log-likelihood is akin to teacher-forcing, where at each step of the way the model observes the ground truth for all its previous predictions. To compute the framewise log-likelihood, we instead let the model run free within each frame t. This results in a more representative measure of the model's quality as it is sensitive to accumulation of error.

Table 1: Negative log-likelihood on the test set for the Bach corpus. As discussed in the text, our numbers are not directly comparable to those of other authors due to the use of different splits. Results from Boulanger-Lewandowski et al. (2012) were based on an eighth-note temporal resolution (our resolution is sixteenth notes). Please note that our results are preliminary validation likelihoods.

Model                                             Notewise NLL   Framewise NLL
BachBot (Liang, 2016)                             0.477          -
NADE (Boulanger-Lewandowski et al., 2012)         -              7.19
RNN-RBM (Boulanger-Lewandowski et al., 2012)      -              6.27
RNN-NADE (Boulanger-Lewandowski et al., 2012)     -              5.56
Coconet, i.i.d. Bernoulli(0.50)                   0.924          ∞
Coconet, i.i.d. Bernoulli(0.25)                   0.655          4.48
Coconet, i.i.d. Bernoulli(0.10)                   0.812          4.66
Coconet, importance sampling                      0.569          3.73

Table 1 lists notewise and framewise likelihoods of the validation data under variants of our model, as well as comparable results from other authors. We include four variants of Coconet that differ in the choice of the distribution p(C) over contexts during training. By importance sampling we refer to the orderless NADE strategy discussed in Section 4, in which p(C) is uniform over C and the sampled losses are reweighted by 1/|¬C|. We also evaluate three variants where the contexts are chosen by biased coin flips, that is, Pr((i, t) ∈ C) = ρ, for ρ ∈ {0.5, 0.25, 0.1}. The framewise log-likelihood for ρ = 0.5 is listed as ∞ as its estimation repeatedly overflowed.

Overall, Coconet seems to underperform in terms of notewise likelihood, yet perform well in terms of framewise likelihood. Estimating the loss by importance sampling appears to work significantly better than determining the context using independent Bernoulli variables, as one might expect. However, the choice of Bernoulli probability ρ strongly affects the resulting loss, which suggests that some of the conditionals benefit from more training.

Table 2: Mean (± SEM) negative log-likelihood under the model of unconditioned samples generated from the model by various procedures.

In Section 5 we conjectured that the low quality of NADE samples is due to poorly modeled conditionals p_θ(x_{i,t} | x_C, C) where C is small. We test this hypothesis by evaluating the likelihood under the model of samples generated by the ancestral blocked Gibbs procedure with C chosen according to independent Bernoulli variables. When we set the inclusion probability ρ to 0, we obtain NADE. Increasing ρ increases the expected context size |C|, which should yield better samples if our hypothesis is true. The results shown in Table 2 confirm that this is the case. For these experiments, we used sample length T = 32 time steps and number of Gibbs steps N = 100.
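A minimal sketch of the notewise likelihood estimate used above, again with `predict_probs` as a hypothetical stand-in for the trained model; in practice this is repeated for k random orderings and averaged.

```python
import numpy as np

def notewise_nll(x, predict_probs, rng):
    """Estimate the notewise negative log-likelihood of one piano roll
    (a sketch): reveal positions one at a time in a random ordering,
    scoring each ground-truth note under the model before revealing it."""
    I, T, P = x.shape
    order = [(i, t) for i in range(I) for t in range(T)]
    rng.shuffle(order)
    mask = np.zeros((I, T))
    total = 0.0
    for (i, t) in order:
        probs = predict_probs(x * mask[:, :, None], mask)   # p(. | x_C, C)
        total -= np.log(probs[i, t] @ x[i, t])              # probability of the true pitch
        mask[i, t] = 1.0                                    # reveal the ground truth
    return total / (I * T)

# With a uniform dummy model this returns log(P), as expected.
uniform = lambda roll, mask: np.full(roll.shape, 1.0 / roll.shape[2])
x = np.eye(53)[np.zeros((4, 32), dtype=int)]
print(notewise_nll(x, uniform, np.random.default_rng(0)))
```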
Figure 2 shows the convergence behavior of the various Gibbs procedures, averaged over 100 runs. We see that for low values of ρ (small |C|), the chains hardly make progress beyond NADE in terms of likelihood. Higher values of ρ (large |C|) enable the model to bootstrap and reach significantly better likelihood. However, high values of ρ cause the chain to mix slowly, as can be seen in the case where ρ = 0.50. For comparison, we included a variant, Contiguous(0.50), that always masks out in contiguous chunks of at least four sixteenth notes. This variant converges much more rapidly than Bernoulli(0.50) despite masking out equally many variables on average. Note that whereas ancestral sampling (NADE) requires O(IT) model evaluations and ancestral Gibbs requires O(ITN) model evaluations, independent Gibbs requires only O(N) model evaluations, with typically N < IT.

[Figure 2: "Comparing sample quality" — likelihood under the model (roughly 0.4-0.8) against the number of Gibbs steps (0-100) for Contiguous(0.50), Bernoulli(0.50), Bernoulli(0.25), Bernoulli(0.10), Bernoulli(0.05), Bernoulli(0.01), and NADE.]

Figure 2: Likelihood under the model of Gibbs samples obtained with various context distributions p(C). NADE (equivalent to Bernoulli(0.00)) is included for reference.

We carried out a listening test on Amazon's Mechanical Turk (MTurk) to compare quality of samples from different sources (sampling schemes and Bach). The sampling schemes under study are ancestral Gibbs with Bernoulli(0.00) masking (NADE), independent Gibbs (Yao et al., 2014), and ancestral Gibbs with Contiguous(0.50) masking. For each scheme, we generate four unconditioned samples from empty piano rolls. For Bach, we randomly crop four fragments from the chorale validation set. We thus obtain four sets of four sounds each. All fragments are two measures long, and last twelve seconds after synthesis.

For each MTurk hit, users are asked to rate on a Likert scale which of two random samples they perceive as more musical. The study resulted in 192 ratings, where each source was involved in 96 pairwise comparisons. Figure 3 reports for each source the number of times it was rated as more musical. We see that although ancestral sampling on NADE performs poorly compared to Bach, both ancestral and independent Gibbs (Yao et al., 2014) were considered at least as musical as fragments from Bach, with independent Gibbs (Yao et al., 2014) outperforming ancestral sampling (NADE) by a large margin. Pairwise comparisons are listed in Appendix A.

[Figure 3: horizontal bars giving the number of wins (0-50) for Bach, NADE, Ancestral Gibbs Contiguous(0.50), and Independent Gibbs.]

Figure 3: Human evaluations from MTurk on comparing sampling schemes.

We introduced a convolutional approach to modeling musical scores based on the NADE (Uria et al., 2016) framework. Our experiments show that the NADE ancestral sampling procedure yields poor samples for our domain, which we have argued is because some conditionals are not captured well by the model. We have shown that sample quality improves significantly when we use blocked Gibbs sampling to iteratively rewrite parts of the score. Moreover, annealed independent blocked Gibbs sampling as proposed by Yao et al. (2014) is not only faster but in fact produces better samples.

We thank Kyle Kastner and Guillaume Alain, Curtis (Fjord) Hawthorne, the Google Brain Magenta team, as well as Jason Freidenfelds for helpful feedback, discussions, suggestions and support.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Moray Allan and Christopher KI Williams. Harmonising chorales by probabilistic inference. Advances in Neural Information Processing Systems, 17:25-32, 2005.
Amjad Almahairi, Nicolas Ballas, Tim Cooijmans, Yin Zheng, Hugo Larochelle, and Aaron Courville. Dynamic capacity networks. arXiv preprint arXiv:1511.07838, 2015.

Leonard E Baum and Ted Petrie. Statistical inference for probabilistic functions of finite state markov chains. The Annals of Mathematical Statistics, 37(6):1554-1563, 1966.

Michael Scott Cuthbert and Christopher Ariza. music21: A toolkit for computer-aided musicology and symbolic music data. 2010.

Vincent Dumoulin, Johnathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. arXiv preprint arXiv:1610.07629, 2016.

Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016.

Jun S Liu. The collapsed gibbs sampler in bayesian computations with applications to a gene regulation problem. Journal of the American Statistical Association, 89(427):958-966, 1994.

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. arXiv preprint arXiv:1604.07379, 2016.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988.

Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, DTIC Document, 1986.

Robert H Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in monte carlo simulations. Physical Review Letters, 58(2):86, 1987.

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.

Benigno Uria, Iain Murray, and Hugo Larochelle. A deep and tractable density estimator. In ICML, pp. 467-475, 2014.

Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. arXiv preprint arXiv:1605.02226, 2016.

Li Yao, Sherjil Ozair, Kyunghyun Cho, and Yoshua Bengio. On the equivalence between deep nade and generative stochastic networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 322-336. Springer, 2014.
"}, {"section_index": "8", "section_name": "A PAIRWISE HUMAN EVALUATION RESULTS", "section_text": "This appendix supplements Section 6.3 on the evaluation of samples by human subjects. Figure 3 lists the number of wins, ties and losses for each sample source against each other sample source. All pairs of sources were compared 32 times.

(a) Wins
      I    C    N    B
I     -   11   20   20
C    15    -   12   16
N     5   14    -    7
B    10   10   22    -

(b) Ties
      I    C    N    B
I     -    6    7    2
C     6    -    6    6
N     7    6    -    3
B     2    6    3    -

(c) Losses
      I    C    N    B
I     -   15    5   10
C    11    -   14   10
N    20   12    -   22
B    20   16    7    -

Figure 3: Pairwise human evaluation results. Each element of Table 3(a) shows the number of times the source corresponding to the row was preferred over the source corresponding to the column. Table 3(b) shows the number of ties. Table 3(c) shows the number of losses and is the transpose of Table 3(a). Source legend: I denotes Independent Gibbs (Yao et al., 2014), C denotes Contiguous Gibbs, N denotes NADE and B denotes Bach."}]
HkE0Nvqlg
[{"section_index": "0", "section_name": "STRUCTURED ATTENTION NETWORKS", "section_text": ""}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard soft-selection approach, such as attending to partial segmentations or to subtrees. We experiment with two different classes of structured attention networks: a linear-chain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention.

"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Attention networks are now a standard part of the deep learning toolkit, contributing to impressive results in neural machine translation (Bahdanau et al., 2015; Luong et al., 2015), image captioning (Xu et al., 2015), speech recognition (Chorowski et al., 2015; Chan et al., 2015), question answering (Hermann et al., 2015; Sukhbaatar et al., 2015), and algorithm-learning (Graves et al., 2014; Vinyals et al., 2015), among many other applications (see Cho et al. (2015) for a comprehensive review). This approach alleviates the bottleneck of compressing a source into a fixed-dimensional vector by equipping a model with variable-length memory (Weston et al., 2014; Graves et al., 2014; 2016), thereby providing random access into the source as needed. Attention is implemented as a hidden layer which computes a categorical distribution (or hierarchy of categorical distributions) to make a soft-selection over source elements.

Noting the empirical effectiveness of attention networks, we also observe that the standard attention-based architecture does not directly model any structural dependencies that may exist among the source elements, and instead relies completely on the hidden layers of the network. While one might argue that these structural dependencies can be learned implicitly by a deep model with enough data, in practice it may be useful to provide a structural bias. Modeling structural dependencies at the final, output layer has been shown to be important in many deep learning applications, most notably in seminal work on graph transformers (LeCun et al., 1998), key work on NLP (Collobert et al., 2011), and in many other areas (Peng et al., 2009; Do & Artieres, 2010; Jaderberg et al., 2014; Chen et al., 2015; Durrett & Klein, 2015; Lample et al., 2016, inter alia).

In this work, we consider applications which may require structural dependencies at the attention layer, and develop internal structured layers for modeling these directly.
This approach generalizes categorical soft-selection attention layers by specifying possible structural dependencies in a soft manner. Key applications will be the development of an attention function that segments the source input into subsequences and one that takes into account the latent recursive structure (i.e. parse tree) of a source sentence.

Our approach views the attention mechanism as a graphical model over a set of latent variables. The standard attention network can be seen as an expectation of an annotation function with respect to a single latent variable whose categorical distribution is parameterized to be a function of the source. In the general case we can specify a graphical model over multiple latent variables whose edges encode the desired structure. Computing forward attention requires performing inference to obtain the expectation of the annotation function, i.e. the context vector. This expectation is computed over an exponentially-sized set of structures (through the machinery of graphical models/structured prediction), hence the name structured attention network. Notably each step of this process (including inference) is differentiable, so the model can be trained end-to-end without having to resort to deep policy gradient methods (Schulman et al., 2015).

The differentiability of inference algorithms over graphical models has previously been noted by various researchers (Li & Eisner, 2009; Domke, 2011; Stoyanov et al., 2011; Stoyanov & Eisner, 2012; Gormley et al., 2015), primarily outside the area of deep learning. For example, Gormley et al. (2015) treat an entire graphical model as a differentiable circuit and backpropagate risk through variational inference (loopy belief propagation) for minimum risk training of dependency parsers. Our contribution is to combine these ideas to produce structured internal attention layers within deep networks, noting that these approaches allow us to use the resulting marginals to create new features, as long as we do so in a differentiable way.

We focus on two classes of structured attention: linear-chain conditional random fields (CRFs) (Lafferty et al., 2001) and first-order graph-based dependency parsers (Eisner, 1996). The initial work of Bahdanau et al. (2015) was particularly interesting in the context of machine translation, as the model was able to implicitly learn an alignment model as a hidden layer, effectively embedding inference into a neural network. In similar vein, under our framework the model has the capacity to learn a segmenter as a hidden layer or a parser as a hidden layer, without ever having to see a segmented sentence or a parse tree. Our experiments apply this approach to a difficult synthetic reordering task, as well as to machine translation, question answering, and natural language inference. We find that models trained with structured attention outperform standard attention models. Analysis of learned representations further reveals that interesting structures emerge as an internal layer of the model. All code is available at http://github.com/harvardnlp/struct-attn.
"}, {"section_index": "3", "section_name": "2 BACKGROUND: ATTENTION NETWORKS", "section_text": "A standard neural network consists of a series of non-linear transformation layers, where each layer produces a fixed-dimensional hidden representation. For tasks with large input spaces, this paradigm makes it hard to control the interaction between components. For example, in machine translation the source consists of an entire sentence, and the output is a prediction for each word in the translated sentence. Utilizing a standard network leads to an information bottleneck, where one hidden layer must encode the entire source sentence. Attention provides an alternative approach.¹ An attention network maintains a set of hidden representations that scale with the size of the source. The model uses an internal inference step to perform a soft-selection over these representations. This method allows the model to maintain a variable-length memory and has shown to be crucially important for scaling systems for many tasks.

Formally, let x = [x₁, . . . , x_n] represent a sequence of inputs, let q be a query, and let z be a categorical latent variable with sample space {1, . . . , n} that encodes the desired selection among these inputs. Our aim is to produce a context c based on the sequence and the query. To do so, we assume access to an attention distribution z ∼ p(z | x, q), where we condition p on the inputs x and a query q. The context over a sequence is defined as expectation, c = E_{z∼p(z|x,q)}[f(x, z)], where f(x, z) is an annotation function. Attention of this form can be applied over any type of input; however, we will primarily be concerned with "deep" networks, where both the annotation function and attention distribution are parameterized with neural networks, and the context produced is a vector fed to a downstream network.

¹ Another line of work involves marginalizing over latent variables (e.g. latent alignments) for sequence-to-sequence transduction (Kong et al., 2016; Lu et al., 2016; Yu et al., 2016; 2017).

For example, consider the case of attention-based neural machine translation (Bahdanau et al., 2015). Here the sequence of inputs [x₁, . . . , x_n] are the hidden states of a recurrent neural network (RNN), running over the words in the source sentence, q is the RNN hidden state of the target decoder (i.e. vector representation of the query q), and z represents the source position to be attended to for translation. The attention distribution p is simply p(z = i | x, q) = softmax(θ_i) where θ ∈ Rⁿ is a parameterized potential typically based on a neural network, e.g. θ_i = MLP([x_i; q]). The annotation function is defined to simply return the selected hidden state, f(x, z) = x_z. The context vector can then be computed using a simple sum,

c = E_{z∼p(z|x,q)}[f(x, z)] = Σ_{i=1}^{n} p(z = i | x, q) x_i.    (1)

Other tasks such as question answering use attention in a similar manner, for instance by replacing source x₁, . . . , x_n with a set of potential facts and q with a representation of the question.

In summary we interpret the attention mechanism as taking the expectation of an annotation function f(x, z) with respect to a latent variable z ∼ p, where p is parameterized to be a function of x and q.
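Equation (1) is easy to sketch; the potential function `theta` below (e.g. an MLP over [x_i; q]) is a hypothetical stand-in, and the names are our own.

```python
import numpy as np

def soft_attention(xs, q, theta):
    """Standard soft-selection attention (a sketch of equation (1)):
    p(z = i | x, q) = softmax(theta_i), and c = sum_i p(z = i | x, q) x_i."""
    scores = np.array([theta(x, q) for x in xs])
    p = np.exp(scores - scores.max())
    p /= p.sum()                      # softmax over source positions
    return p @ np.stack(xs)           # expected annotation E[f(x, z)]

# Toy usage with a dot-product potential.
rng = np.random.default_rng(0)
xs = [rng.normal(size=4) for _ in range(6)]
q = rng.normal(size=4)
print(soft_attention(xs, q, theta=lambda x, q: float(x @ q)))
```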
Attention networks simulate selection from a set using a soft model. In this work we consider generalizing selection to types of attention, such as selecting chunks, segmenting inputs, or even attending to latent subtrees. One interpretation of this attention is as using soft-selection that considers all possible structures over the input, of which there may be exponentially many possibilities. Of course, this expectation can no longer be computed using a simple sum, and we need to incorporate the machinery of inference directly into our neural network.

Define a structured attention model as being an attention model where z is now a vector of discrete latent variables [z₁, . . . , z_m] and the attention distribution p(z | x, q) is defined as a conditional random field (CRF), specifying the independence structure of the z variables. Formally, we assume an undirected graph structure with m vertices. The CRF is parameterized with clique (log-)potentials θ_C(z_C) ∈ R, where z_C indicates the subset of z given by clique C. Under this definition, the attention probability is defined as p(z | x, q; θ) = softmax(Σ_C θ_C(z_C)), where for symmetry we use softmax in a general sense, i.e. softmax(g(z)) = (1/Z) exp(g(z)), where Z = Σ_{z′} exp(g(z′)) is the implied partition function. In practice we use a neural CRF, where θ comes from a deep model over x, q.

In structured attention, we also assume that the annotation function f factors (at least) into clique annotation functions f(x, z) = Σ_C f_C(x, z_C). Under standard conditions on the conditional independence structure, inference techniques from graphical models can be used to compute the forward-pass expectations and the context:

c = E_{z∼p(z|x,q)}[f(x, z)] = Σ_C E_{z∼p(z_C|x,q)}[f_C(x, z_C)].

Suppose instead of soft-selecting a single input, we wanted to explicitly model the selection of contiguous subsequences. We could naively apply categorical attention over all subsequences, or hope the model learns a multi-modal distribution to combine neighboring words. Structured attention provides an alternate approach.

Concretely, let m = n, define z to be a random vector z = [z₁, . . . , z_n] with z_i ∈ {0, 1}, and define our annotation function to be f(x, z) = Σ_{i=1}^{n} f_i(x, z_i) where f_i(x, z_i) = 1{z_i = 1} x_i. The explicit expectation is then

E_{z₁,...,z_n}[f(x, z)] = Σ_{i=1}^{n} p(z_i = 1 | x, q) x_i.    (2)

[Figure 1 schematic: three latent-variable attention graphs over inputs x₁, . . . , x₄ and latent variables z₁, . . . , z₄, panels (a)-(c).]

Figure 1: Three versions of a latent variable attention model: (a) A standard soft-selection attention network. (b) A Bernoulli (sigmoid) attention network. (c) A linear-chain structured attention model for segmentation. The input and query are denoted with x and q respectively.

Equation (2) is similar to equation (1) — both are a linear combination of the input representations where the scalar is between [0, 1] and represents how much attention should be focused on each input. However, (2) is fundamentally different in two ways: (i) it allows for multiple inputs (or no inputs) to be selected for a given query; (ii) we can incorporate structural dependencies across the z_i's. For instance, we can model the distribution over z with a linear-chain CRF with pairwise edges,

p(z₁, . . . , z_n | x, q) = softmax( Σ_{i=1}^{n−1} θ_{i,i+1}(z_i, z_{i+1}) ),    (3)

where θ_{k,l} is the pairwise potential for z_i = k and z_{i+1} = l. This model is shown in Figure 1c. Compare this model to the standard attention in Figure 1a, or to a simple Bernoulli (sigmoid) selection method, p(z_i = 1 | x, q) = sigmoid(θ_i), shown in Figure 1b. All three of these methods can use potentials from the same neural network or RNN that takes x and q as inputs.

In the case of the linear-chain CRF in (3), the marginal distribution p(z_i = 1 | x) can be calculated efficiently in linear-time for all i using message-passing, i.e. the forward-backward algorithm. These marginals allow us to calculate (2), and in doing so we implicitly sum over an exponentially-sized set of structures (i.e. all binary sequences of length n) through dynamic programming. We refer to this type of attention layer as a segmentation attention layer.

Note that the forward-backward algorithm is being used as parameterized pooling (as opposed to output computation), and can be thought of as generalizing the standard attention softmax. Crucially this generalization from vector softmax to forward-backward is just a series of differentiable steps, and we can compute gradients of its output (marginals) with respect to its input (potentials). This will allow the structured attention model to be trained end-to-end as part of a deep model.
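The forward-backward pooling for the segmentation layer can be sketched as follows. This is a minimal NumPy version with unary potentials plus one shared pairwise potential for illustration (our own simplification; in the model the potentials come from a network, and the backward pass through this computation is what Section 3.3 addresses).

```python
import numpy as np

def logsumexp(m, axis):
    mx = m.max(axis=axis)
    return mx + np.log(np.exp(m - np.expand_dims(mx, axis)).sum(axis=axis))

def segment_marginals(unary, pairwise):
    """Marginals p(z_i = 1 | x, q) of a binary linear-chain CRF via the
    forward-backward algorithm in log space (a sketch of the
    segmentation attention layer).
    unary: (n, 2) per-position potentials; pairwise: (2, 2) edge potentials."""
    n = unary.shape[0]
    alpha = np.zeros((n, 2))
    beta = np.zeros((n, 2))
    alpha[0] = unary[0]
    for i in range(1, n):
        # alpha[i, c] = unary[i, c] + logsum_y(alpha[i-1, y] + pairwise[y, c])
        alpha[i] = unary[i] + logsumexp(alpha[i - 1][:, None] + pairwise, axis=0)
    for i in range(n - 2, -1, -1):
        # beta[i, c] = logsum_y(pairwise[c, y] + unary[i+1, y] + beta[i+1, y])
        beta[i] = logsumexp(pairwise + (unary[i + 1] + beta[i + 1])[None, :], axis=1)
    log_z = logsumexp(alpha[-1][None, :], axis=1)[0]
    return np.exp(alpha[:, 1] + beta[:, 1] - log_z)

rng = np.random.default_rng(0)
marginals = segment_marginals(rng.normal(size=(5, 2)), rng.normal(size=(2, 2)))
print(marginals)  # p(z_i = 1) for each of the 5 positions, each in [0, 1]
```

These marginals then weight the inputs as in equation (2); working in log space avoids the underflow issues discussed in Section 3.3.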
This same approach can be used for more involved structural dependencies. One popular structure for natural language tasks is a dependency tree, which enforces a structural bias on the recursive dependencies common in many languages. In particular, a dependency tree enforces that each word in a source sentence is assigned exactly one parent word (head word), and that these assignments do not cross (projective structure). Employing this bias encourages the system to make a soft-selection based on learned syntactic dependencies, without requiring linguistic annotations or a pipelined decision.

A dependency parser can be partially formalized as a graphical model with the following cliques (Smith & Eisner, 2008): latent variables z_ij in {0, 1} for all i != j, which indicate that the i-th word is the parent of the j-th word (i.e. x_i -> x_j); and a special global constraint that rules out configurations of z_ij's that violate parsing constraints (e.g. one head, projectivity).

The parameters to the graph-based CRF dependency parser are the potentials theta_ij, which reflect the score of selecting x_i as the parent of x_j. The probability of a parse tree z given the sentence is

    p(z | x, q) = softmax( 1{z is valid} \sum_{i != j} 1{z_ij = 1} theta_ij )     (4)

where z is represented as a vector of z_ij's for all i != j. It is possible to calculate the marginal probability of each edge p(z_ij = 1 | x, q) for all i, j in O(n^3) time using the inside-outside algorithm (Baker, 1979) on the data structures of Eisner (1996).

    procedure FORWARDBACKWARD(theta)
        alpha[0, <t>] <- 0
        beta[n+1, <t>] <- 0
        for i = 1, ..., n; c in C do
            alpha[i, c] <- (+)_y alpha[i-1, y] (x) theta_{i-1,i}(y, c)
        for i = n, ..., 1; c in C do
            beta[i, c] <- (+)_y beta[i+1, y] (x) theta_{i,i+1}(c, y)
        A <- alpha[n+1, <t>]
        for i = 1, ..., n; c in C do
            p(z_i = c | x) <- exp(alpha[i, c] (x) beta[i, c] (x) -A)
        return p

Figure 2: Algorithms for linear-chain CRF: (left) computation of forward-backward tables alpha, beta, and marginal probabilities p from potentials theta (forward-backward algorithm); (right) backpropagation of loss gradients with respect to the marginals, nabla_p^L (reproduced in Section 3.3). C denotes the state space and <t> is the special start/stop state. Typically the forward-backward with marginals is performed in the log-space semifield R union {-inf} with binary operations (+) = logadd and (x) = + for numerical precision. However, backpropagation requires working with the log of negative values (since nabla_p^L could be negative), so we extend to a field [R union {-inf}] x {+, -} with special +/- log-space operations. Binary operations applied to vectors are implied to be element-wise. The signexp function is defined as signexp(l_a) = s_a exp(l_a). See Section 3.3 and Table 1 for more details.

The parsing constraints ensure that each word has exactly one head (i.e. \sum_{i=1}^{n} z_ij = 1). Therefore, if we want to utilize the soft-head selection of a position j, the context vector is defined as:

    f_j(x, z) = \sum_{i=1}^{n} 1{z_ij = 1} x_i,    c_j = E_z[f_j(x, z)] = \sum_{i=1}^{n} p(z_ij = 1 | x, q) x_i

Note that in this case the annotation function has the subscript j to produce a context vector for each word in the sentence. Similar types of attention can be applied for other tree properties (e.g. soft-children).
We refer to this type of attention layer as a syntactic attention layer.

3.3 END-TO-END TRAINING

Graphical models of this form have been widely used as the final layer of deep models. Our contribution is to argue that these networks can be added within deep networks in place of simple attention layers. The whole model can then be trained end-to-end.

The main complication in utilizing this approach within the network itself is the need to backpropagate the gradients through an inference algorithm as part of the structured attention network. Past work has demonstrated the techniques necessary for this approach (see Stoyanov et al. (2011)), but to our knowledge it is very rarely employed.

Consider the case of the simple linear-chain CRF layer from equation (3). Figure 2 (left) shows the standard forward-backward algorithm for computing the marginals p(z_i = 1 | x, q; theta). If we treat the forward-backward algorithm as a neural network layer, its input are the potentials theta, and its output after the forward pass are these marginals.^3 To backpropagate a loss through this layer we need to compute the gradient of the loss L with respect to theta, nabla_theta^L, as a function of the gradient of the loss with respect to the marginals, nabla_p^L.^4 As the forward-backward algorithm consists of differentiable steps, this function can be derived using reverse-mode automatic differentiation of the forward-backward algorithm itself. Note that this reverse-mode algorithm conveniently has a parallel structure to the forward version, and can also be implemented using dynamic programming (Figure 2, right):

    procedure BACKPROPFORWARDBACKWARD(theta, p, nabla_p^L)
        nabla_hat <- log p (x) log nabla_p^L (x) -A
        alpha_hat[0, <t>] <- 0;  beta_hat[n+1, <t>] <- 0
        for i = n, ..., 1; c in C do
            beta_hat[i, c] <- nabla_hat[i, c] (+) (+)_y theta_{i,i+1}(c, y) (x) beta_hat[i+1, y]
        for i = 1, ..., n; c in C do
            alpha_hat[i, c] <- nabla_hat[i, c] (+) (+)_y theta_{i-1,i}(y, c) (x) alpha_hat[i-1, y]
        for i = 1, ..., n; y, c in C do
            nabla_{theta_{i-1,i}(y,c)}^L <- signexp( alpha_hat[i-1, y] (x) beta[i, c] (+) alpha[i-1, y] (x) beta_hat[i, c] )
        return nabla_theta^L

However, we cannot simply use current off-the-shelf tools for this task. For one, efficiency is quite important for these models and so the benefits of hand-optimizing the reverse-mode implementation still outweigh the simplicity of automatic differentiation. Secondly, numerical precision becomes a major issue for structured attention networks. For computing the forward pass and the marginals, it is important to use the standard log-space semifield over R union {-inf} with binary operations ((+) = logadd, (x) = +) to avoid underflow of probabilities. For computing the backward pass, we need to remain in log-space, but also handle the log of negative values (since nabla_p^L could be negative). We therefore use the signed log-space semifield over [R union {-inf}] x {+, -}; Table 1, based on Li & Eisner (2009), demonstrates how to handle the required operations. Figure 2 (right) describes backpropagation through the forward-backward algorithm. For the syntactic attention layer, the forward pass can be computed using the inside-outside implementation of Eisner's algorithm (Eisner, 1996). Similarly, the backpropagation parallels the inside-outside structure. The forward/backward pass through the inside-outside algorithm is described in Appendix B.

^3 Confusingly, "forward" in this case is different than in the forward-backward algorithm, as the marginals themselves are the output. However the two uses of the term are actually quite related. The forward-backward algorithm can be interpreted as a forward and backpropagation pass on the log partition function. See Eisner (2016) for further details (appropriately titled "Inside-Outside and Forward-Backward Algorithms Are Just Backprop"). As such our full approach can be seen as computing second-order information. This interpretation is central to Li & Eisner (2009).

^4 In general we use nabla_b^a to denote the Jacobian of a with respect to b.

    s_a  s_b  |  l_{a (+) b}        s_{a (+) b}  |  l_{a (x) b}   s_{a (x) b}
     +    +   |  l_a + log(1 + d)       +        |  l_a + l_b        +
     +    -   |  l_a + log(1 - d)       +        |  l_a + l_b        -
     -    +   |  l_a + log(1 - d)       -        |  l_a + l_b        -
     -    -   |  l_a + log(1 + d)       -        |  l_a + l_b        +

Table 1: Signed log-space semifield (from Li & Eisner (2009)). Each real number a is represented as a pair (l_a, s_a) where l_a = log|a| and s_a = sign(a). Therefore a = s_a exp(l_a). For the above we let d = exp(l_b - l_a) and assume |a| >= |b|.
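A minimal sketch of the signed log-space operations in Table 1, assuming numbers are carried as (log-magnitude, sign) pairs; only the (+) operation needs the case analysis, while (x) just adds log-magnitudes and multiplies signs.

    import math

    def signed_log_add(la, sa, lb, sb):
        # a (+) b in the signed log-space semifield of Table 1.
        if la < lb:                              # enforce |a| >= |b|
            la, sa, lb, sb = lb, sb, la, sa
        d = math.exp(lb - la)                    # d = exp(l_b - l_a) <= 1
        if sa == sb:
            return la + math.log1p(d), sa        # l_a + log(1 + d), sign s_a
        return la + math.log1p(-d), sa           # l_a + log(1 - d), sign s_a

    def signed_log_mul(la, sa, lb, sb):
        # a (x) b: log-magnitudes add, signs multiply.
        return la + lb, sa * sb

    # Usage: represent a = -0.25 and b = 0.75 as (log|.|, sign) pairs.
    a = (math.log(0.25), -1)
    b = (math.log(0.75), +1)
    l, s = signed_log_add(*a, *b)
    assert math.isclose(s * math.exp(l), 0.5)    # -0.25 + 0.75 = 0.5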
4 EXPERIMENTS

We experiment with three instantiations of structured attention networks on four different tasks: (a) a simple, synthetic tree manipulation task using the syntactic attention layer, (b) machine translation with segmentation attention (i.e. a two-state linear-chain CRF), (c) question answering using an n-state linear-chain CRF for multi-step inference over n facts, and (d) natural language inference with syntactic tree attention. These experiments are not intended to boost the state-of-the-art for these tasks but to test whether these methods can be trained effectively in an end-to-end fashion, can yield improvements over standard selection-based attention, and can learn plausible latent structures. All model architectures, hyperparameters, and training details are further described in Appendix A.

4.1 TREE TRANSDUCTION

The first set of experiments look at a tree-transduction task. These experiments use synthetic data to explore a failure case of soft-selection attention models. The task is to learn to convert a random formula given in prefix notation to one in infix notation, e.g.,

    ( * ( + 15 7 1 8 ) ( + 19 0 11 ) )  =>  ( 15 + 7 + 1 + 8 ) * ( 19 + 0 + 11 )

The alphabet consists of the symbols {(, ), +, *}, numbers between 0 and 20, and a special root symbol $. This task is used as a preliminary task to see if the model is able to learn the implicit tree structure on the source side. The model itself is an encoder-decoder model, where the encoder is defined below and the decoder is an LSTM. See Appendix A.2 for the full model.

[Figure 3 heatmaps omitted; see caption.]
Figure 3: Visualization of the source self-attention distribution for the simple (left) and structured (right) attention models on the tree transduction task. $ is the special root symbol. Each row delineates the distribution over the parents (i.e. each row sums to one). The attention distributions obtained from the parsing marginals are more able to capture the tree structure, e.g. the attention weights of closing parentheses are generally placed on the opening parentheses (though not necessarily on a single parenthesis).

Training uses 15K prefix-infix pairs where the maximum nesting depth is set to be between 2-4 (the above example has depth 3), with 5K pairs in each depth bucket. The number of expressions in each parenthesis is limited to be at most 4.
Test uses 1K unseen sequences with depth between 2-6 (note, specifically deeper than train), with 200 sequences for each depth. The performance is measured as the average proportion of correct target tokens produced until the first failure (as in Grefenstette et al. (2015)).

For experiments we try using different forms of self-attention over embedding-only encoders. Let x_j be an embedding for each source symbol; our three variants of the source representation x~_j are: (a) no atten, just symbol embeddings by themselves, i.e. x~_j = x_j; (b) simple attention, symbol embeddings and a soft-pairing for each symbol, i.e. x~_j = [x_j; c_j] where c_j = \sum_{i=1}^{n} softmax(theta_ij) x_i is calculated using soft-selection; (c) structured attention, symbol embeddings and a soft-parent, i.e. x~_j = [x_j; c_j] where c_j = \sum_{i=1}^{n} p(z_ij = 1 | x) x_i is calculated using parsing marginals, obtained from the syntactic attention layer. None of these models use an explicit query value; the potentials come from running a bidirectional LSTM over the source, producing hidden vectors h_i, and then computing

    theta_ij = tanh(s^T tanh(W_1 h_i + W_2 h_j + b))

where s, b, W_1, W_2 are parameters (see Appendix A.1).

The source representations x~_1, ..., x~_n are attended over using the standard attention mechanism at each decoding step by an LSTM decoder.^5 Additionally, the symbol embedding parameters are shared between the parsing LSTM and the source encoder.

^5 Thus there are two attention mechanisms at work under this setup. First, structured attention over the source only to obtain soft-parents for each symbol (i.e. self-attention). Second, standard softmax alignment attention over the source representations during decoding.

Results  Table 2 has the results for the task. Note that this task is fairly difficult as the encoder is quite simple. The baseline model (unsurprisingly) performs poorly as it has no information about the source ordering. The simple attention model performs better, but is significantly outperformed by the structured model with a tree structure bias. We hypothesize that the structural bias helps the model in reconstructing the arithmetic tree. Figure 3 shows the attention distributions for the models on the same source sequence, which indicates that the structured model is able to learn boundaries (i.e. parentheses).

    Depth | No Atten | Simple | Structured
      2   |   7.6    |  87.4  |   99.2
      3   |   4.1    |  49.6  |   87.0
      4   |   2.8    |  23.3  |   64.5
      5   |   2.1    |  15.0  |   30.8
      6   |   1.5    |   8.5  |   18.2

Table 2: Performance (average length to failure %) of models on the tree-transduction task.

4.2 NEURAL MACHINE TRANSLATION

Our second set of experiments use a full neural machine translation model utilizing attention over subsequences. Here both the encoder and decoder are LSTMs, and we replace standard simple attention with a segmentation attention layer. We experiment with two settings: translating directly from unsegmented Japanese characters to English words (effectively using structured attention to perform soft word segmentation), and translating from segmented Japanese words to English words (which can be interpreted as doing phrase-based neural machine translation). Japanese word segmentation is done using the KyTea toolkit (Neubig et al., 2011).

The segmentation attention layer is a two-state CRF where the unary potentials at the j-th decoder step are parameterized as

    theta_i(k) = { h_i^T W h'_j,  k = 1
                 { 0,             k = 0

Here [h_1, ..., h_n] are the encoder hidden states and h'_j is the j-th decoder hidden state (i.e. the query vector). The pairwise potentials are parameterized linearly with b, i.e.
    theta_{i,i+1}(z_i, z_{i+1}) = theta_i(z_i) + theta_{i+1}(z_{i+1}) + b_{z_i, z_{i+1}}

where the entries of b are shared across all positions. All together, the segmentation attention layer therefore requires just 4 additional parameters. Appendix A describes the full model architecture.

The data comes from the Workshop on Asian Translation (WAT) (Nakazawa et al., 2016). We randomly pick 500K sentences from the original training set (of 3M sentences) where the Japanese sentence was at most 50 characters and the English sentence was at most 50 words. We apply the same length filter on the provided validation/test sets for evaluation. The vocabulary consists of all tokens that occurred at least 10 times in the training corpus.

We experiment with three attention configurations: (a) standard simple attention, i.e. c_j = \sum_{i=1}^{n} softmax(theta_i) h_i; (b) sigmoid attention, multiple selection with Bernoulli random variables, i.e. c_j = \sum_{i=1}^{n} sigmoid(theta_i) h_i; (c) structured attention, encoded with normalized CRF marginals,

    c_j = (lambda / gamma) \sum_{i=1}^{n} p(z_i = 1 | x, q) h_i,    gamma = \sum_{i=1}^{n} p(z_i = 1 | x, q)

The normalization term gamma is not ideal but we found it to be helpful for stable training.^6 lambda is a hyperparameter (we use lambda = 2) and we further add an l2 penalty of 0.005 on the pairwise potentials b. These values were found via grid search on the validation set.

^6 With the standard expectation (i.e. c_j = \sum_{i=1}^{n} p(z_i = 1 | x, q) h_i) we empirically observed the marginals to quickly saturate. We tried various strategies to overcome this, such as putting an l2 penalty on the unary potentials and initializing with a pretrained sigmoid attention model, but simply normalizing the marginals proved to be the most effective. However, this changes the interpretation of the context vector as the expectation of an annotation function in this case.

Results  Results for the translation task on the test set are given in Table 3. Sigmoid attention outperforms simple (softmax) attention on the character-to-word task, potentially because it is able to learn many-to-one alignments. On the word-to-word task, the opposite is true, with simple attention outperforming sigmoid attention. Structured attention outperforms both models on both tasks, although improvements on the word-to-word task are modest and unlikely to be statistically significant.

          | Simple | Sigmoid | Structured
    CHAR  |  12.6  |  13.1   |   14.6
    WORD  |  14.1  |  13.8   |   14.3

Table 3: Translation performance as measured by BLEU (higher is better) on character-to-word and word-to-word Japanese-English translation for the three different models.

For further analysis, Figure 4 shows a visualization of the different attention mechanisms on the character-to-word setup. The simple model generally focuses attention heavily on a single character. In contrast, the sigmoid and structured models are able to spread their attention distribution over contiguous subsequences. The structured attention learns additional parameters (i.e. b) to smooth out this type of attention.

[Figure 4 heatmaps omitted; see caption.]
Figure 4: Visualization of the source attention distribution for the simple (top left), sigmoid (top right), and structured (bottom left) attention models over the ground truth sentence on the character-to-word translation task. Manually-annotated alignments are shown in bottom right. Each row delineates the attention weights over the source sentence at each step of decoding. The sigmoid/structured attention models are able to learn an implicit segmentation model and focus on multiple characters at each time step.
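For completeness, a sketch of how the three context vectors above differ, assuming the unary scores and (for the structured case) the CRF marginals have already been computed, e.g. with a routine like segmentation_marginals from Section 3.1; lambda = 2 as in the text.

    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def context_vectors(theta_unary, h, marginals, lam=2.0):
        # theta_unary: (n,) scores h_i^T W h'_j; h: (n, d) encoder states;
        # marginals: (n,) p(z_i = 1 | x, q) from forward-backward.
        c_simple = softmax(theta_unary) @ h                    # (a) categorical attention
        c_sigmoid = (1.0 / (1.0 + np.exp(-theta_unary))) @ h   # (b) Bernoulli attention
        gamma = marginals.sum()                                # normalizer gamma
        c_struct = (lam / gamma) * (marginals @ h)             # (c) normalized CRF marginals
        return c_simple, c_sigmoid, c_struct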
4.3 QUESTION ANSWERING

Our third experiment is on question answering (QA) with the linear-chain CRF attention layer for inference over multiple facts. We use the bAbI dataset (Weston et al., 2015), where the input is a set of sentences/facts paired with a question, and the answer is a single token. For many of the tasks the model has to attend to multiple supporting facts to arrive at the correct answer (see Figure 5 for an example), and existing approaches use multiple 'hops' to greedily attend to different facts. We experiment with employing structured attention to perform inference in a non-greedy way. As the ground truth supporting facts are given in the dataset, we are able to assess the model's inference accuracy.

The baseline (simple) attention model is the End-To-End Memory Network (Sukhbaatar et al., 2015) (MemN2N), which we briefly describe here. See Appendix A.4 for full model details. Let x_1, ..., x_n be the input embedding vectors for the n sentences/facts and let q be the query embedding. In MemN2N, z_k is the random variable for the sentence to select at the k-th inference step (i.e. k-th hop), and thus z_k in {1, ..., n}. The probability distribution over z_k is given by p(z_k = i | x, q) = softmax((x_i^k)^T q^k), and the context vector is given by c^k = \sum_{i=1}^{n} p(z_k = i | x, q) o_i^k, where x_i^k, o_i^k are the input and output embedding for the i-th sentence at the k-th hop, respectively. The k-th context vector is used to modify the query, q^{k+1} = q^k + c^k, and this process repeats for k = 1, ..., K (for k = 1 we have x_i^k = x_i, q^k = q, c^k = 0). The K-th context and query vectors are used to obtain the final answer. The attention mechanism for a K-hop MemN2N network can therefore be interpreted as a greedy selection of a length-K sequence of facts (i.e. z_1, ..., z_K).

For structured attention, we use an n-state, K-step linear-chain CRF.^7 We experiment with two different settings: (a) a unary CRF model with node potentials

    theta_k(i) = (x_i^k)^T q^k

and (b) a binary CRF model with pairwise potentials

    theta_{k,k+1}(i, j) = (x_i^k)^T q^k + (x_i^k)^T x_j^{k+1} + (x_j^{k+1})^T q^{k+1}

^7 Note that this differs from the segmentation attention for the neural machine translation experiments described above, which was a K-state (with K = 2), n-step linear-chain CRF.

The binary CRF model is designed to test the model's ability to perform sequential reasoning. For both (a) and (b), a single context vector is computed: c = \sum_{z_1, ..., z_K} p(z_1, ..., z_K | x, q) f(x, z) (unlike MemN2N, which computes K context vectors). Evaluating c requires summing over all n^K possible sequences of length K, which may not be practical for large values of K. However, if f(x, z) factors over the components of z (e.g. f(x, z) = \sum_{k=1}^{K} f_k(x, z_k)), then one can rewrite the context vector in terms of marginals, c = \sum_{k=1}^{K} \sum_{i=1}^{n} p(z_k = i | x, q) f_k(x, z_k). In our experiments we use f_k(x, z_k) = o_{z_k}^k. All three models are described in further detail in Appendix A.4.
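A small sketch of the factored context computation just described: given per-hop marginals p(z_k = i | x, q) (e.g. from a K-step forward-backward pass over the n-state chain) and the output embeddings o^k, the sum over all n^K sequences collapses to K marginal-weighted sums.

    import numpy as np

    def factored_context(marginals, o):
        # marginals: (K, n) with marginals[k, i] = p(z_k = i | x, q)
        # o: (K, n, d) output embeddings o_i^k; f_k(x, z_k) = o_{z_k}^k
        # c = sum_k sum_i p(z_k = i | x, q) o_i^k -- no n^K enumeration needed
        return np.einsum('kn,knd->d', marginals, o)

    # Usage: K = 3 hops over n = 9 facts with d = 20 dimensional embeddings.
    rng = np.random.default_rng(0)
    K, n, d = 3, 9, 20
    marginals = rng.random((K, n)); marginals /= marginals.sum(axis=1, keepdims=True)
    o = rng.normal(size=(K, n, d))
    c = factored_context(marginals, o)   # shape (d,)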
Results  We use the version of the dataset with 1K questions for each task. Since all models reduce to the same network for tasks with one supporting fact, those tasks are excluded from our experiments. The number of hops (i.e. K) is task-dependent, and the number of memories (i.e. n) is limited to be at most 25 (note that many questions have fewer than 25 facts; e.g. the example in Figure 5 has 9 facts). Due to high variance in model performance, we train 20 models with different initializations for each task and report the test accuracy of the model that performed the best on a 10% held-out validation set (as is typically done for bAbI tasks).

Results of the three different models are shown in Table 4. For correct answer selection (Ans %) we find that MemN2N and the Binary CRF model perform similarly while the Unary CRF model does worse, indicating the importance of including pairwise potentials. We also assess each model's ability to attend to the correct supporting facts in Table 4 (Fact %). Since ground truth supporting facts are provided for each query, we can check the sequence accuracy of supporting facts for each model (i.e. the rate of selecting the exact correct sequence of facts) by taking the highest probability sequence z^ = argmax p(z_1, ..., z_K | x, q) from the model and checking against the ground truth. Overall the Binary CRF is able to recover supporting facts better than MemN2N. This improvement is significant and can be up to two-fold as seen for tasks 2, 11, 13 & 17. However, we observed that on many tasks it is sufficient to select only the last (or first) fact correctly to predict the answer, and thus higher sequence selection accuracy does not necessarily imply better answer accuracy (and vice versa). For example, all three models get 100% answer accuracy on task 15 but have different supporting fact accuracies.

Finally, in Figure 5 we visualize the output edge marginals produced by the Binary CRF model for a single question in task 16. In this instance, the model is uncertain but ultimately able to select the right sequence of facts 5 -> 6 -> 8.

                                          MemN2N          Binary CRF       Unary CRF
    Task                              K   Ans %  Fact %   Ans %  Fact %   Ans %  Fact %
    TASK 02 - TWO SUPPORTING FACTS    2   87.3   46.8     84.7   81.8     43.5   22.3
    TASK 03 - THREE SUPPORTING FACTS  3   52.6    1.4     40.5    0.1     28.2    0.0
    TASK 07 - COUNTING                3   83.2    -       83.5    -       79.3    -
    TASK 08 - LISTS SETS              3   94.1    -       93.3    -       87.1    -
    TASK 11 - INDEFINITE KNOWLEDGE    2   97.8   38.2     97.7   80.8     88.6    0.0
    TASK 13 - COMPOUND COREFERENCE    2   95.6   14.8     97.0   36.4     94.4    9.3
    TASK 14 - TIME REASONING          2   99.9   77.6     99.7   98.2     90.5   30.2
    TASK 15 - BASIC DEDUCTION         2  100.0   59.3    100.0   89.5    100.0   51.4
    TASK 16 - BASIC INDUCTION         3   97.1   91.0     97.9   85.6     98.0   41.4
    TASK 17 - POSITIONAL REASONING    2   61.1   23.9     60.6   49.6     59.7   10.5
    TASK 18 - SIZE REASONING          2   86.4    3.3     92.2    3.9     92.0    1.4
    TASK 19 - PATH FINDING            2   21.3   10.2     24.4   11.5     24.3    7.8
    AVERAGE                               81.4   39.6     81.0   53.7     73.8   17.4

Table 4: Answer accuracy (Ans %) and supporting fact selection accuracy (Fact %) of the three QA models on the 1K bAbI dataset. K indicates the number of hops/inference steps used for each task. Tasks 7 and 8 both contain a variable number of facts and hence they are excluded from the fact accuracy measurement. Supporting fact selection accuracy is calculated by taking the average of the 10 best runs (out of 20) for each task.

[Figure 5 graph omitted. Supporting facts shown in the figure: 1 julius is a lion; 2 greg is a frog; 3 greg is white; 4 julius is white; 5 bernhard is a rhino; 6 brian is a rhino; 7 lily is a lion; 8 brian is green; 9 lily is gray. Question: what color is bernhard? Answer: green. Correct facts: 5, 6, 8.]
Figure 5: Visualization of the attention distribution over supporting fact sequences for an example question in task 16 for the Binary CRF model. The actual question is displayed at the bottom along with the correct answer and the ground truth supporting facts (5 -> 6 -> 8). The edges represent the marginal probabilities p(z_k, z_{k+1} | x, q), and the nodes represent the n supporting facts (here we have n = 9). The text for the supporting facts is shown on the left. The top three most likely sequences are: p(z_1 = 5, z_2 = 6, z_3 = 8 | x, q) = 0.0564, p(z_1 = 5, z_2 = 6, z_3 = 3 | x, q) = 0.0364, p(z_1 = 5, z_2 = 2, z_3 = 3 | x, q) = 0.0356.
4.4 NATURAL LANGUAGE INFERENCE

The final experiment looks at the task of natural language inference (NLI) with the syntactic attention layer. In NLI, the model is given two sentences (hypothesis/premise) and has to predict their relationship: entailment, contradiction, neutral.

For this task, we use the Stanford NLI dataset (Bowman et al., 2015) and model our approach off of the decomposable attention model of Parikh et al. (2016). This model takes in the matrix of word embeddings as the input for each sentence and performs inter-sentence attention to predict the answer. Appendix A.5 describes the full model.

As in the transduction task, we focus on modifying the input representation to take into account soft parents via self-attention (i.e. intra-sentence attention). In addition to the three baselines described for tree transduction (No Attention, Simple, Structured), we also explore two additional settings: (d) hard pipeline parent selection, i.e. x~_j = [x_j; x_head(j)], where head(j) is the index of x_j's parent^8; (e) pretrained structured attention: structured attention where the parsing layer is pretrained for one epoch on a parsed dataset (which was enough for convergence).

^8 The parents are obtained from running the dependency parser of Andor et al. (2016), available at https://github.com/tensorflow/models/tree/master/syntaxnet

Results  Results of our models are shown in Table 5. Simple attention improves upon the no attention model, and this is consistent with improvements observed by Parikh et al. (2016) with their intra-sentence attention model. The pipelined model with hard parents also slightly improves upon the baseline. Structured attention outperforms both models, though surprisingly, pretraining the syntactic attention layer on the parse trees performs worse than training it from scratch; it is possible that the pretrained attention is too strict for this task.

We also obtain the hard parse for an example sentence by running the Viterbi algorithm on the syntactic attention layer with the non-pretrained model:

[Dependency arcs omitted; the parsed sentence is "The men are fighting outside a deli" with root symbol $.]

    Model                                                                      Accuracy %
    Handcrafted features (Bowman et al., 2015)                                   78.2
    LSTM encoders (Bowman et al., 2015)                                          80.6
    Tree-Based CNN (Mou et al., 2016)                                            82.1
    Stack-Augmented Parser-Interpreter Neural Net (Bowman et al., 2016)          83.2
    LSTM with word-by-word attention (Rocktaschel et al., 2016)                  83.5
    Matching LSTMs (Wang & Jiang, 2016)                                          86.1
    Decomposable attention over word embeddings (Parikh et al., 2016)            86.3
    Decomposable attention + intra-sentence attention (Parikh et al., 2016)      86.8
    Attention over constituency tree nodes (Zhao et al., 2016)                   87.2
    Neural Tree Indexers (Munkhdalai & Yu, 2016)                                 87.3
    Enhanced BiLSTM Inference Model (Chen et al., 2016)                          87.7
    Enhanced BiLSTM Inference Model + ensemble (Chen et al., 2016)               88.3
    No Attention                                                                 85.8
    No Attention + Hard parent                                                   86.1
    Simple Attention                                                             86.2
    Structured Attention                                                         86.8
    Pretrained Structured Attention                                              86.5

Table 5: Results of our models (bottom) and others (top) on the Stanford NLI test set. Our baseline model has the same architecture as Parikh et al. (2016) but the performance is slightly different due to different settings (e.g. we train for 100 epochs with a batch size of 32 while Parikh et al. (2016) train for 400 epochs with a batch size of 4 using asynchronous SGD).
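The soft-parent input used by the structured setting is easy to state in code. The sketch below assumes the n x n matrix of parent marginals p(z_ij = 1 | x) has already been produced by the parsing layer (inside-outside), and simply concatenates each word with its expected head.

    import numpy as np

    def soft_parent_inputs(x, parent_marginals):
        # x: (n, d) word representations; parent_marginals: (n, n) with
        # parent_marginals[i, j] = p(z_ij = 1 | x), columns summing to one
        # (each word j has exactly one head under the parsing constraints).
        c = parent_marginals.T @ x               # c_j = sum_i p(z_ij = 1 | x) x_i
        return np.concatenate([x, c], axis=1)    # x~_j = [x_j; c_j], shape (n, 2d)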
Despite being trained without ever being exposed to an explicit parse tree, the syntactic attention layer learns an almost plausible dependency structure. In the above example it is able to correctly identify the main verb fighting, but makes mistakes on determiners (e.g. the head of The should be men). We generally observed this pattern across sentences, possibly because the verb structure is more important for the inference task.

5 CONCLUSION

This work outlines structured attention networks, which incorporate graphical models to generalize simple attention, and describes the technical machinery and computational techniques for backpropagating through models of this form. We implement two classes of structured attention layers: a linear-chain CRF (for neural machine translation and question answering) and a more complicated first-order dependency parser (for tree transduction and natural language inference). Experiments show that this method can learn interesting structural properties and improve on top of standard models. Structured attention could also be a way of learning latent labelers or parsers through attention on other tasks.

It should be noted that the additional complexity in computing the attention distribution increases run-time; for example, structured attention was approximately 5x slower to train than simple attention for the neural machine translation experiments, even though both attention layers have the same asymptotic run-time (i.e. O(n)).

Embedding differentiable inference (and more generally, differentiable algorithms) into deep models is an exciting area of research. While we have focused on models that admit (tractable) exact inference, similar techniques can be used to embed approximate inference methods. Many optimization algorithms (e.g. gradient descent, LBFGS) are also differentiable (Domke, 2012; Maclaurin et al., 2015), and have been used as output layers for structured prediction in energy-based models (Belanger & McCallum, 2016; Wang et al., 2016). Incorporating them as internal neural network layers is an interesting avenue for future work.

ACKNOWLEDGMENTS

We thank Tao Lei, Ankur Parikh, Tim Vieira, Matt Gormley, Andre Martins, Jason Eisner, Yoav Goldberg, and the anonymous reviewers for helpful comments, discussion, notes, and code. We additionally thank Yasumasa Miyamoto for verifying Japanese-English translations.

REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of ICLR, 2015.

David Belanger and Andrew McCallum. Structured Prediction Energy Networks. In Proceedings of ICML, 2016.

Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. Enhancing and Combining Sequential and Tree LSTM for Natural Language Inference. arXiv:1609.06038, 2016.

Kyunghyun Cho, Aaron Courville, and Yoshua Bengio. Describing Multimedia Content using Attention-based Encoder-Decoder Networks. In IEEE Transactions on Multimedia, 2015.

Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-Based Models for Speech Recognition. In Proceedings of NIPS, 2015.

Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural Language Processing (almost) from Scratch.
Journal of Machine Learning Research, 12:2493-2537, 2011.

Trinh-Minh-Tri Do and Thierry Artieres. Neural Conditional Random Fields. In Proceedings of AISTATS, 2010.

Justin Domke. Parameter Learning with Truncated Message-Passing. In Proceedings of CVPR, 2011.

Justin Domke. Generic Methods for Optimization-Based Modeling. In Proceedings of AISTATS, pp. 318-326, 2012.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2021-2159, 2011.

Greg Durrett and Dan Klein. Neural CRF Parsing. In Proceedings of ACL, 2015.

Jason M. Eisner. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proceedings of ACL, 1996.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing Machines. arXiv:1410.5401, 2014.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to Transduce with Unbounded Memory. In Proceedings of NIPS, 2015.

Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep Structured Output Learning for Unconstrained Text Recognition. In Proceedings of ICLR, 2014.

Diederik Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In Proceedings of ICLR, 2015.

Eliyahu Kipperwasser and Yoav Goldberg. Simple and Accurate Dependency Parsing using Bidirectional LSTM Feature Representations. In TACL, 2016.

Lingpeng Kong, Chris Dyer, and Noah A. Smith. Segmental Recurrent Neural Networks. In Proceedings of ICLR, 2016.

John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of ICML, 2001.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. Neural Architectures for Named Entity Recognition. In Proceedings of NAACL, 2016.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based Learning Applied to Document Recognition. In Proceedings of IEEE, 1998.

Zhifei Li and Jason Eisner. First- and Second-Order Expectation Semirings with Applications to Minimum-Risk Training on Translation Forests. In Proceedings of EMNLP, 2009.

Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith, and Steve Renals. Segmental Recurrent Neural Networks for End-to-End Speech Recognition. In Proceedings of INTERSPEECH, 2016.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of EMNLP, 2015.

Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based Hyperparameter Optimization through Reversible Learning. In Proceedings of ICML, 2015.

Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. Natural Language Inference by Tree-Based Convolution and Heuristic Matching. In Proceedings of ACL, 2016.

Graham Neubig, Yosuke Nakata, and Shinsuke Mori. Pointwise Prediction for Robust, Adaptable Japanese Morphological Analysis. In Proceedings of ACL, 2011.

Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. ASPEC: Asian Scientific Paper Excerpt Corpus.
In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (eds.), Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2016), pp. 2204-2208, Portoroz, Slovenia, May 2016. European Language Resources Association (ELRA). ISBN 978-2-9517408-9-1.

Jian Peng, Liefeng Bo, and Jinbo Xu. Conditional Neural Fields. In Proceedings of NIPS, 2009.

Tim Rocktaschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about Entailment with Neural Attention. In Proceedings of ICLR, 2016.

John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient Estimation Using Stochastic Computation Graphs. In Advances in Neural Information Processing Systems, pp. 3528-3536, 2015.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory Networks. In Proceedings of NIPS, 2015.

Shenlong Wang, Sanja Fidler, and Raquel Urtasun. Proximal Deep Structured Models. In Proceedings of NIPS, 2016.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory Networks. arXiv:1410.3916, 2014.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. arXiv preprint arXiv:1502.05698, 2015.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In Proceedings of ICML, 2015.

Lei Yu, Jan Buys, and Phil Blunsom. Online Segment to Segment Neural Transduction. In Proceedings of EMNLP, 2016.

Kai Zhao, Liang Huang, and Minbo Ma. Textual Entailment with Structured Attentions and Composition. In Proceedings of COLING, 2016.

A.1 SYNTACTIC ATTENTION

The syntactic attention layer (for tree transduction and natural language inference) is similar to the first-order graph-based dependency parser of Kipperwasser & Goldberg (2016). Given an input sentence [x_1, ..., x_n] and the corresponding word vectors [x_1, ..., x_n], we use a bidirectional LSTM to get the hidden states for each time step i in [1, ..., n],

    h_i^fwd = LSTM(x_i, h_{i-1}^fwd),    h_i^bwd = LSTM(x_i, h_{i+1}^bwd),    h_i = [h_i^fwd; h_i^bwd]

where the forward and backward LSTMs have their own parameters. The score for x_i -> x_j (i.e. x_i is the parent of x_j) is given by an MLP,

    theta_ij = tanh(s^T tanh(W_1 h_i + W_2 h_j + b))

These scores are used as input to the inside-outside algorithm (see Appendix B) to obtain the probability of each word's parent, p(z_ij = 1 | x), which is used to obtain the soft-parent c_j for each word x_j. In the non-structured case we simply have p(z_ij = 1 | x) = softmax(theta_ij).
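A sketch of the parent-scoring MLP above, assuming the BiLSTM states h_i are given; the parameter shapes are illustrative.

    import numpy as np

    def parent_scores(h, W1, W2, b, s):
        # h: (n, 2l) BiLSTM states; returns theta with theta[i, j] = score of x_i -> x_j,
        # i.e. theta_ij = tanh(s^T tanh(W1 h_i + W2 h_j + b)).
        a = h @ W1.T                                       # (n, m) parent contributions
        c = h @ W2.T + b                                   # (n, m) child contributions
        inner = np.tanh(a[:, None, :] + c[None, :, :])     # (n, n, m)
        return np.tanh(inner @ s)                          # (n, n) scores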
A.2 TREE TRANSDUCTION

Let [x_1, ..., x_n], [y_1, ..., y_m] be the sequence of source/target symbols, with the associated embeddings [x_1, ..., x_n], [y_1, ..., y_m] with x_i, y_j in R^l. In the simplest baseline model we take the source representation to be the matrix of the symbol embeddings. The decoder is a one-layer LSTM which produces the hidden states h'_j = LSTM(y_j, h'_{j-1}), with h'_j in R^l. The hidden states are combined with the input representation via a bilinear map W in R^{l x l} to produce the attention distribution used to obtain the vector m_j, which is combined with the decoder hidden state as follows:

    alpha_i = exp(x_i^T W h'_j) / \sum_{k=1}^{n} exp(x_k^T W h'_j),    m_j = \sum_{i=1}^{n} alpha_i x_i,    h^_j = tanh(U [m_j; h'_j])

Here we have W in R^{l x l} and U in R^{2l x l}. Finally, h^_j is used to obtain a distribution over the next symbol y_{j+1},

    p(y_{j+1} | x_1, ..., x_n, y_1, ..., y_j) = softmax(V h^_j + b)

For structured/simple attention, each source representation is augmented with its soft-parent,

    x~_j = [x_j; \sum_{k=1}^{n} p(z_kj = 1 | x) x_k]  (structured)    or    x~_j = [x_j; \sum_{k=1}^{n} softmax(theta_kj) x_k]  (simple)

where theta comes from the bidirectional LSTM described in A.1. Then alpha_i and m_j change accordingly:

    alpha_i = exp(x~_i^T W h'_j) / \sum_{k=1}^{n} exp(x~_k^T W h'_j),    m_j = \sum_{i=1}^{n} alpha_i x~_i

Note that in this case we have W in R^{2l x l} and U in R^{3l x l}. We use l = 50 in all our experiments. The forward/backward LSTMs for the parsing LSTM are also 50-dimensional. Symbol embeddings are shared between the encoder and the parsing LSTMs.

Additional training details include: batch size of 20; training for 13 epochs with a learning rate of 1.0, which starts decaying by half after epoch 9 (or the epoch at which performance does not improve on validation, whichever comes first); parameter initialization over a uniform distribution U[-0.1, 0.1]; gradient normalization at 1 (i.e. renormalize the gradients to have norm 1 if the l2 norm exceeds 1). Decoding is done with beam search (beam size = 5).

A.3 NEURAL MACHINE TRANSLATION

The baseline NMT system is from Luong et al. (2015). Let [x_1, ..., x_n], [y_1, ..., y_m] be the source/target sentence, with the associated word embeddings [x_1, ..., x_n], [y_1, ..., y_m]. The encoder is an LSTM over the source sentence, which produces the hidden states [h_1, ..., h_n] where

    h_i = LSTM(x_i, h_{i-1})

and h_i in R^l. The decoder is another LSTM which produces the hidden states h'_j in R^l. In the simple attention case with categorical attention, the hidden states are combined with the input representation via a bilinear map W in R^{l x l} and this distribution is used to obtain the context vector at the j-th time step,

    theta_i = h_i^T W h'_j,    c_j = \sum_{i=1}^{n} softmax(theta_i) h_i

The Bernoulli attention network has the same theta_i but instead uses a sigmoid to obtain the weights of the linear combination, i.e.

    c_j = \sum_{i=1}^{n} sigmoid(theta_i) h_i

And finally, the structured attention model uses a bilinear map to parameterize one of the unary potentials,

    theta_i(k) = { h_i^T W h'_j,  k = 1
                 { 0,             k = 0
    theta_{i,i+1}(z_i, z_{i+1}) = theta_i(z_i) + theta_{i+1}(z_{i+1}) + b_{z_i, z_{i+1}}

where b are the pairwise potentials. These potentials are used as inputs to the forward-backward algorithm to obtain the marginals p(z_i = 1 | x, q), which are further normalized to obtain the context vector:

    c_j = (lambda / gamma) \sum_{i=1}^{n} p(z_i = 1 | x, q) h_i,    gamma = \sum_{i=1}^{n} p(z_i = 1 | x, q)

We use lambda = 2 and also add an l2 penalty of 0.005 on the pairwise potentials b. The context vector is then combined with the decoder hidden state,

    h^_j = tanh(U [c_j; h'_j]),    p(y_{j+1} | x_1, ..., x_n, y_1, ..., y_j) = softmax(V h^_j + b)

The encoder/decoder LSTMs have 2 layers and 500 hidden units (i.e. l = 500).

Additional training details include: batch size of 128; training for 30 epochs with a learning rate of 1.0, which starts decaying by half after the first epoch at which performance does not improve on validation; dropout with probability 0.3; parameter initialization over a uniform distribution U[-0.1, 0.1]; gradient normalization at 1. We generate target translations with beam search (beam size = 5), and evaluate with multi-bleu.perl from Moses.^9

^9 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl

A.4 QUESTION ANSWERING

Our baseline model (MemN2N) is implemented following the same architecture as described in Sukhbaatar et al. (2015). In particular, let x = [x_1, ..., x_n] represent the sequence of n facts with the associated embeddings [x_1, ..., x_n] and let q be the embedding of the query q. The embeddings are obtained by simply adding the word embeddings in each sentence or query.
The full model with K hops is as follows:

    p(z_k = i | x, q) = softmax((x_i^k)^T q^k),    c^k = \sum_{i=1}^{n} p(z_k = i | x, q) o_i^k,    q^{k+1} = q^k + c^k,
    p(y | x, q) = softmax(W(q^K + c^K))

where p(y | x, q) is the distribution over the answer vocabulary. At each layer, {x_i^k} and {o_i^k} are computed using embedding matrices X^k and O^k. We use the adjacent weight tying scheme from the paper so that X^{k+1} = O^k, W^T = O^K. X^1 is also used to compute the query embedding at the first hop. For k = 1 we have x_i^k = x_i, q^k = q, c^k = 0.

For both the Unary and the Binary CRF models, the same input fact and query representations are computed (i.e. same embedding matrices with weight tying scheme). For the unary model, the potentials are parameterized as

    theta_k(i) = (x_i^k)^T q^k

and for the binary model the pairwise potentials are

    theta_{k,k+1}(i, j) = (x_i^k)^T q^k + (x_i^k)^T x_j^{k+1} + (x_j^{k+1})^T q^{k+1}

In the case of the Binary CRF, to discourage the model from selecting the same fact again we additionally set theta_{k,k+1}(i, i) = -inf for all i in {1, ..., n}. Given these potentials, we compute the marginals p(z_k = i, z_{k+1} = j | x, q) using the forward-backward algorithm, which is then used to compute the context vector:

    c = \sum_{z_1, ..., z_K} p(z_1, ..., z_K | x, q) f(x, z),    f(x, z) = \sum_{k=1}^{K} f_k(x, z_k),    f_k(x, z_k) = o_{z_k}^k

Note that if f(x, z) factors over the components of z (as is the case above) then computing c only requires evaluating the marginals p(z_k | x, q).

Finally, given the context vector, the prediction is made in a similar fashion to MemN2N:

    p(y | x, q) = softmax(W(q^K + c))

Other training setup is similar to Sukhbaatar et al. (2015): we use stochastic gradient descent with learning rate 0.01, which is divided by 2 every 25 epochs until 100 epochs are reached. Capacity of the memory is limited to 25 sentences. The embedding vectors are of size 20 and gradients are renormalized if the norm exceeds 40. All models implement position encoding, temporal encoding, and linear start from the original paper. For linear start, the softmax() function in the attention layer is removed at the beginning and re-inserted after 20 epochs for MemN2N, while for the CRF models we apply a log(softmax()) layer on the q after 20 epochs. Each model is trained separately for each task.
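For reference, a compact sketch of the K-hop MemN2N recursion described at the start of this appendix, with the adjacent weight tying left out for brevity (embeddings are passed in per hop).

    import numpy as np

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def memn2n_hops(x_embs, o_embs, q, K):
        # x_embs, o_embs: (K, n, d) input/output fact embeddings per hop;
        # q: (d,) query embedding. Returns final query and context vectors.
        c = np.zeros_like(q)
        for k in range(K):
            p = softmax(x_embs[k] @ q)        # p(z_k = i | x, q)
            c = p @ o_embs[k]                 # c^k = sum_i p(z_k = i) o_i^k
            q = q + c                         # q^{k+1} = q^k + c^k
        return q, c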
A.5 NATURAL LANGUAGE INFERENCE

Our baseline model/setup is essentially the same as that of Parikh et al. (2016). Let [x_1, ..., x_n], [y_1, ..., y_m] be the premise/hypothesis, with the corresponding input representations [x_1, ..., x_n], [y_1, ..., y_m]. The input representations are obtained by a linear transformation of the 300-dimensional pretrained GloVe embeddings (Pennington et al., 2014) after normalizing the GloVe embeddings to have unit norm.^10 The pretrained embeddings remain fixed but the linear layer (which is also 300-dimensional) is trained. Words not in the pretrained vocabulary are hashed to one of 100 Gaussian embeddings with mean 0 and standard deviation 1.

^10 We use the GloVe embeddings pretrained over the 840 billion word Common Crawl, publicly available at http://nlp.stanford.edu/projects/glove/

We concatenate each input representation with a convex combination of the other sentence's input representations (essentially performing inter-sentence attention), where the weights are determined through a dot product followed by a softmax,

    e_ij = f(x_i)^T f(y_j),    x-_i = [x_i; \sum_{j=1}^{m} (exp(e_ij) / \sum_{k=1}^{m} exp(e_ik)) y_j],    y-_j = [y_j; \sum_{i=1}^{n} (exp(e_ij) / \sum_{k=1}^{n} exp(e_kj)) x_i]

Here f(.) is an MLP. The new representations are fed through another MLP g(.), summed, combined with the final MLP h(.), and fed through a softmax layer to obtain a distribution over the labels l:

    x- = \sum_{i=1}^{n} g(x-_i),    y- = \sum_{j=1}^{m} g(y-_j),    p(l | x, y) = softmax(h([x-; y-]))

For structured/simple models, we first employ the bidirectional parsing LSTM (see A.1) to obtain the scores theta_ij. In the structured case, each word representation is simply concatenated with its soft-parent,

    x~_i = [x_i; \sum_{k=1}^{n} p(z_ki = 1 | x) x_k]

and x~_i (and analogously y~_j) is used as the input to the above model. In the simple case (which closely corresponds to the intra-sentence attention model of Parikh et al. (2016)), we have

    x~_i = [x_i; \sum_{k=1}^{n} (exp(theta_ki) / \sum_{l=1}^{n} exp(theta_li)) x_k]

All the MLPs have 2 layers, 300 ReLU units, and dropout probability of 0.2. The word embeddings for the parsing LSTMs are also initialized with GloVe, and the parsing layer is shared between the two sentences. The forward/backward LSTMs for the parsing layer are 100-dimensional.

Additional training details include: batch size of 32; training for 100 epochs with Adagrad (Duchi et al., 2011) where the global learning rate is 0.05 and the sum of gradient squared is initialized to 0.1; parameter initialization over a Gaussian distribution with mean 0 and standard deviation 0.01; gradient normalization at 5. In the pretrained scenario, pretraining is done with Adam (Kingma & Ba, 2015) with learning rate equal to 0.01, and beta_1 = 0.9, beta_2 = 0.999.

B FORWARD/BACKWARD THROUGH THE INSIDE-OUTSIDE ALGORITHM

Figure 6 shows the procedure for obtaining the parsing marginals from the input potentials. This corresponds to running the inside-outside version of Eisner's algorithm (Eisner, 1996). The intermediate data structures used during the dynamic programming algorithm are the (log) inside tables alpha and the (log) outside tables beta. Both alpha, beta are of size n x n x 2 x 2, where n is the sentence length. The first two dimensions encode the start/end index of the span (i.e. subtree). The third dimension encodes whether the root of the subtree is the left (L) or right (R) index of the span. The fourth dimension indicates if the span is complete (1) or incomplete (0). We can calculate the marginal distribution of each word's parent (for all words) in O(n^3) using this algorithm.

The backward pass through the inside-outside algorithm is slightly more involved, but still takes O(n^3) time. Figure 7 illustrates the backward procedure, which receives the gradient of the loss L with respect to the marginals, nabla_p^L, and computes the gradient of the loss with respect to the potentials, nabla_theta^L. The computations must be performed in the signed log-space semifield to handle the log of negative values. See Section 3.3 and Table 1 for more details.

Figure 6: Forward step of the syntactic attention layer to compute the marginals, using the inside-outside algorithm (Baker, 1979) on the data structures of Eisner (1996). We assume the special root symbol is the first element of the sequence, and that the sentence length is n. Calculations are performed in the log-space semifield with (+) = logadd and (x) = + for numerical precision. a, b <- c means a <- c and b <- c. a <-(+) b means a <- a (+) b.

[Figure 7 pseudocode for BACKPROPINSIDEOUTSIDE(theta, p, nabla_p^L) omitted: the listing backpropagates through the inside-outside recursions span by span in the signed log-space semifield; only its caption is reproduced below.]
Vf[u,t,L,1], V&[u,s,L,1] o v 8 [u,t,L,1] O a[u,s,L, 1] vVf[s,t, L,1] [s,t,L,1] for u = t,..., n do. Vf[s,u,L,1], V&[t,u,L,0] o v O [s,u,L,1] O a[t, u, L, 1] for u = 1,..., s - 1 do y[u,t,R, 0]a[u,s-1, R,1] 0ut Vf[u,t, R,0], V&[u,s-1, R,1], logVf[u,t]o v O Y y[u,t, L, 0] a[u, s-1, R,1] 0tu Vf[u,t, L, 0], V&[u,s-1, R,1], logV6[t,u] o v O Y v<V6[s,t,R,1] O [s,t,R,1] for u = 1,..., s do Vf[u,t, R, 1], V&[u,s, R, 0] o v 8 [u,t, R,1] a[u,s, R, 0] for u = t + 1,..., n do y[s,u, R,0] a[t+1,u,L,1] 0su Vf[s,u,R,0],V&[t+1, u, L,1],logV6[s,u]o v O Y y<[s,u,L,0] a[t+1,u,L,1] 0us Vf[s,u, L,0], V&[t+1, u, L,1],log V6[u,s] o v O y for k = n,..., 1 do. > Backpropagate througl for s = 1,..., n - k do ts+k v V[s,t,R,1] a[s,t,R,1] for u = s + 1,...,t do V&[u,t, R, 0], V&[u,t, R,1] o v 8 a[s,u, R, 0] O a[u,t, R, 1] if s > 1 then v V&[s,t,L,1] O a[s,t,L,1] for u = s,...,t- 1 do. V&[s,u, L,1], V&[u,t, L, 0] <o v O a[s,u, L,1] a[u,t, L, 0] vV&[s,t, L, 0] a[s,t, L, 0] for u = s,...,t - 1 do. y<a[s,u,R,1] a[u+1,t,L,1] 0ts V&[s,u,R,1], V&[u+1,t,L,1],logV6[t,s] o v O Y v V[s,t,R,0] 8 a[s,t,R,0] for u = s,...,t - 1 do. ya[s,u,R,1] a[u+1,t, L,1] 0st V&[s,u,R,1], V&[u+1,t,L,1], logV6[s,t] o v O Y return signexp log V > Exponentiate log gradient, multiply by sign, anc.\nFigure 7: Backpropagation through the inside-outside algorithm to calculate the gradient with respect to the input potentials. V denotes the Jacobian of a with respect to b (so V is the gradient with respect to ) a, b c means a a c and b b c."}] |
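Since a hand-written backward pass of this kind is easy to get wrong, a standard sanity check is a finite-difference test against the forward marginal computation. The sketch below does this for any marginal layer; using it with the linear-chain routine from Section 3.1 (segmentation_marginals) is an assumption standing in for the inside-outside case.

    import numpy as np

    def grad_check(marginal_fn, theta, dloss_dp, eps=1e-5):
        # Numerically estimate dL/dtheta given dL/dp, where p = marginal_fn(theta).
        # Any hand-derived BACKPROP* routine should match this to ~eps accuracy.
        grad = np.zeros_like(theta)
        it = np.nditer(theta, flags=['multi_index'])
        for _ in it:
            idx = it.multi_index
            theta[idx] += eps
            p_plus = marginal_fn(theta)
            theta[idx] -= 2 * eps
            p_minus = marginal_fn(theta)
            theta[idx] += eps                       # restore original value
            grad[idx] = np.sum(dloss_dp * (p_plus - p_minus)) / (2 * eps)
        return grad

For example, with a loss L = sum(w * p) for random weights w, grad_check(segmentation_marginals, theta, w) gives the reference gradient that the dynamic-programming backward pass must reproduce.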
END-TO-END LEARNABLE HISTOGRAM FILTERS

Rico Jonschkowski & Oliver Brock

ABSTRACT

Problem-specific algorithms and generic machine learning approaches have complementary strengths and weaknesses, trading off data efficiency and generality. To find the right balance between these, we propose to use problem-specific information encoded in algorithms together with the ability to learn details about the problem instance from data. We demonstrate this approach in the context of state estimation in robotics, where we propose end-to-end learnable histogram filters, a differentiable implementation of histogram filters that encodes the structure of recursive state estimation using prediction and measurement update but allows the specific models to be learned end-to-end, i.e. in such a way that they optimize the performance of the filter, using either supervised or unsupervised learning.

1 INTRODUCTION

Traditionally, computer scientists solve problems by designing algorithms. Recently, this practice has received competition from machine learning methods that automatically extract solutions from data. One example of this development is the field of computer vision, where the state of the art is based on deep neural networks rather than on human-designed algorithms (He et al. 2015). But these two approaches to problem solving (algorithms and learning) are not mutually exclusive; in fact, they can complement each other. Effective problem solving exploits all available information, whether it be encoded in algorithms or captured by data. This paper presents a step towards tightly combining these sources of information.

We demonstrate the combination of problem-specific algorithms with generic machine learning in the context of state estimation in robotics. The state estimation problem exhibits a clear algorithmic structure, captured in a provably optimal way by Bayes filters (Thrun et al. 2005). But the use of such a filter requires the specification of a motion model and a measurement model that is specific to a particular problem instance. We want to leverage the general knowledge captured in the Bayes filter, while extracting the instance-specific models from data using deep learning (Goodfellow et al. 2016). We achieve this by implementing a differentiable version of the histogram filter (a specific type of Bayes filter that represents probability distributions with histograms), including learnable motion and measurement models (see Fig. 1). With this implementation, we can learn these models end-to-end using backpropagation, while still taking advantage of the structure encoded in Bayes filters. Interestingly, this combination also enables unsupervised learning.

[Figure 1 diagram omitted; see caption.]
Figure 1: End-to-end learnable histogram filters. Models are learned; algorithmic structure is given.

Our contributions are both conceptual and technical. Our conceptual contribution is the principle of tightly combining algorithms and machine learning to balance data-efficiency and generality. Our technical contribution is the end-to-end learnable histogram filter, which enables the use of this Bayes filter variant in a more generic way. Our experiments show that our method is more data-efficient than generic neural networks, improves performance compared to standard histogram filters, and, most importantly, enables unsupervised learning of recursive state estimation loops.

Every information that is contained in the solution to a problem must either be provided as prior knowledge (prior for short) or learned from data. Different approaches balance these sources of information differently. In the classic approach to computer science, all required information is provided by a human (e.g. in the form of algorithms and models). In the machine learning approach, only a minimal amount of prior knowledge is provided (in the form of a learning algorithm) while most information is extracted from data (see Fig. 2). When trading off how much and which information should be provided as a prior versus learned from data, we should consider the entire spectrum rather than limit ourselves to these extremes.

[Figure 2 diagram omitted: a spectrum of approaches between relying on priors (classic computer science) and relying on data (machine learning).]

In the context of robotics, for example, it is clear that the left end of this spectrum will not enable intelligent robots, because we cannot foresee and specify every detail for solving a wide range of tasks in initially unknown environments. Robots need to collect data and learn from them. But if we go all the way to the right end of the spectrum, we need large amounts of data, which is very difficult to obtain in robotics where data collection is slow and costly. Luckily, robotic tasks include rich structure that can be used as a prior. Physics, for example, governs the interaction of any robot and its environment, and physics-based priors can substantially improve learning (Scholz et al. 2014; Jonschkowski & Brock 2015). But robotic tasks include additional structure that can be exploited.

Every algorithm that has proven successful in robotics implicitly encodes information about the structure of robotic tasks. We propose to use this robotics-specific information captured by robotic algorithms and combine it with machine learning to fill in the task-specific detail based on data. By tightly combining algorithms and machine learning, we can strike the right balance between generality and data-efficiency.

Algorithms and machine learning can be combined in different ways, using algorithms either 1) as fixed parts of solutions, 2) as parts of the learning process, or 3) as both. The first approach learns task-specific models in isolation and then combines them with algorithms in the solution. Examples for this approach are numerous, e.g. a Go player that applies a planning algorithm on learned models (Silver et al. 2016), a perception pipeline that combines the iterative closest point algorithm with learned object segmentation (Zeng et al. 2016), or robot control based on learned motion models (Nguyen-Tuong & Peters 2011).

The second approach uses algorithms as teachers to generate training data. With this data, we can learn a function that generalizes beyond the capabilities of the original algorithm or that can be fine-tuned to a specific problem instance.
For example, self-play in Go (using the algorithm as part of the solution) can be used to create new samples to learn from (Silver et al. 2016), training data for learning segmentation can be generated by simple algorithms such as background subtraction (Zeng et al. 2016), and reinforcement learning problems can be solved using training samples generated via trajectory optimization (Levine & Koltun 2013).

The third approach, the one that we are focusing on in this paper, uses the same algorithms in the learning process and in the solution. The main idea is to optimize the models for the algorithms that use them rather than learning them in isolation. To achieve this, the algorithms need to be differentiable, such that we can compute how changes in the model affect the output of the algorithm, which allows to train the models end-to-end. This idea has been applied to different algorithms, e.g. in the form of neural Turing machines (Graves et al. 2014) and neural programmer-interpreters (Reed & de Freitas 2015). In the context of robotics, Tamar et al. (2016) have presented a differentiable planning algorithm based on value iteration. And, most directly related to our work, Haarnoja et al. (2016) have applied this idea to Kalman filters, showing that measurement models based on visual input can be learned end-to-end as part of the filter. Our work differs from this by representing the belief with a histogram rather than a Gaussian, which allows to track multiple hypotheses, a necessity for many robotic tasks. Furthermore, we focus on tasks where the robot has information about its actions and learn both the measurement model and the motion model jointly. Our paper extends an earlier workshop submission (Jonschkowski & Brock 2016).

PRELIMINARIES: HISTOGRAM FILTERS AND OTHER BAYES FILTERS

A Bayes filter (Thrun et al. 2005) is an algorithm to recursively estimate a probability distribution over a latent state s (e.g. robot pose) conditioned on the history of observations o (e.g. camera images) and actions a (e.g. velocity commands). This posterior over states is also called the belief, Bel(s_t) = p(s_t | a_{1:t-1}, o_{1:t}). A histogram filter is a type of Bayes filter that represents the belief as a histogram: a discretization of the state space with one probability value per discrete state s. One of the key assumptions in Bayes filters is the Markov property of states, from which it follows that the current belief Bel(s_t) summarizes all information of the entire history of observations and actions that is relevant for predicting the future.

Other key assumptions determine how the belief is recursively updated using two alternating steps: the prediction step based on the last action a_{t-1} and the measurement update step based on the current measurement o_t. Note that these two sources of information are separated, which results from the assumption of conditional independence of observation and action given the state.

The prediction step assumes actions to change the state according to the known motion model p(s_t | s_{t-1}, a_{t-1}). After performing an action a_{t-1}, the new belief for a given state s_t is computed by summing over all possible ways through which state s_t could have come about,

    Bel^-(s_t) = \sum_{s_{t-1}} p(s_t | s_{t-1}, a_{t-1}) Bel(s_{t-1})     (1)

The measurement update step assumes observations to only depend on the current state as defined by a known measurement model p(o_t | s_t). After receiving an observation o_t, the belief for every state s_t is updated using Bayes' rule,

    Bel(s_t) ~ p(o_t | s_t) Bel^-(s_t)     (2)
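A minimal sketch of equations (1) and (2) for a discrete state space, with the motion and measurement models given as arrays; this is the classic histogram filter that the learnable version below makes differentiable.

    import numpy as np

    def bayes_filter_step(belief, motion_model, likelihood):
        # belief: (S,) Bel(s_{t-1}); motion_model: (S, S) with
        # motion_model[s_t, s_{t-1}] = p(s_t | s_{t-1}, a_{t-1});
        # likelihood: (S,) with likelihood[s_t] = p(o_t | s_t).
        predicted = motion_model @ belief          # prediction step, eq. (1)
        updated = likelihood * predicted           # measurement update, eq. (2)
        return updated / updated.sum()             # normalize (Bayes' rule)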
Apart from the assumptions already mentioned, learning explicit models allows us to restrict their hypothesis space according to assumptions (e.g. linear motion). Our goal is to train these models end-to-end such that we find the models that optimize state estimation performance, while preserving the useful assumptions of Bayes filters. Towards this end, we formulate the belief, the prediction, the measurement update, and the corresponding models in the deep learning framework.

An end-to-end learnable histogram filter (E2E-HF) is a differentiable implementation of a histogram filter that allows both motion model and measurement model to be learned end-to-end by backpropagation through time (Werbos, 1990). Alternatively, we can view the E2E-HF as a new recurrent neural network architecture that implements the structure of a histogram filter (see Fig. 3)."}, {"section_index": "4", "section_name": "5.1 END-TO-END LEARNING AND DIFFERENTIABILITY", "section_text": "If we want to use the structure of a histogram filter as a prior and fit the measurement model and the motion model to data, we can essentially do one of two things: a) learn the models in isolation to optimize a quality measure of the model or b) learn the models end-to-end, i.e. train the models as part of the entire system and optimize the end-to-end performance.

In either way, we might want to optimize the models using gradient descent, for example by computing the gradient of the learning objective with respect to the model parameters using backpropagation (repeated application of the chain rule). Therefore, the motion model and the measurement model need to be differentiable regardless of whether we choose option a) or option b). For b) end-to-end learning, we need to backpropagate the gradient through the histogram filter algorithm (not to change the algorithm but to compute how to change the models to improve the algorithm's output). Therefore, in addition to the models, the algorithm itself needs to be differentiable.

The remainder of this section describes how histogram filters can be implemented in a differentiable way and how they can be learned in isolation or end-to-end. To comply with the deep learning framework, we will define the E2E-HF using vector and matrix operations. We will also introduce additional priors for computational or data efficiency. For the sake of readability, we assume a one-dimensional state space here. All formulas can easily be adapted to higher dimensions.

The histogram over states is implemented as a vector $b_t$ of probabilities with one entry per bin, $b_t[i] = Bel(s_t = i)$.

We can also think of the belief as a neural network layer where the activation of each unit represents the value of a histogram bin. The belief $b_t$ constitutes the output of the histogram filter at the current step t and an input at the next step t+1, together with an action $a_t$ and an observation $o_{t+1}$ (see Fig. 3).

The most direct implementation of the prediction step (which we replace shortly) defines a learnable function f for the motion model, $f : (s_t, s_{t-1}, a_{t-1}) \mapsto p(s_t \mid s_{t-1}, a_{t-1})$, and employs f in the prediction step (Eq. 1).
The equation can be vectorized for computational efficiency by defining a $|S| \times |S|$ matrix $F$ with $F_{ij}(a) = f(i, j, a)$, such that $\overline{b}_t = F(a_{t-1})\, b_{t-1}$.

However, this approach is computationally expensive because it requires $|S|^2$ evaluations of f for a single prediction step. We can make this computation more efficient if we assume robot motion to be local and consistent across the state space, i.e.

$$p(s_t \mid s_{t-1}, a_{t-1}) = p(\Delta s_t \mid a_{t-1}), \quad \forall t: |\Delta s_t| \le k,$$

where $\Delta s_t = s_t - s_{t-1}$ and k is the maximum state change. Accordingly, we define a new learnable function for the motion model, $g : (\Delta s_t, a_{t-1}) \mapsto p(\Delta s_t \mid a_{t-1})$, and use g instead of f. For vectorization, we define a $(2k+1)$-dimensional vector $g(a)$, whose elements $g_i(a) = g(i - k - 1, a)$ represent the probabilities of all positive and negative state changes up to k. We can now reformulate the prediction step (Eq. 1) as a convolution ($*$),

$$\overline{b}_t = b_{t-1} * g(a_{t-1}),$$

where the belief $b_{t-1}$ is convolved with the motion kernel $g(a_{t-1})$ for action $a_{t-1}$ (see Fig. 3).

Figure 3: End-to-end learnable histogram filter. Motion model (purple) and measurement model (green) are learned; the algorithmic structure is given ($*$: convolution, $\odot$: element-wise multiplication)."}, {"section_index": "5", "section_name": "5.3.1 MOTION MODEL", "section_text": "The learnable motion model g can be implemented as any feedforward network that maps $\Delta s$ and a to a probability. The prior that $g(a)$ represents a probability mass function, i.e. that the elements of $g(a)$ should be positive and sum to one, can be enforced using the softmax nonlinearity on the unnormalized network outputs $\hat{g}_i(a)$, such that $g_i(a) = e^{\hat{g}_i(a)} / \sum_j e^{\hat{g}_j(a)}$.

Another useful prior for g is smoothness with respect to $\Delta s$ and a, i.e. that similar combinations of $\Delta s$ and a lead to similar probabilities. This smoothness is the reason why (for standard feedforward networks) we should use $\Delta s$ as an input rather than as index for different output dimensions. With additional knowledge about robot motion, we can replace smoothness by a stronger prior. For the experiments in this paper, we assumed linear motion with zero-mean Gaussian noise, and therefore defined the motion model with only two learnable parameters $\alpha$ and $\sigma$,

$$g(\Delta s, a) \propto \exp\left(-\frac{(\Delta s - \alpha a)^2}{2\sigma^2}\right).$$

Analogously to the motion model in the prediction step, we define a learnable function h that represents the measurement model for the measurement update, $h : (s_t, o_t) \mapsto p(o_t \mid s_t)$. To vectorize the update equation (Eq. 2), we define a vector $h(o)$ with elements $h_i(o) = h(i, o)$, such that the measurement update corresponds to element-wise multiplication ($\odot$) with this vector,

$$\hat{b}_t = h(o_t) \odot \overline{b}_t,$$

followed by a normalization, $b_t = \hat{b}_t / \sum_i \hat{b}_t[i]$ (see Fig. 3).

The learnable function h that represents the measurement model can again be implemented by any feedforward network. Since h corresponds to $p(o_t \mid s_t)$, a probability distribution over observations, it needs to be normalized across observations, not across states. To realize the correct normalization, we need to compute the unnormalized likelihood vector $h(o)$ for every observation o and compute the softmax over the corresponding scalars in different vectors rather than over the elements within each individual vector.

For the experiments in this paper, we represented h by a network with three hidden layers of 32 rectified linear units (Nair & Hinton, 2010), followed by a linear function and a normalization as described above. Using the observation and state as input rather than output dimensions again incorporates the smoothness prior on these quantities.
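The differentiable filter update then consists only of a convolution, an element-wise multiplication, and a normalization. The following numpy sketch illustrates this computation (names are ours; in practice the logits would come from the learnable networks g and h inside an autodiff framework so that gradients can flow through the update):

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def e2e_hf_step(belief, motion_logits, obs_logits, obs):
    """One E2E-HF update (Fig. 3): convolution, multiplication, normalization.

    belief        -- (S,) belief vector b_{t-1}
    motion_logits -- (2k+1,) unnormalized motion-network outputs for a_{t-1};
                     softmax turns them into the motion kernel g(a_{t-1})
    obs_logits    -- (S, O) unnormalized measurement-network outputs; softmax
                     over the observation axis gives h_i(o) = p(o | s = i)
    obs           -- index of the received observation o_t
    """
    g = softmax(motion_logits)                       # kernel sums to one
    predicted = np.convolve(belief, g, mode="same")  # prediction: b_{t-1} * g
    h = softmax(obs_logits, axis=1)[:, obs]          # likelihood vector h(o_t)
    updated = h * predicted                          # measurement update
    return updated / updated.sum()                   # final normalization

# Example with 100 states, k = 3, binary observations.
rng = np.random.default_rng(0)
belief = np.full(100, 0.01)
belief = e2e_hf_step(belief, rng.normal(size=7), rng.normal(size=(100, 2)), obs=1)
print(belief.sum())  # 1.0
```

Boundary bins lose a little probability mass under this convolution, which the final normalization absorbs.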
"}, {"section_index": "6", "section_name": "5.5 LEARNING", "section_text": "We can learn the motion model g and the measurement model h using different learning objectives based on different sequences of data. We will first look at a number of supervised learning objectives that require $o_{1:T}$, $a_{1:T}$, $s_{1:T}$, and sometimes $x_{1:T}$, the underlying continuous state. Then, we will describe unsupervised learning that only needs $o_{1:T}$ and $a_{1:T}$."}, {"section_index": "7", "section_name": "5.5.1 SUPERVISED LEARNING IN ISOLATION", "section_text": "Both models can be learned in isolation by optimizing an objective function, e.g. the cross-entropy between experienced state change / observation and the corresponding outputs of g and h,

$$\mathcal{L}_g = -\frac{1}{T} \sum_{t=2}^{T} e(\Delta s_t)^\top \log(g(a_{t-1})), \qquad \mathcal{L}_h = -\frac{1}{T} \sum_{t=1}^{T} e(s_t)^\top \log(h(o_t)),$$

where e(i) denotes a standard basis vector with all zeros except for a one at position i, that is the position that represents the experienced state change or observation."}, {"section_index": "8", "section_name": "5.5.2 SUPERVISED END-TO-END LEARNING", "section_text": "Due to our differentiable implementation, the models can also be learned end-to-end using backpropagation through time (Werbos, 1990), which we apply on several overlapping subsequences of length C (in our experiments, C = 32). In the corresponding learning objectives, we compare the belief at the final time step of this subsequence with the true state. If we want to optimize the accuracy of the filter with respect to its discrete states, we can again use a cross-entropy loss,

$$\mathcal{L}_{acc.} = -\frac{1}{T-C} \sum_{t=C+1}^{T} e(s_t)^\top \log\left(b_t^{(t-C:t)}\right),$$

where $b_t^{(t-C:t)}$ denotes the final belief at time step t when the histogram filter is applied on the subsequence that spans steps t-C to t. Alternatively, we might want to optimize other objectives, e.g. the mean square error with respect to the underlying continuous state,

$$\mathcal{L}_{mse} = \frac{1}{T-C} \sum_{t=C+1}^{T} \left(x_t - \bar{x}^\top b_t^{(t-C:t)}\right)^2,$$

where $\bar{x}$ denotes a vector of the continuous values to which the discrete states correspond, such that $\bar{x}^\top b_t^{(t-C:t)}$ is the expected continuous state under the belief."}, {"section_index": "9", "section_name": "5.5.3 UNSUPERVISED END-TO-END LEARNING", "section_text": "By exploiting the structure of the histogram filter algorithm and the differentiability, we can even train the models without any state labels by predicting future observations, but later use the models for state estimation. Similarly to supervised end-to-end learning, we apply the filter on different subsequences of length C, but then we follow this with D steps without performing the measurement update (in our experiments, D = 32). Instead, we use the measurement model to predict the observations, $\mathrm{Pred}(o_t) = \sum_{s_t} p(o_t \mid s_t)\, Bel(s_t) = h(o_t)^\top b_t$. To predict the probabilities for all observations, we define a matrix H with elements $H_{ij} = h(i, j)$ as defined in Section 5.4. Putting everything together, we get the following loss for unsupervised end-to-end learning:

$$\mathcal{L}_{unsup.} = -\frac{1}{(T-C)D} \sum_{t=C+1}^{T} \sum_{d=1}^{D} e(o_{t+d})^\top \log\left(H^\top b_{t+d}\right).$$
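A minimal sketch of how such a window-based loss can be computed (assuming precomputed motion kernels and likelihood vectors; helper names are ours):

```python
import numpy as np

def run_window(belief, kernels, likelihoods):
    """Apply C filter updates (prediction + measurement) on one subsequence."""
    for g, h in zip(kernels, likelihoods):
        belief = np.convolve(belief, g, mode="same")  # prediction step
        belief = h * belief                           # measurement update
        belief = belief / belief.sum()                # normalization
    return belief

def loss_acc(final_belief, true_state):
    """Cross-entropy of the final window belief against the true discrete state."""
    return -np.log(final_belief[true_state] + 1e-12)

def loss_mse(final_belief, x_bar, true_x):
    """Squared error between expected continuous state and ground truth."""
    return (true_x - x_bar @ final_belief) ** 2

# Example window of C = 3 steps over 10 states.
rng = np.random.default_rng(1)
kernels = [np.array([0.1, 0.8, 0.1])] * 3
likelihoods = [rng.uniform(0.1, 1.0, size=10) for _ in range(3)]
b = run_window(np.full(10, 0.1), kernels, likelihoods)
print(loss_acc(b, true_state=4), loss_mse(b, x_bar=np.arange(10.0), true_x=4.0))
```

In the paper these losses are minimized by backpropagation through time, so the gradient flows through every convolution and multiplication in the window.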
We consider the problem of learning to estimate the robot's state in unknown environments with partial observations. In this problem, we compare histogram filters for which the models are learned in isolation (HF), end-to-end learnable histogram filters (E2E-HFs), and two-layer long short-term memory networks (LSTMs, Hochreiter & Schmidhuber, 1997). The models of the HFs are learned by optimizing the loss functions $\mathcal{L}_g$ and $\mathcal{L}_h$ presented in the previous section. For the E2E-HFs and LSTMs, we compare end-to-end learning using $\mathcal{L}_{acc.}$, $\mathcal{L}_{mse}$, and $\mathcal{L}_{unsup.}$.

Our results show that 1) the algorithmic prior in HFs and E2E-HFs increases data efficiency for learning localization compared to generic LSTMs, 2) end-to-end learning improves the performance of E2E-HFs compared to HFs, and 3) E2E-HFs are able to learn state estimation without state labels.

An important state estimation problem in partially observable environments is localization: a robot moves through an environment by performing actions and receives partial observations, such that it needs to filter this information over time to estimate its state, i.e. its position. In our experiments, the robot does not know the environment beforehand and thus has to learn state estimation from data.

We performed experiments in two localization tasks: a) a hallway localization task (Thrun et al., 2005) and b) a drone localization task (see Fig. 4). The tasks are similar in that they have continuous actions and binary observations (door/wall and purple/white tile), both of which are subject to 10% random error. The tasks differ in their dimensionality. In the hallway task, the robot only needs to estimate a one-dimensional state (its position along the hallway), which for all methods is discretized into 100 states. The drone localization task has a two-dimensional state, which is discretized into 50 bins per dimension resulting in 2500 bins in total. The challenge in both tasks is that the door/tile locations, the scale of the actions, and the amount of random noise are unknown and need to be learned from data, i.e. a sequence of observations, actions, and (in the supervised setting) states produced by the robot moving randomly through the environment. More details about the tasks, the experimental setting, learning parameters, etc. can be found in Appendix A.

Figure 4: Randomly sampled environments per task. Motion and measurement models are unknown. (a) Hallway localization task, (b) drone localization task.

Figure 5: Hallway task, learning curves for different metrics: (a) mean squared error of estimating the continuous state (lower is better), (b) accuracy of estimating the discrete state (higher is better), (c) accuracy of predicting the next 32 observations (higher is better). The legend specifies both the architecture and the learning objective. Lines show means, shaded surfaces show standard errors. The dashed line highlights unsupervised learning (no state labels). LSTMs trained for state estimation cannot predict observations and therefore are not included in (c).

Hallway task: We performed multiple experiments in the hallway localization task with different amounts of training data. The learning curves with respect to mean squared error for supervised learning show large differences in data efficiency (see solid lines in Fig. 5a): E2E-HFs require substantially fewer training samples than LSTMs to achieve good performance (2000 rather than > 8000). HFs are even more data-efficient but quickly stop improving with additional data.

Discussion: The priors encoded in the E2E-HF improve data efficiency because any information contained in these priors does not need to be extracted from data. This leads to better generalization, e.g. the ability to robustly and accurately track multiple hypotheses (see Fig. 6).

Note on computational limits: The size of the histogram is exponential in the number of state dimensions. A comparison between the 1D and the 2D task suggests that data might not be the bottleneck for applying the method to higher dimensional problems, since the data requirements were similar. However, the increased histogram size directly translates into longer training times such that computation quickly becomes the bottleneck for scaling this method to higher-dimensional problems. Addressing this problem will require changing the belief representation, e.g. to particles or a mixture of Gaussians, which is an important direction for future work.
Drone task: For the drone localization task, we performed an experiment using 4000 training steps (see Table 1). Our results show that this data is sufficient for the E2E-HF (but not for the LSTM) to achieve good performance. Our method only required a similar amount of data as for the 1D hallway task, even though the histogram size had increased from 100 to 2500 bins.

Table 1: Drone task: test performance of different methods with 4000 training samples."}, {"section_index": "10", "section_name": "6.3 RESULTS: OPTIMIZATION OF END-TO-END PERFORMANCE", "section_text": "Hallway task: While HFs excel with very few data, E2E-HFs surpass them if more than 2000 training samples are available (see gray and yellow lines in Fig. 5a). For the mean squared error metric, the best method is the E2E-HF with a mean squared error objective (yellow line). However, if we care about a different metric, e.g. accuracy of estimating the discrete state, the methods rank differently (see Fig. 5b). The best method for the previous metric (yellow line) is outperformed by HFs (gray line) and even more so by E2E-HFs that are optimized for accuracy (teal line). For yet another metric, i.e. accuracy of predicting future observations, HFs outperform both other approaches but are equal to E2E-HFs optimized for predicting future observations (see Fig. 5c).

Drone task: The results of the drone localization task show the same pattern (see Table 1). The best method for every metric is the E2E-HF that optimizes this metric.

Discussion: E2E-HFs perform better than HFs because they optimize the models for the filtering process (with respect to the metric they were trained for) rather than optimizing model accuracy. This can be advantageous because "inaccurate" models can improve end-to-end performance (compare the HF model learned in isolation to the models learned end-to-end in Fig. 6a).

Hallway and drone tasks: In both tasks, unsupervised E2E-HFs were similar to HFs and better than all other methods for predicting future observations. Interestingly, they also had comparatively low mean squared error for state estimation even though they had never seen any state labels (see dashed green line in Fig. 5 and second line in Table 1). In fact, the qualitative results for both tasks show a remarkable similarity between the learned models and the estimated beliefs of HFs and unsupervised E2E-HFs (compare HF and E2E-HF (unsup.) in Fig. 6 and Fig. 7).

Discussion: E2E-HFs can learn state estimation purely based on observations and actions. By predicting future observations using the structure of the histogram filter algorithm, the method discovers a state representation that works well with this algorithm, which is surprisingly close to the "correct" models learned by HFs, although no state labels are used.
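The mechanism credited here, predicting future observations through the filter's own structure, can be sketched as follows (a minimal numpy illustration with hypothetical names, not the authors' code):

```python
import numpy as np

def predict_future_observations(belief, kernels, H):
    """Predict D future observation distributions using only prediction steps.

    belief  -- (S,) belief after the C-step filtering window
    kernels -- list of D motion kernels g(a_{t+d}) for the known future actions
    H       -- (S, O) table with H[i, j] = h(i, j) = p(o = j | s = i)
    Returns a (D, O) array whose row d is Pred(o_{t+d}) = H^T b_{t+d}.
    """
    preds = []
    for g in kernels:
        belief = np.convolve(belief, g, mode="same")  # no measurement update
        belief = belief / belief.sum()
        preds.append(H.T @ belief)                    # sum_s p(o | s) Bel(s)
    return np.array(preds)

# The unsupervised loss is then the cross-entropy between these predictions
# and the observations that actually occurred:
# L = -mean(log(preds[d, observed[d]])) over the D prediction steps.
```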
"}, {"section_index": "11", "section_name": "7 CONCLUSION", "section_text": "We proposed to tightly combine prior knowledge captured in algorithms with the ability to learn from data. We demonstrated the feasibility and the advantages of this idea in the context of state estimation in robotics. Algorithmic priors lead to data-efficient learning, as knowledge about the problem structure encoded in the algorithm is provided explicitly and does not have to be extracted from data. The ability to learn from data enables the use of algorithms when task-specifics are unknown. The tight combination of both improves performance as the models are optimized for use in the algorithm. Furthermore, the explicit algorithmic structure enables unsupervised learning. We view our results as a proof of concept and are convinced that the combination of algorithms and machine learning will help solve novel problems, while balancing data efficiency and generality."}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "We gratefully acknowledge the funding provided by the Alexander von Humboldt foundation and the Federal Ministry of Education and Research (BMBF).

Figure 6: Hallway navigation task: (a-b) learned models for one environment (D = door state) and (c) belief evolution for a single test run in this environment. All methods used 4000 training samples. (a) Learned measurement models, (b) learned motion models (for actions -1.0, 0.0, 1.0), (c) belief over time during a test run; the true trajectory is marked by black dots.

Figure 7: Drone localization task: belief evolution during single test run for different methods. Black dots/lines show the true position/trajectory of the drone. All methods used 4000 training samples."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, et al. Lasagne: First release. August 2015.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. 2016.

Tuomas Haarnoja, Anurag Ajay, Sergey Levine, and Pieter Abbeel. Backprop KF: Learning Discriminative Deterministic State Estimators. arXiv preprint arXiv:1605.07148, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Rico Jonschkowski and Oliver Brock. Learning state representations with robotic priors. Autonomous Robots, 39(3):407-428, July 2015.

Rico Jonschkowski and Oliver Brock. Towards Combining Robotic Algorithms and Machine Learning: End-To-End Learnable Histogram Filters. In Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics 2016, Daejeon, South Korea, October 2016.

Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.

Duy Nguyen-Tuong and Jan Peters. Model learning for robot control: a survey. Cognitive Processing, 12(4):319-340, April 2011.
E. Rohmer, S. P. N. Singh, and M. Freese. V-REP: a Versatile and Scalable Robot Simulation Framework. In Proc. of The International Conference on Intelligent Robots and Systems (IROS), 2013.

Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value Iteration Networks. arXiv:1602.02867 [cs, stat], February 2016.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.

S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, Cambridge, MA, 2005.

Andy Zeng, Kuan-Ting Yu, Shuran Song, Daniel Suo, Ed Walker Jr., Alberto Rodriguez, and Jianxiong Xiao. Multi-view Self-supervised Deep Learning for 6d Pose Estimation in the Amazon Picking Challenge. arXiv:1609.09475 [cs], September 2016."}, {"section_index": "14", "section_name": "A.1 HALLWAY LOCALIZATION TASK", "section_text": "The hallway has a length of 10 meters, where every full meter is either occupied by a door or by a wall. At the beginning of every experiment trial, 5 doors are randomly arranged in the 10 spots in the hallway. The binary observation of the robot senses whether the center of the robot is next to a door or next to a wall. With probability 0.1, the observation returns the wrong information, e.g. "wall" instead of "door" if the robot is next to a door.

The robot is represented as a single point. It moves with a velocity between -1 and 1 meter per time step and stops when it reaches either end of the hallway. The action information that the robot receives is the step that it performed as measured by odometry. This odometry measurement is corrupted with zero-mean Gaussian noise with standard deviation of 10% of its actual movement. Additionally, the odometry is scaled by a number between 0.5 and 5.0, which is randomly sampled at the beginning of every trial, i.e. the robot does not know its exact embodiment. This makes the exact motion model unknown, such that the robot needs to learn it from data.

Both during training and during testing the robot moves randomly, i.e. it randomly accelerates by a value between -0.5 and 0.5 at each time step. Apart from this acceleration, its velocity is affected by 10% friction at each time step and is set to zero when the robot reaches either end of the hallway. For each trial, the training data consists of a single random walk of the robot of length between 500 steps and 8000 steps. The data for unsupervised learning includes only the sequence of noisy observations and actions. For supervised learning, it additionally includes the ground-truth continuous and discrete state, i.e. the position of the robot.
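A small simulator matching this description might look as follows (a sketch under the stated assumptions; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hallway():
    """10 m hallway with 5 doors randomly placed among 10 one-meter spots."""
    doors = np.zeros(10, dtype=bool)
    doors[rng.choice(10, size=5, replace=False)] = True
    return doors

def observe(doors, position):
    """Binary door/wall observation, flipped with probability 0.1."""
    true_obs = bool(doors[int(np.clip(position, 0.0, 9.99))])
    return true_obs if rng.random() >= 0.1 else not true_obs

def odometry(true_step, scale):
    """Measured step: corrupted by 10% zero-mean Gaussian noise, then scaled
    by the per-trial factor (sampled once from [0.5, 5.0])."""
    return scale * (true_step + rng.normal(0.0, 0.1 * abs(true_step)))

doors = make_hallway()
scale = rng.uniform(0.5, 5.0)
position, velocity = 5.0, 0.0
for _ in range(5):
    velocity = 0.9 * velocity + rng.uniform(-0.5, 0.5)  # 10% friction + accel.
    step = np.clip(position + velocity, 0.0, 10.0) - position
    position += step
    print(observe(doors, position), odometry(step, scale))
```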
"}, {"section_index": "15", "section_name": "A.2 DRONE LOCALIZATION TASK", "section_text": "The area for the drone localization task has a size of 5 times 5 meters, where every one-meter tile is either purple or white. At the beginning of every experiment, the color of each tile is decided by a fair coin flip. Analogously to the hallway task, the binary observations inform the robot about the color of the tile which is directly underneath it. With probability 0.1, this observation returns the wrong color.

The drone is represented as a single point in 2D space. It moves with velocities between -0.5 and 0.5 meters per time step and stops when it reaches the boundary of the area. The other aspects of its movement, the noisy odometry, and the movement generation for training and test data are analogous to the hallway localization task."}, {"section_index": "16", "section_name": "A.4 SOFTWARE", "section_text": "The test data consisted of 1000 short time sequences of the robot moving in the same fashion, starting from a random position. For all performance metrics, the belief was tracked for 32 steps. For the metric that measured observation prediction accuracy, the task was to predict 32 future observations given a sequence of 32 actions based on the current belief.

Training procedure: All methods were trained via minibatch stochastic gradient descent with batch size 32 using Adam (Kingma & Ba, 2014) with learning rate 0.001. The training length was determined using early stopping with patience, where 20% of the training data was used for validation. After 100 epochs without an improvement on the validation data, the parameters that achieved the highest validation performance were returned."}]
BkSmc8qll
[{"section_index": "0", "section_name": "DYNAMIC NEURAL TURING MACHINE WITH CONTINUOUS AND DISCRETE ADDRESSING SCHEMES", "section_text": "Caglar Gulcehre*, Sarath Chandar*, Kyunghyun Cho, Yoshua Bengio"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recently, two promising approaches based on neural networks have been proposed for this type of task. Memory networks (Weston et al., 2015b) explicitly store all the facts, or information, available for each episode in an external memory (as continuous vectors) and use an attention-based mechanism to index them when returning an output. On the other hand, neural Turing machines (NTM; Graves et al., 2014) read each fact in an episode and decide whether to read the fact, write it to the external, differentiable memory, or do both.

A crucial difference between these two models is that the memory network does not have a mechanism to modify the content of the external memory, while the NTM does. In practice, this leads to easier learning in the memory network, which in turn resulted in it being used more in real tasks (Bordes et al., 2015; Dodge et al., 2015). On the contrary, the NTM has mainly been tested on a series of small-scale, carefully-crafted tasks such as copy and associative recall. The NTM, however, is more expressive, precisely because it can store and modify the internal state of the network as it processes an episode.

The original NTM supports two modes of addressing (which can be used simultaneously). They are content-based and location-based addressing. We notice that the location-based strategy is based on linear addressing: the distance between each pair of consecutive memory cells is fixed to a constant. We address this limitation, in this paper, by introducing a learnable address vector for each memory cell of the NTM with a least recently used memory addressing mechanism, and we call this variant a dynamic neural Turing machine (D-NTM).

We evaluate the proposed D-NTM on the full set of Facebook bAbI tasks (Weston et al., 2015b) using either continuous, differentiable attention or discrete, non-differentiable attention (Zaremba & Sutskever, 2015) as an addressing strategy. Our experiments reveal that it is possible to use the discrete, non-differentiable attention mechanism, and in fact, the D-NTM with the discrete attention and GRU controller outperforms the one with the continuous attention. After we published our paper on arXiv, a new extension of NTM called DNC (Graves et al., 2016) has also provided results on the bAbI task."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper, we extend the neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two separate vectors, content and address vectors. This allows the D-NTM to learn a wide variety of location-based addressing strategies, including both linear and nonlinear ones. We implement the D-NTM with both continuous, differentiable and discrete, non-differentiable read/write mechanisms. We investigate the mechanisms and effects of learning to read and write to a memory through experiments on Facebook bAbI tasks using both a feedforward and a GRU controller. The D-NTM is evaluated on a set of Facebook bAbI tasks and shown to outperform NTM and LSTM baselines.
We also provide further experimental results on sequential MNIST, associative recall and copy tasks.

Designing general-purpose learning algorithms is one of the long-standing goals of artificial intelligence. Despite the success of deep learning in this area (see, e.g., Goodfellow et al., 2016), there is still a set of complex tasks that are not well addressed by conventional neural networks. Those tasks often require a neural network to be equipped with an explicit, external memory in which a larger, potentially unbounded set of facts need to be stored. They include, but are not limited to, episodic question-answering (Weston et al., 2015b; Hermann et al., 2015; Hill et al., 2015), compact algorithms (Zaremba et al., 2015), dialogue (Serban et al., 2016; Vinyals & Le, 2015) and video caption generation (Yao et al., 2015).

We also provide results on sequential-MNIST and algorithmic tasks proposed by Graves et al. (2014) in order to investigate the ability of our model when dealing with long-term dependencies."}, {"section_index": "3", "section_name": "Our Contributions", "section_text": "1. We propose a generalization of the Neural Turing Machine called a dynamic neural Turing machine (D-NTM) which employs a learnable, location-based addressing scheme.

2. We demonstrate the application of neural Turing machines on a more natural and less toyish task, episodic question-answering, besides the toy tasks.
We provide detailed analysis of our model on this task. 3. We propose to use the discrete attention mechanism and empirically show that, it can outperform. the continuous attention based addressing for episodic QA task.. 4. We propose a curriculum strategy for our model with the feedforward controller and discrete. attention that improves our results significantly..\nM = [A;C]]\nThe first part A E RNda is a learnable address matrix, and the second C E RNdc a content matrix. In other words, each memory cell m, is now.\nm; = [a;; ci].\nMemory addressing in the D-NTM is equivalent to computing an N-dimensional address vector. The D NTM computes three such vectors for respectively reading wt E RN, erasing et E Rdc and writing ut E RN. Specifically for writing, the controller further computes a candidate memory content vector ct E\nStory Controller Memory Address 1 Content Address 2 Content Fact t-1 Address 3 Content Address 4 Content Fact t Writer Address 5 Content Address 6 Content Question Address 7 Content Reader Content Answer\nRdc based on its current hidden state of the controller ht E Rdn and the input of the controller scaled with. a scalar gate at which is a function of the hidden state and the input of the controller as well. see Ean 4\nt=(wt)Mt-1\nCt[j] = (1- etu{) O Ct-1[j] + u,ct\nwhere the subscript j in Ct[i] denotes the j-th row of the content part Ct of the memory matrix M\nNo Operation (NOP) As found in (Joulin & Mikolov, 2015), an additional NOP action might be beneficial for the controller not to access the memory once in a while. We model this situation by designating one memory cell as a NOP cell. Reading or writing from this memory cell is ignored.\nOnce the proposed D-NTM is executed, it returns the output distribution p(y|x1, ..., xT). As a result we define a cost function as the negative log-likelihood:.\nFigure 1: A graphical illustration of the proposed dynamic neural Turing machine with the recurrent-controller. The controller receives the fact as a continuous vector encoded by a recurrent neural network, computes the read and write weights for addressing the memory. If the D-NTM automatically detects that a query has been received, it returns an answer and terminates.\nt = f ct = ReLU(Wmht + a'Wxx + bm)\nReading With the read vector wt, the content vector read from the memory Rda+dc is retrieved\nN 1 C(0) = - logp(yn|x,...,x) N n=1\nwhere 0 is a set of all the parameters. As the proposed D-NTM, just like the original NTM, is fully. end-to-end differentiable, we can compute the gradient of this cost function by using backpropagation and learn the parameters of the model with a gradient-based optimization algorithm, such as stochastic. gradient descent, to train it end-to-end.."}, {"section_index": "7", "section_name": "3.1 ADDRESS VECTORS", "section_text": "Each of the address vectors (both read and write) is computed in the same way. The way they are computed are very similar to the content based addressing in (Graves et al., 2014). First, the controller. computes a key vector:\nkt =Wht+ bk\nThe address vector is then computed by\nz= tS(k, m) exp(z) , exp\nexl\nwhere the similarity function S E R0 is defined as\nAt each time-step, controller may require more than one-step for accessing to the memory. The origina NTM addresses this by implementing multiple sets of read, erase and write heads. 
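Before turning to multi-step addressing, the single-step read and write mechanics defined in Sections 2.3 and 3.1 can be summarized in a short numpy sketch (variable names and the demo values are ours, not the authors' code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def address(memory, key, beta, eps=1e-7):
    """Content-based address vector over N cells: softmax of the sharpened
    similarity between the key k_t and each memory row m_i."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + eps)
    return softmax(beta * sims)

def read(memory, w):
    """phi_t = w_t^T M_{t-1}: content read with the read weights w."""
    return memory.T @ w

def write(content, u, erase, candidate):
    """C_t[j] = (1 - e_t u_t[j]) o C_{t-1}[j] + u_t[j] cbar_t for every row j."""
    return (1.0 - np.outer(u, erase)) * content + np.outer(u, candidate)

# Demo: N = 8 cells, d_a = 2 address dims (learned, fixed at inference),
# d_c = 3 content dims (reset to zero at the start of each episode).
rng = np.random.default_rng(0)
A, C = rng.normal(size=(8, 2)), np.zeros((8, 3))
u = address(np.hstack([A, C]), key=rng.normal(size=5), beta=2.0)  # write weights
C = write(C, u, erase=np.full(3, 0.5), candidate=rng.normal(size=3))
M = np.hstack([A, C])
phi = read(M, address(M, key=rng.normal(size=5), beta=2.0))       # read step
print(phi.shape)  # (5,) = d_a + d_c
```

Because the address half A never changes within an episode, the same key can reliably retrieve a cell by location even after its content half has been overwritten.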
In this paper, we explore an option of allowing each head to operate more than once at each time step, similar to the multi-hop mechanism from the end-to-end memory network (Sukhbaatar et al., 2015)."}, {"section_index": "8", "section_name": "3.3 DYNAMIC LEAST RECENTLY USED ADDRESSING", "section_text": "We introduce a memory addressing schema that can learn to put more emphasis on the least recentl used (LRU) memory locations. As observed in (Santoro et al., 2016; Rae et al., 2016), we find it easie to learn the write operations with the use of LRU addressing.\nTo learn a LRU based addressing, first we compute the exponentially moving averages of the logits (zt as Vt, Vt = 0.1vt-1 + O.9zt. We rescale the accumulated vt with Yt, such that the controller adjusts the influence of how much previously written memory locations should effect the attention weights of a particular time-step. Next, we subtract vt from zt in order to reduce the weights of previously read or written memory locations. /t is a shallow MLP with a scalar output and it is conditioned or the hidden state of the controller. yt is parametrized with the parameters u, and by,\nYt = sigmoid(u' ht + 1 Wt = softmax(Zt - YtVt-1\nThis addressing method increases the weights of the least recently used rows of the memory. The magnitude of the influence of the least-recently used memory locations is being learned and adjusted with Yt. Our LRU addressing is dynamic due to the model's ability to switch between pure content-based addressing and LRU. During the training, we do not backpropagate through vt. Due to the dynamic nature of this addressing mechanism, it can be used for both read and write operations. If needed the model will automatically learn to disable LRU while reading from the memory.\nIn this section, we describe the discrete attention based addressing strategy\nwhere Wk E RN(da+dc) and bk E Rda+dc if the read head is being computed, otherwise. W E RN xde and b E IRde if the write head weights are being computed. They can be the parameters. for a specific head (either read or write.) Also, the sharpening factor t E R1 is computed as:\nsoftplus(x) = log(exp(x) + 1) t = softplus(uht + b) + 1\nt = softplus(u ht + bs) + 1\n= tS(kt,mt)\nx:y S(x,y) = (xyl+ e)\nDiscrete Addressing Let us use w to denote an address vector (either read, write or erase) at time. t. By definition in Eq. (1O), every element in this address vector is positive and sums up to one. In other words, we can treat this vector as the probabilities of a categorical distribution C(w) with dim(w choices:\nWk=I(k=j)\nwhere j ~ C(w), and I is an indicator function.\nTraining. We use this sampling-based strategy for all the heads during training. This clearly makes. the use of backpropagation infeasible to compute the gradient, as the sampling procedure is not. differentiable. Thus, we use REINFORCE (Williams, 1992) together with the three variance reduction techniques-global baseline, input-dependent baseline and variance normalization- suggested in (Mnih. & Gregor, 2014).\nLet us define R(x) = log as a reward. We first center and re-scale the reward by\nR(x)R(x)-b(x)\nfor [x] 6] Hs(x) s(2[x-0), Otherwise,\nThen, the cost function for each training example is approximated as\nwhere J is the number of addressing steps, A is the entropy regularization coefficient, and H denotes the entropy.\nInference Once training is over, we switch to a deterministic strategy. 
We simply choose an elemen of w with the largest value to be the index of the target memory cell, such that.\nWe can rewrite the weights wz as in Equation 14, where it is expressed as the combination of continuous attention weights w' and discrete attention weights w' with t being a binary variable that chooses to use one of them.\n+1-wt\n= Wj\nwhere w; is the j-th element of w. We can readily sample from this categorical distribution and form an one-hot vector w such that.\nR(x) - b R(x) +e\nwhere b(x) is computed by a baseline network which takes as input x and predicts its estimated reward The baseline network is trained to minimize the Huber loss (Huber, 1964) between the true reward R(x)* and the predicted reward b(x). We use the Huber loss, which is defined by\ndue to its robustness. As a further measure to reduce the variance, we regularize the negative entropy of all those category distributions to facilitate a better exploration during training (Xu et al.. 2015)\n0) = - logp(y[X1:T, W1:J, U1:J, e1:J ->R(xn)(logp(w;|x1:r) + log p(uj|x1:r) + log p(ej|x1:T) j=1 AH `(H(w;|x1:T) + H(uj|X1:T) + H(e;|X1:T))\n-R(xn)(logp(w;|x1:r) + logp(uj|x1:r) +logp(ej|x1:T) j=1 J H (H(w;|x1:T) +H(uj|X1:T) +H(ej|X1:T)). j=1\nCurriculum Learning for the Discrete Attention. Training discrete attention with feed-forward. controller and REINFORCE is challenging. We propose to use a curriculum strategy for training with the discrete attention in order to tackle this problem. For each minibatch, we sample from a binomial distribution with the probability p', t ~ Bin(pt). The model will either use the discrete or the continuous-attention based on the t. We start the training procedure with po = 1 and during. the training p' is annealed to 0 by setting pt =. p0\nBy using this curriculum learning strategy, at the beginning of the training, the model learns to use the memory mainly with the continuous attention. As we anneal the pt, the model will rely more on the discrete attention.\nWhen the controller of D-NTM is a powerful recurrent neural network, it is important to regularize training of the D-NTM so as to avoid suboptimal solutions in which the D-NTM ignores the memory. and works as a simple recurrent neural network.\nRead-Write Consistency Regularizer One such suboptimal solution we have observed in our. preliminary experiments with the proposed D-NTM is that the D-NTM uses the address part A of. the memory matrix simply as an additional weight matrix, rather than as a means to accessing the content part C. We found that this pathological case can be effectively avoided by encouraging the read. head to point to a memory cell which has also been pointed by the write head. This can be implemented as the following regularization term:.\nIn the equations above, u is the write and w is the read weights\nNext Input Prediction as Regularization Temporal structure is a strong signal that should be exploited by the controller based on a recurrent neural network. We exploit this structure by letting the controller predict the input in the future. We maximize the predictability of the next input by the controller during training. This is equivalent to minimizing the following regularizer:\nA recurrent neural network (RNN), which is used as a controller in the proposed D-NTM, has an implicit memory in the form of recurring hidden states. Even with this implicit memory, a vanilla. RNN is however known to have difficulties in storing information for long time-spans (Bengio et al.,. 1994; Hochreiter, 1991). 
Long short-term memory (LSTM, (Hochreiter & Schmidhuber, 1997)) and. gated recurrent units (GRU, (Cho et al., 2014)) have been found to address this issue. However all these models based solely on RNNs have been found to be limited when they are used to solve, e.g.,. algorithmic tasks and episodic question-answering.\nIn addition to the finite random access memory of the neural Turing machine, based on which the D-NTM is designed, other data structures have been proposed as external memory for neural networks In (Sun et al., 1997; Grefenstette et al., 2015; Joulin & Mikolov, 2015), a continuous, differentiable stack was proposed. In (Zaremba et al., 2015; Zaremba & Sutskever, 2015), grid and tape storages. are used. These approaches differ from the NTM in that their memory is unbounded and can grow. indefinitely. On the other hand, they are often not randomly accessible..\nMemory networks (Weston et al., 2015b) form another family of neural networks with external memory In this class of neural networks, information is stored explicitly as it is (in the form of its continuous representation) in the memory, without being erased or modified during an episode. Memory networks and their variants have been applied to various tasks successfully (Sukhbaatar et al., 2015; Bordes et al. 2015; Dodge et al., 2015; Xiong et al., 2016). Miller et al. (2016) have also independently proposed the idea of having separate key and value vectors for memory networks.\nAnother related family of models is the attention-based neural networks. Neural networks witl continuous or discrete attention over an input have shown promising results on a variety oi challenging tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chorowski et al., 2015), machine reading comprehension (Hermann et al., 2015) anc image caption generation (Xu et al., 2015).\nT Rrw(w,u)=X|1- up)'w? t'=1 t=1\nRpred(W) = - logp(ft+1|ft, wt, Ut, Mt; W))\nwhere ft is the current input and ft+1 is the input at next timestep. We found this regularizer to be effective in our preliminary experiments and use it for bAbI tasks.\nThe latter two, the memory network and attention-based networks, are however clearly distinguishabl from the D-NTM by the fact that they do not modify the content of the memory.."}, {"section_index": "9", "section_name": "7 EXPERIMENTS", "section_text": "We provide experimental results to demonstrate the abilities of our model, first on Facebook bAb task (Weston et al.. 2015a). We give detailed analysis and experimental results on this task. We als. compare different variations of NTM on bAbI tasks. We have performed experiments on sequentia permuted MNIST (Le et al., 2015) and on toy tasks to compare other published models on these task with a recurrent controller. The details of our experiments are provided in the supplementary materia"}, {"section_index": "10", "section_name": "7.1 EPISODIC OUESTION-ANSWERING: BABI TASKS", "section_text": "In this section, we evaluate the proposed D-NTM on the recently proposed episodic question-answering task called Facebook bAbI. We use the dataset with 10k training examples per sub-task provided by. Facebook.1 For each episode, the D-NTM reads a sequence of factual sentences followed by a question. all of which are given as natural language sentences. The D-NTM is expected to store and retrieve relevant information in the memory in order to answer the question based on the presented facts. 
Exact implementation details and hyper-parameter settings are provided in the appendix.."}, {"section_index": "11", "section_name": "7.1.1 GOALS", "section_text": "The goal of this experiment is three-fold. First, we present for the first time the performance of a memory-based network that can both read and write dynamically on the Facebook bAbI tasks2. We aim to understand whether a model that has to learn to write an incoming fact to the memory, rather thar storing it as it is, is able to work well, and to do so, we compare both the original NTM and proposec D-NTM against an LSTM-RNN."}, {"section_index": "12", "section_name": "7.1.2 RESULTS AND ANALYSIS", "section_text": "In Table 1, we first observe that the NTMs are indeed capable of solving this type of episodic question-answering better than the vanilla LSTM-RNN. Although the availability of explicit memory. in the NTM has already suggested this result, we note that this is the first time neural Turing machines have been used in this specific task.\nAll the variants of NTM with the GRU controller outperform the vanilla LSTM-RNN. However, not all of them perform equally well. First, it is clear that the proposed dynamic NTM (D-NTM) using the GRU. controller outperforms the original NTM with the GRU controller (NTM, CBA only NTM vs. continuous. D-NTM, Discrete D-NTM). As discussed earlier, the learnable addressing scheme of the D-NTM allows. the controller to access the memory slots by location in a potentially nonlinear way. We expect it to help with tasks that have non-trivial access patterns, and as anticipated, we see a large gain with the D-NTM. over the original NTM in the tasks of, for instance, 12 - Conjunction and 17 - Positional Reasoning.\n1 https://research.facebook.com/researchers/1543934539189348\n2Similar experiments were done in the recently published (Graves et al., 2016), but D-NTM results for bAbI tasks were already available in arxiv by that time..\nSecond, we investigate the effect of having to learn how to write. The fact that the NTM needs to learn to write likely has adverse effect on the overall performance, when compared to, for instance end-to-end memory networks (MemN2N, (Sukhbaatar et al., 2015)) and dynamic memory network (DMN+, (Xiong et al., 2016)) both of which simply store the incoming facts as they are. We quantify this effect in this experiment. Lastly, we show the effect of the proposed learnable addressing scheme\nWe further explore the effect of using a feedforward controller instead of the GRU controller. In addition to the explicit memory, the GRU controller can use its own internal hidden state as the memory. On the other hand, the feedforward controller must solely rely on the explicit memory, as it is the only. memory available.\nAmong the recurrent variants of the proposed D-NTM, we notice significant improvements by using. discrete addressing over using continuous addressing. We conjecture that this is due to certain types of tasks that require precise/sharp retrieval of a stored fact, in which case continuous addressing is in disadvantage over discrete addressing. This is evident from the observation that the D-NTM with discrete addressing significantly outperforms that with continuous addressing in the tasks of 8 -\nTable 1: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with. the GRU and feedforward controller. FF stands for the experiments that are conducted with feedforward. controller. 
Let us, note that LBA* refers to NTM that uses both LBA and CBA. In this table, we. compare multi-step vs single-step addressing, original NTM with location based+content based addressing vs only content based addressing, and discrete vs continuous addressing on bAbI..\nLists/Sets and 11 - Basic Coreference. Furthermore, this is in line with an earlier observation in (Xu et al. 2015). where discrete addressing was found to generalize better in the task of image caption generatior\nWe empirically found training of the feedforward controller more difficult than that of the recurren controller. We train our feedforward controller based models four times longer (in terms of the numbe of updates) than the recurrent controller based ones in order to ensure that they are converged for mos of the tasks. On the other hand. the models trained with the GRU controller overfit on bAbI task very quickly. For example, on tasks 3 and 16 the feedforward controller based model underfits (i.e high training loss) at the end of the training, whereas with the same number of units the model witl the GRU controller can overfit on those tasks after 3,000 updates only.\nWhen our results are compared to the variants of the memory network Weston et al. (2015b) (MemN2N and DMN+), we notice a significant performance gap. We attribute this gap to the difficulty in learning to manipulate and store a complex input.\nTable 2: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with feedforward controller.\nWe also provide further experiments investigating different extensions on D-NTM in the appendix\nIn Table 2. we also observe that the D-NTM with the feedforward controller and discrete attentior performs worse than LSTM and D-NTM with continuous-attention. However, when the proposed curriculum strategy from Sec. 4 is used, the average test error drops from 68.30 to 37.79\nFF FF FF Soft Discrete Discrete* Task D-NTM D-NTM D-NTM 1 4.38 81.67 14.79 2 27.5 76.67 76.67 3 71.25 79.38 70.83 4 0.00 78.65 44.06 5 1.67 83.13 17.71 6 1.46 48.76 48.13 7 6.04 54.79 23.54 8 1.70 69.75 35.62 9 0.63 39.17 14.38 10 19.80 56.25 56.25 11 0.00 78.96 39.58 12 6.25 82.5 32.08 13 7.5 75.0 18.54 14 17.5 78.75 24.79 15 0.0 71.42 39.73 16 49.65 71.46 71.15 17 1.25 43.75 43.75 18 0.24 48.13 2.92 19 39.47 71.46 71.56 20 0.0 76.56 9.79 Avg.Err. 12.81 68.30 37.79"}, {"section_index": "13", "section_name": "7.3 NTM TOY TASKS", "section_text": "We explore the possibility of using D-NTM to solve algorithmic tasks such as copy and associative recall tasks. We train our model on the same lengths of sequences that is experimented in (Graves et al., 2014). We report our results in Table 4. We find out that D-NTM using continuous-attention can successfully learn the \"Copy' and 'Associative Recall'' tasks.\nIn Table 4, we train our model on sequences of the same length as the experiments in (Graves et al., 2014. and test the model on the sequences of the maximum length seen during the training. We consider mode to be successful on copy or associative recall if its validation cost (binary cross-entropy) is lower thar. 0.02 over the sequences of maximum length seen during the training. We set the threshold to 0.02 tc. determine whether a model is successful on a task. Because empirically we observe that the models have higher validation costs perform badly in terms of generalization over the longer sequences. \"D-NTM. 
discrete\" model in this table is trained with REINFORCE using moving averages to estimate the baseline\nTest Acc D-NTM discrete MAB 89.6 D-NTM discrete IB 92.3 Soft D-NTM 93.4 NTM 90.9 Sof D-N I-RNN (Le et al., 2015) 82.0 93.1 NT Zoneout (Krueger et al., 2016) LSTM (Krueger et al., 2016) 89.8 Unitary-RNN (Arjovsky et al., 2015) 91.4 Recurrent Dropout (Krueger et al., 2016) 92.5\nTable 3: Sequential pMNIST\nIn this paper we extend neural Turing machines (NTM) by introducing a learnable addressing scheme which allows the NTM to be capable of performing highly nonlinear location-based addressing This extension, to which we refer by dynamic NTM (D-NTM), is extensively tested with various configurations, including different addressing mechanisms (continuous vs. discrete) and differeni number of addressing steps, on the Facebook bAbI tasks. This is the first time an NTM-type model was tested on this task, and we observe that the NTM, especially the proposed D-NTM, performs better than vanilla LSTM-RNN. Furthermore, the experiments revealed that the discrete, discrete addressing works better than the continuous addressing with the GRU controller, and our analysis reveals that this is the case when the task requires precise retrieval of memory content.\n3Let us note that, the current state of art on this task is recurrent batch normalization with LSTM (Cooijmans. et al., 2016) with 95.6% accuracy. It is possible to use recurrent batch normalization in our model and potentially improve our results on this task as well..\nIn sequential MNIST task, the pixels of the MNIST digits are provided to the model in scan line order left to right and top to bottom (Le et al., 2015). At the end of sequence of pixels, the model predicts the label of the digit in the sequence of pixels. We experiment D-NTM on the variation of sequential MNIST where the order of the pixels is randomly shuffled, we call this task as permuted MNIST (pMNIST). An important contribution of this task to our paper, in particular, is to measure the model's ability to perform well when dealing with long-term dependencies. We report our results in Table 33, we observe improvements over other models that we compare against. In Table 3, \"discrete addressing with MAB\" refers to D-NTM model using REINFORCE with baseline computed from moving averages of the reward. Discrete addressing with IB refers to D-NTM using REINFORCE with input-based baseline\nOur experiments show that the NTM-based models can be weaker than other variants of memory networks which do not learn but have an explicit mechanism of storing incoming facts as they are. We conjecture that this is due to the difficulty in learning how to write, manipulate and delete the content of memory. Despite this difficulty, we find the NTM-based approach, such as the proposed D-NTM.\nto be a better, future-proof approach, because it can scale to a much longer horizon (where it becomes impossible to explicitly store all the experiences.).\nOn pMNIST task, we show that our model can outperform other similar type of approaches proposed to deal with the long-term dependencies. On copy and associative recall tasks, we show that our model can solve the algorithmic problems that are proposed to solve with NTM type of models.."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. arXi preprint arXiv:1511.06464, 2015.\nYoshua Bengio, Patrice Simard, and Paolo Frasconi. 
Learning long-term dependencies with gradien descent is difficult. Neural Networks, IEEE Transactions on. 5(2):157-166. 1994\nAntoine Bordes. Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple questior answering with memory networks. arXiy preprint arXiv:1506.02075. 2015\nKyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation arXiv preprint arXiv:1406.1078, 2014.\nTim Cooijmans, Nicolas Ballas, Cesar Laurent, and Aaron Courville. Recurrent batch normalization arXiv preprint arXiv:1603.09025, 2016\nKarl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustaf Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. arXiv preprin arXiv:1506.03340, 2015.\nThe success of both the learnable address and the discrete addressing scheme suggests two future. research directions. First, we should try both of these schemes in a wider array of memory-based models as they are not specific to the neural Turing machines. Second, the proposed D-NTM needs to be. evaluated on a diverse set of applications, such as text summarization (Rush et al., 2015), visual question answering (Antol et al.. 2015) and machine translation, in order to make a more concrete conclusion.\nJan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio Attention-based models for speech recognition. arXiv preprint arXiv:1506.07503. 2015\nIan Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. Book in preparation for MI P IR\nFelix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.\nSepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Diploma, Technische Universita Miinchen, pp. 91, 1991.\nPeter J. Huber. Robust estimation of a location parameter. Ann. Math. Statist., 35(1):73-101, 03 196\nArmand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems. pp. 190-198. 2015.\nQuoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.\nAndriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiy preprint arXiv:1402.0030. 2014\nJack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes In Advances in NIPS. 2016\nAdam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-sho. learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016\nIulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings. of the 30th AAAI Conference on Artificial Intelligence (AAAI-16), 2016.\nOriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015\nJason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards ai-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698. 2015a\nJason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. 
In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015b. In press.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

David Krueger, Tegan Maharaj, Janos Kramar, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron Courville, et al. Zoneout: Regularizing RNNs by randomly preserving hidden activations. arXiv preprint arXiv:1606.01305, 2016.

Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pp. 379-389, 2015.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. CoRR, abs/1505.00521, 2015.

We use the same hyperparameters for all the tasks for a given model.

"}, {"section_index": "15", "section_name": "A.1.2 CONTROLLER", "section_text": "We experiment with both a recurrent and a feedforward neural network as the controller that generates the read and write weights. The controller has 180 units. We train our feedforward controller using the noisy-tanh activation function (Gulcehre et al., 2016), since we were experiencing training difficulties with the sigmoid and tanh activation functions. We use both single-step and three-step addressing with our GRU controller.

"}, {"section_index": "16", "section_name": "A.1.3 MEMORY", "section_text": "The memory contains 120 memory cells. Each memory cell consists of a 16-dimensional address part and a 28-dimensional content part.

"}, {"section_index": "17", "section_name": "A.1.4 TRAINING DETAILS", "section_text": "We set aside a random 10% of the training examples as a validation set for each sub-task and use it for early stopping and hyperparameter search. We train one D-NTM for each sub-task, using Adam (Kingma & Ba, 2014) with its learning rate set to 0.003 and 0.007 respectively for the GRU and feedforward controllers. The size of each minibatch is 160, and each minibatch is constructed uniform-randomly from the training set.

"}, {"section_index": "18", "section_name": "A.2 MODEL AND TRAINING DETAILS FOR SEQUENTIAL MNIST", "section_text": "On the sequential MNIST task we try to keep the capacity of our model close to our baselines. We use 100 GRU units in the controller, content vectors of size 8 and address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models.

"}, {"section_index": "19", "section_name": "B VISUALIZATION OF DISCRETE ATTENTION", "section_text": "We visualize the attention of the D-NTM with GRU controller with discrete attention in Figure 2. From this example, we can see that the D-NTM has learned to find the correct supporting fact even without any supervision for the particular story in the visualization.

In Figure 3, we compare the learning curves of the continuous and discrete attention D-NTM models with a recurrent controller on Task 1. Surprisingly, the discrete attention D-NTM converges faster than the continuous-attention model. The main difficulty with the continuous-attention model appears to be that learning to write with continuous attention is challenging.
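Training the discrete-attention models relies on REINFORCE, using either a moving-average baseline (MAB) or an input-based baseline (IB), as compared in this appendix. To make the difference between the two baselines concrete, here is a minimal Python/numpy sketch — not the authors' implementation; the linear regressor used for the input-based baseline and all shapes are illustrative assumptions:

import numpy as np

def surrogate_loss(logits, actions, rewards, baseline):
    # REINFORCE surrogate: -(R - b) * log pi(a); its gradient w.r.t. the
    # logits is the REINFORCE estimator with baseline b.
    log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    chosen = log_probs[np.arange(len(actions)), actions]
    return -((rewards - baseline) * chosen).mean()

class MovingAverageBaseline:
    # MAB: an exponential moving average of past rewards (reward-only signal).
    def __init__(self, alpha=0.9):
        self.alpha, self.b = alpha, 0.0
    def update(self, rewards):
        self.b = self.alpha * self.b + (1.0 - self.alpha) * rewards.mean()
        return self.b

def input_based_baseline(h, w, b0):
    # IB: a learned linear regressor of the controller state h_t, giving a
    # per-example baseline that can adapt to the current input.
    return h @ w + b0

# toy usage: 4 examples, 6 memory slots, controller state of size 8
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 6))
actions = rng.integers(0, 6, size=4)
rewards = rng.normal(size=4)
mab = MovingAverageBaseline()
loss_mab = surrogate_loss(logits, actions, rewards, mab.update(rewards))
h = rng.normal(size=(4, 8))
loss_ib = surrogate_loss(logits, actions, rewards,
                         input_based_baseline(h, rng.normal(size=8) * 0.01, 0.0))

Because the IB regressor sees the controller state, it can track input-dependent reward variation, which is consistent with the faster optimization (and quicker overfitting) observed for ibb in Figure 4.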
We use a recurrent neural network with GRU units to encode a variable-length fact into a fixed-size vector representation. This allows the D-NTM to exploit the word ordering in each fact, unlike when facts are encoded as bag-of-words vectors.

On both the copy and associative recall tasks, we try to keep the capacity of our model close to our baselines. We use 100 GRU units in the controller, content vectors of size 8 and address vectors of size 8. We use a learning rate of 1e-3 and train the model with the Adam optimizer. We did not use the read and write consistency regularization in any of our models. For the model with discrete attention we use REINFORCE with a baseline computed using moving averages.

[Figure 2 panels are not recoverable here: they show read and write attention maps over the memory slots for an example story (Antoine is bored. Jason is hungry. Jason travelled to the kitchen. Antoine travelled to the garden. Jason got the apple there. Yann is tired. Yann journeyed to the bedroom. Why did Yann go to the bedroom?); see caption below.]

Figure 2: An example view of the discrete attention over the memory slots for both read (left) and write (right) heads. The x-axis denotes the memory locations that are being accessed and the y-axis corresponds to the content in the particular memory location. In this figure, we visualize the discrete-attention model with 3 reading steps and on task 20. It is easy to see that the NTM with discrete attention accesses the relevant part of the memory. We only visualize the last step of the 3-step writing, because with discrete attention the model usually just reads the empty slots of the memory.

[Figure 3 panel is not recoverable here: it plots the training negative log-likelihood of the hard (discrete) and soft (continuous) attention models over 300 epochs; see caption below.]

Figure 3: A visualization of the learning curves of continuous and discrete D-NTM models trained on Task 1 using 3 steps. In most tasks, we observe that the discrete attention model with GRU controller does converge faster than the continuous-attention model.

D A COMPARISON BETWEEN THE LEARNING CURVES OF INPUT-BASED BASELINE AND REGULAR BASELINE ON pMNIST

In Figure 4, we show the learning curves of the input-based baseline (ibb) and the regular REINFORCE with moving-averages baseline (mab) on the pMNIST task. We observe that the input-based baseline is in general much easier to optimize and converges faster as well. But it can quickly overfit to the task as well.

[Figure 4 panel is not recoverable here: it plots training and validation learning curves for ibb and mab over 100 epochs; see caption in the next block.]

TRAINING WITH CONTINUOUS-ATTENTION AND TESTING WITH DISCRETE-ATTENTION

In Table 5, we provide results investigating the effects of using the discrete attention model at test time for a model trained with a feed-forward controller and continuous attention. The Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we have introduced in Section \"Curriculum Learning for the Discrete Attention\". The Discrete' D-NTM model is the continuous-attention model which uses discrete attention at test time. We observe that the Discrete' 
D-NTM model which is trained with continuous-attention outperforms Discrete D-NTM model.\ncontinuous Discrete Discrete Discrete Task D-NTM D-NTM D-NTM D-NTM 1 4.38 81.67 14.79 72.28 2 27.5 76.67 76.67 81.67 3 71.25 79.38 70.83 78.95 4 0.00 78.65 44.06 79.69 5 1.67 83.13 17.71 68.54 6 1.46 48.76 48.13 31.67 7 6.04 54.79 23.54 49.17 8 1.70 69.75 35.62 79.32 9 0.63 39.17 14.38 37.71 10 19.80 56.25 56.25 25.63 11 0.00 78.96 39.58 82.08 12 6.25 82.5 32.08 74.38 13 7.5 75.0 18.54 47.08 14 17.5 78.75 24.79 77.08 15 0.0 71.42 39.73 73.96 16 49.65 71.46 71.15 53.02 17 1.25 43.75 43.75 30.42 18 0.24 48.13 2.92 11.46 19 39.47 71.46 71.56 76.05 20 0.0 76.56 9.79 13.96 Avg 12.81 68.30 37.79 57.21\nFigure 4: We compare the learning curves of our D-NTM model using discrete attention on pMNIST task with input-based baseline and regular REINFORCE baseline. The x-axis is the loss and y-axis is the number of epochs.\nTable 5: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the feedforward controller. Discrete* D-NTM model bootstraps the discrete attention with the continuous attention, using the curriculum method that we have introduced in Section 4. Discrete' D-NTM model is the continuous-attention model which uses discrete-attention at the test time."}, {"section_index": "20", "section_name": "F D-NTM WITH BOW FACT REPRESENTATION", "section_text": "Soft Discrete Soft Discrete Task D-NTM(1-step) D-NTM(1-step) D-NTM(3-steps) D-NTM(3-steps) 1 0.00 0.00 0.00 0.00 2 61.04 59.37 56.87 55.62 3 55.62 57.5 62.5 57.5 4 27.29 24.89 26.45 27.08 5 13.55 12.08 15.83 14.78 6 13.54 14.37 21.87 13.33 7 8.54 6.25 8.75 14.58 8 1.69 1.36 3.01 3.02 9 17.7 16.66 37.70 17.08 10 26.04 27.08 26.87 23.95 11 20.41 3.95 2.5 2.29 12 0.41 0.83 0.20 4.16 13 3.12 1.04 4.79 5.83 14 62.08 58.33 61.25 60.62 15 31.66 26.25 0.62 0.05 16 54.47 48.54 48.95 48.95 17 43.75 31.87 43.75 30.62 18 33.75 39.37 36.66 36.04 19 64.63 69.21 67.23 65.46 20 1.25 0.00 1.45 0.00 Avg 27.02 24.98 26.36 24.05\nTable 6: Test error rates (%) on the 20 bAbI QA tasks for models using 10k training examples with the GRU controller and representations of facts are obtained with BoW using positional encoding\nIn Table 6, we provide results for D-NTM using BoW with positional encoding (PE) Sukhbaatar et al (2015) as the representation of the input facts. The facts representations are provided as an input to the GRU controller. In agreement to our results with the GRU fact representation, with the BoW fact representation we observe improvements with multi-step of addressing over single-step and discrete addressing over continuous addressing."}] |
SJJKxrsgl | [{"section_index": "0", "section_name": "EMERGENCE OF FOVEAL IMAGE SAMPLING FROM LEARNING TO ATTEND IN VISUAL SCENES", "section_text": "Brian Cheung, Eric Weiss, Bruno Olshausen\n{bcheung, eaweiss,baolshausen}@berkeley.edu\nWe describe a neural attention model with a learnable retinal sampling lattice. The model is trained on a visual search task requiring the classification of an object em bedded in a visual scene amidst background distractors using the smallest number of fixations. We explore the tiling properties that emerge in the model's retinal sampling lattice after training. Specifically, we show that this lattice resembles the eccentricity dependent sampling lattice of the primate retina, with a high reso lution region in the fovea surrounded by a low resolution periphery. Furthermore we find conditions where these emergent properties are amplified or eliminated providing clues to their function."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The commonly accepted explanation for this eccentricity dependent sampling is that it provides us with both high resolution and broad coverage of the visual field with a limited amount of neural re. sources. The human retina contains 1.5 million ganglion cells, whose axons form the sole output of the retina. These essentially constitute about 300,000 distinct samples of the image due to the mul- tiplicity of cell types coding different aspects such as on vs. off channels (Van Essen & Anderson 1995). If these were packed uniformly at highest resolution (120 samples/deg, the Nyquist-dictated sampling rate corresponding to the spatial-frequencies admitted by the lens), they would subtend an image area spanning just 5x5 deg?. Thus we would have high-resolution but essentially tunnel vi- sion. Alternatively if they were spread out uniformly over the entire monocular visual field spanning roughly 150 deg? we would have wide field of coverage but with very blurry vision, with each sam- ple subtending 0.25 deg (which would make even the largest letters on a Snellen eye chart illegible) Thus, the primate solution makes intuitive sense as a way to achieve the best of both of these worlds. However we are still lacking a quantitative demonstration that such a sampling strategy emerges as the optimal design for subserving some set of visual tasks.\nHere, we explore what is the optimal retinal sampling lattice for an (overt) attentional system per. forming a simple visual search task requiring the classification of an object. We propose a learnable. retinal sampling lattice to explore what properties are best suited for this task. While evolutionary. pressure has tuned the retinal configurations found in the primate retina, we instead utilize gradi ent descent optimization for our in-silico model by constructing a fully differentiable dynamically. controlled model of attention."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "A striking design feature of the primate retina is the manner in which images are spatially sampled by retinal ganglion cells. Sample spacing and receptive fields are smallest in the fovea and then increase linearly with eccentricity, as shown in Figure[1 Thus, we have highest spatial resolution at the center of fixation and lowest resolution in the periphery, with a gradual fall-off in resolution as one proceeds from the center to periphery. 
The question we attempt to address here is why the retina is designed in this manner, i.e., how it is beneficial to vision.

The commonly accepted explanation for this eccentricity dependent sampling is that it provides us with both high resolution and broad coverage of the visual field with a limited amount of neural resources. The human retina contains 1.5 million ganglion cells, whose axons form the sole output of the retina. These essentially constitute about 300,000 distinct samples of the image due to the multiplicity of cell types coding different aspects such as on vs. off channels (Van Essen & Anderson, 1995). If these were packed uniformly at highest resolution (120 samples/deg, the Nyquist-dictated sampling rate corresponding to the spatial frequencies admitted by the lens), they would subtend an image area spanning just 5x5 deg². Thus we would have high resolution but essentially tunnel vision. Alternatively, if they were spread out uniformly over the entire monocular visual field spanning roughly 150 deg² we would have wide field of coverage but with very blurry vision, with each sample subtending 0.25 deg (which would make even the largest letters on a Snellen eye chart illegible). Thus, the primate solution makes intuitive sense as a way to achieve the best of both of these worlds. However, we are still lacking a quantitative demonstration that such a sampling strategy emerges as the optimal design for subserving some set of visual tasks.

Here, we explore what is the optimal retinal sampling lattice for an (overt) attentional system performing a simple visual search task requiring the classification of an object. We propose a learnable retinal sampling lattice to explore what properties are best suited for this task. While evolutionary pressure has tuned the retinal configurations found in the primate retina, we instead utilize gradient descent optimization for our in-silico model by constructing a fully differentiable, dynamically controlled model of attention.

Our choice of visual search task follows a paradigm widely used in the study of overt attention in humans and other primates (Geisler & Cormack, 2011). In many forms of this task, a single target is randomly located on a display among distractor objects. The goal of the subject is to find the target as rapidly as possible. Itti & Koch (2000) propose a selection mechanism based on manually defined low level features of real images to locate various search targets. Here the neural network must learn what features are most informative for directing attention.

[Figure 1 plot is not recoverable here: it shows receptive field diameter against eccentricity (mm); see caption below.]

Figure 1: Receptive field size (dendritic field diameter) as a function of eccentricity of retinal ganglion cells from a macaque monkey (taken from Perry et al. (1984)).

While neural attention models have been applied successfully to a variety of engineering applications (Bahdanau et al., 2014; Jaderberg et al., 2015; Xu et al., 2015; Graves et al., 2014), there has been little work in relating the properties of these attention mechanisms back to biological vision. An important property which distinguishes neural networks from most other neurobiological models is their ability to learn internal (latent) features directly from data.

But existing neural network models specify the input sampling lattice a priori. Larochelle & Hinton (2010) employ an eccentricity dependent sampling lattice mimicking the primate retina, and Mnih et al. (2014) utilize a multi-scale 'glimpse window' that forms a piece-wise approximation of this scheme. While it seems reasonable to think that these design choices contribute to the good performance of these systems, it remains to be seen if this arrangement emerges as the optimal solution.

We further extend the learning paradigm of neural networks to the structural features of the glimpse mechanism of an attention model. To explore emergent properties of our learned retinal configurations, we train on artificial datasets where the factors of variation are easily controllable. Despite this departure from biology and natural stimuli, we find our model learns to create an eccentricity dependent layout where a distinct central region of high acuity emerges surrounded by a low acuity periphery. We show that the properties of this layout are highly dependent on the variations present in the task constraints. When we depart from physiology by augmenting our attention model with the ability to spatially rescale or zoom on its input, we find our model learns a more uniform layout which has properties more similar to the glimpse window proposed in Jaderberg et al. (2015); Gregor et al. (2015). These findings help us to understand the task conditions and constraints in which an eccentricity dependent sampling lattice emerges.

Attention in neural networks may be formulated in terms of a differentiable feedforward function. This allows the parameters of these models to be trained jointly with backpropagation. Most formulations of visual attention over the input image assume some structure in the kernel filters. For example, the recent attention models proposed by Jaderberg et al. (2015); Mnih et al. (2014); Gregor et al. (2015); Ba et al. (2014) assume each kernel filter lies on a rectangular grid. 
To create a learnable retinal sampling lattice, we relax this assumption by allowing the kernels to tile the image independently.

We interpret a glimpse as a form of routing where a subset of the visual scene U is sampled to form a smaller output glimpse G. The routing is defined by a set of kernels k[·](s), where each kernel i specifies which part of the input U will contribute to a particular output G[i]. A control variable s is used to control the routing by adjusting the position and scale of the entire array of kernels. With this in mind, many attention models can be reformulated into a generic equation written as

G[i] = Σ_n^H Σ_m^W U[n, m] k[m, n, i](s)   (1)

where m and n index input pixels of U and i indexes output glimpse features. The pixels in the input image U are thus mapped to a smaller glimpse G.

Figure 2: Diagram of a single kernel filter parameterized by mean µ and variance σ.

The centers µ[i] of each kernel filter are calculated with respect to the control variables s_c and s_z and a learnable offset µ̂[i]. The control variables specify the position and zoom of the entire glimpse; µ̂[i] and σ̂[i] specify the position and spread respectively of an individual kernel k[·, ·, i]. These parameters are learned during training with backpropagation. We describe how the control variables are computed in the next section. The kernels are thus specified as follows:

µ[i] = (s_c − µ̂[i]) s_z   (2)
σ[i] = σ̂[i] s_z   (3)
k[m, n, i](s) = N(m; µ_x[i], σ[i]) N(n; µ_y[i], σ[i])   (4)

We assume kernel filters factorize between the horizontal m and vertical n dimensions of the input image. This factorization is shown in Equation 4, where the kernel is defined as an isotropic Gaussian N. For each kernel filter, given a center µ[i] and scalar variance σ[i], a two-dimensional Gaussian is defined over the input image as shown in Figure 2. These Gaussian kernel filters can be thought of as a simplified approximation to the receptive fields of retinal ganglion cells in primates (Van Essen & Anderson, 1995).

While this factored formulation reduces the space of possible transformations from input to output, it can still form many different mappings from an input U to an output G. Figure 3B shows the possible windows to which an input image can be mapped to an output G. The yellow circles denote the central location of a particular kernel while the size denotes the standard deviation. Each kernel maps to one of the outputs G[i].

Positional control s_c can be considered analogous to the motor control signals which execute saccades of the eye, whereas s_z would correspond to controlling a zoom lens in the eye (which has no counterpart in biology). In contrast, training defines structural adjustments to individual kernels which include their position in the lattice as well as their variance. These adjustments are only possible during training and are fixed afterwards. Training adjustments can be considered analogous to the incremental adjustments in the layout of the retinal sampling lattice which occur over many generations, directed by evolutionary pressure in biology.
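To make the glimpse operation of Equations 1-4 concrete, the following minimal Python/numpy sketch implements the factorized Gaussian sampling. It is not the authors' code; the function names, shape conventions and the sign convention for the offsets µ̂ are illustrative assumptions:

import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def glimpse(U, mu_hat, sigma_hat, s_c, s_z):
    H, W = U.shape
    mu = (s_c - mu_hat) * s_z              # Eq. 2: kernel centers, shape (I, 2)
    sigma = sigma_hat * s_z                # Eq. 3: kernel spreads, shape (I,)
    rows = gaussian(np.arange(H)[None, :], mu[:, 1:2], sigma[:, None])  # (I, H)
    cols = gaussian(np.arange(W)[None, :], mu[:, 0:1], sigma[:, None])  # (I, W)
    # Eq. 1: G[i] = sum over n, m of U[n, m] * k[m, n, i](s)
    return np.einsum('in,nm,im->i', rows, U, cols)

# toy usage: a 12x12 lattice of kernels over a 100x100 image
offsets = np.linspace(-10.0, 10.0, 12)
mu_hat = np.stack(np.meshgrid(offsets, offsets), -1).reshape(-1, 2)   # (144, 2)
G = glimpse(np.random.rand(100, 100), mu_hat, np.full(144, 2.0),
            s_c=np.array([50.0, 50.0]), s_z=1.0)

Because the whole pipeline is a sum of smooth Gaussians, gradients flow both to the control variables (s_c, s_z) and to the per-kernel lattice parameters (µ̂, σ̂), which is what makes the lattice itself learnable.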
[Figure 3 panels are not recoverable here: (A) shows the retinal lattice being optimized from an initial uniform layout to a final layout during training; (B) shows the recurrent attention loop unrolled in time, with control variables s_{c,t}, s_{z,t}, hidden states h_t and glimpses G_t at successive time steps.]

Figure 3: A: Starting from an initial lattice configuration of a uniform grid of kernels, we learn an optimized configuration from data. B: Attentional fixations generated during inference in the model, shown unrolled in time (after training).

A glimpse at a specific timepoint, G_t, is processed by a fully-connected recurrent network f_rnn():

h_t = f_rnn(G_t, h_{t−1})   (5)
[s_{c,t}; s_{z,t}] = f_control(h_t)   (6)

The global center s_{c,t} and zoom s_{z,t} are predicted by the control network f_control(), which is parameterized by a fully-connected neural network.

In this work, we investigate three variants of the proposed recurrent model:

- Fixed Lattice: The kernel parameters µ̂[i] and σ̂[i] for each retinal cell are not learnable. The model can only translate the kernel filters, s_{c,t} = f_control(h_t), and the global zoom is fixed, s_{z,t} = 1.
- Translation Only: Unlike the fixed lattice model, µ̂[i] and σ̂[i] are learnable (via backpropagation).
- Translation and Zoom: This model follows Equation 6, where it can both zoom and translate the kernels.

Ability | Fixed Lattice | Translation Only | Translation and Zoom
Translate retina via s_{c,t} | yes | yes | yes
Learnable µ̂[i], σ̂[i] | no | yes | yes
Zoom retina via s_{z,t} | no | no | yes

Table 1: Variants of the neural attention model.

Prior to training, the kernel filters are initialized as a 12x12 grid (144 kernel filters), tiling uniformly over the central region of the input image and creating a retinal sampling lattice as shown in Figure 5 before training. Our recurrent network, f_rnn, is a two-layer traditional recurrent network with 512-512 units in each layer. Our control network, f_control, is a fully-connected network with 512-3 units (x, y, zoom). Similarly, our prediction networks are fully-connected networks with 512-10 units for predicting the class. We use ReLU non-linearities for all hidden unit layers.

Our models, as shown in Figure 3, are differentiable and trained end-to-end via backpropagation through time. Note that this allows us to train the control network indirectly from signals backpropagated from the task cost. For stochastic gradient descent optimization we use Adam (Kingma & Ba, 2014) and construct our models in Theano (Bastien et al., 2012).

[Figure 4 panels are not recoverable here; see caption below.]

Figure 4: Top Row: Examples from our variant of the cluttered MNIST dataset (a.k.a. Dataset 1). Bottom Row: Examples from our dataset with variable sized MNIST digits (a.k.a. Dataset 2).

"}, {"section_index": "3", "section_name": "4.1 MODIFIED CLUTTERED MNIST DATASET", "section_text": "Example images from our dataset are shown in Figure 4. Handwritten digits from the original MNIST dataset (LeCun & Cortes, 1998) are randomly placed over a 100x100 image with varying amounts of distractors (clutter). Distractors are generated by extracting random segments of non-target MNIST digits, which are placed randomly with uniform probability over the image. In contrast to the cluttered MNIST dataset proposed in Mnih et al. (2014), the number of distractors for each image varies randomly from 0 to 20 pieces. This prevents the attention model from learning a solution which depends on the number of 'on' pixels in a given region. In addition, we create another dataset (Dataset 2) with an additional factor of variation: the original MNIST digit is randomly resized by a factor of 0.33x to 3.0x. Examples of this dataset are shown in the second row of Figure 4. A generation sketch is given below.
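The following minimal Python/numpy sketch illustrates the dataset construction just described — one target digit placed uniformly at random on a 100x100 canvas plus a random number (0-20) of distractor crops. It is not the authors' generator; the `digits` array of 28x28 MNIST images and the 8-pixel crop size are illustrative assumptions:

import numpy as np

def make_cluttered_example(digits, rng, size=100, max_clutter=20, crop=8):
    canvas = np.zeros((size, size), dtype=np.float32)
    # place the target digit at a uniformly random location
    d = digits[rng.integers(len(digits))]
    y, x = rng.integers(0, size - 28, size=2)
    canvas[y:y + 28, x:x + 28] = np.maximum(canvas[y:y + 28, x:x + 28], d)
    # a *random number* of distractors, so pixel count alone is uninformative
    for _ in range(rng.integers(0, max_clutter + 1)):
        src = digits[rng.integers(len(digits))]
        sy, sx = rng.integers(0, 28 - crop, size=2)
        piece = src[sy:sy + crop, sx:sx + crop]
        py, px = rng.integers(0, size - crop, size=2)
        canvas[py:py + crop, px:px + crop] = np.maximum(
            canvas[py:py + crop, px:px + crop], piece)
    # Dataset 2 would additionally rescale the target digit by a random
    # factor in [0.33, 3.0] before placement (not shown here).
    return canvas

# toy usage with random stand-in "digits"
canvas = make_cluttered_example(np.random.rand(100, 28, 28),
                                np.random.default_rng(0))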
"}, {"section_index": "4", "section_name": "4.2 VISUAL SEARCH TASK", "section_text": "We define our visual search task as a recognition task in a cluttered scene. The recurrent attention model we propose must output the class c of the single MNIST digit appearing in the image via the prediction network f_predict(). The task loss, L, is specified in Equation 8. To minimize the classification error, we use a cross-entropy cost:

ĉ_t = f_predict(h_t)   (7)
L = − Σ_n^N Σ_t^T c_n log(ĉ_{t,n})   (8)

Analogous to the visual search experiments performed in physiological studies, we pressure our attention model to accomplish the visual search as quickly as possible. By applying the task loss to every timepoint, the model is forced to accurately recognize and localize the target MNIST digit in as few iterations as possible. In our classification experiments, the model is given T = 4 glimpses.

Figure 5: The sampling lattice shown at four different stages during training for a Translation Only model, from the initial condition (left) to the final solution (right). The radius of each dot corresponds to the standard deviation σ of the kernel.

[Figure 6 panels are not recoverable here: columns show Translation Only (Dataset 1), Translation Only (Dataset 2), Translation and Zoom (Dataset 1) and Translation and Zoom (Dataset 2); the lower panels plot sampling interval and kernel standard deviation against distance from center (eccentricity).]

Figure 6: Top: Learned sampling lattices for four different model configurations. Middle: Resolution (sampling interval) and Bottom: kernel standard deviation as a function of eccentricity for each model configuration.

"}, {"section_index": "5", "section_name": "5 RESULTS", "section_text": "Figure 5 shows the layouts of the learned kernels for a Translation Only model at different stages during training. The filters smoothly transform from a uniform grid of kernels to an eccentricity dependent lattice. Furthermore, the kernel filters spread their individual centers to create a sampling lattice which covers the full image. This is sensible as the target MNIST digit can appear anywhere in the image with uniform probability.

When we include variable sized digits as an additional factor in the dataset, the translation only model shows an even greater diversity of variances for the kernel filters. This is shown visually in the first row of Figure 6. Furthermore, the second row shows a highly dependent relationship between the sampling interval and standard deviation of the retinal sampling lattice and eccentricity from the center. This dependency increases when training on variable sized MNIST digits (Dataset 2). This relationship has also been observed in the primate visual system (Perry et al., 1984; Van Essen & Anderson, 1995).

When the proposed attention model is able to zoom its retinal sampling lattice, a very different layout emerges. There is much less diversity in the distribution of kernel filter variances, as evidenced in Figure 6. Both the sampling interval and standard deviation of the retinal sampling lattice have far less of a dependence on eccentricity. As shown in the last column of Figure 6, we also trained this model on variable sized digits and noticed no significant differences in sampling lattice configuration.

[Figure 7 panels are not recoverable here: they show glimpse rollouts at t=1 through t=4 for each model variant; see caption below.]

Figure 7: Temporal rollouts of the retinal sampling lattice attending over a test image from Cluttered MNIST (Dataset 2) after training.

Figure 7 shows how each model variant makes use of its retinal sampling lattice after training. 
The strategy each variant adopts to solve the visual search task helps explain the drastic difference in lattice configuration. The translation only variant simply translates its high acuity region to recognize and localize the target digit. The translation and zoom model both rescales and translates its sampling lattice to fit the target digit. Remarkably, Figure 7 shows that both models detect the digit early on and make minor corrective adjustments in the following iterations.

Table 2 compares the classification performance of each model variant on the cluttered MNIST dataset with fixed sized digits (Dataset 1). There is a significant drop in performance when the retinal sampling lattice is fixed and not learnable, confirming that the model is benefitting from learning the high-acuity region. The classification performance between the Translation Only and Translation and Zoom models is competitive. This supports the hypothesis that the functionality of a high acuity region with a low resolution periphery is similar to that of zoom.

Table 2: Classification Error on Cluttered MNIST

Sampling Lattice Model | Dataset 1 (%) | Dataset 2 (%)
Fixed Lattice 11.8 31.9
Translation Only 5.1 24.4
Translation and Zoom 4.0 24.1

When constrained to a glimpse window that can translate only, similar to the eye, the kernels converge to a sampling lattice similar to that found in the primate retina (Curcio & Allen, 1990; Van Essen & Anderson, 1995). This layout is composed of a high acuity region at the center surrounded by a wider region of low acuity. Van Essen & Anderson (1995) postulate that the linear relationship between eccentricity and sampling interval leads to a form of scale invariance in the primate retina. Our results from the Translation Only model with variable sized digits support this conclusion. Additionally, we observe that zoom appears to supplant the need to learn a high acuity region for the visual search task. This implies that the high acuity region serves a purpose resembling that of a zoomable sampling lattice. The low acuity periphery is used to detect the search target and the high acuity 'fovea' more finely recognizes and localizes the target. These results, while obtained on an admittedly simplified domain of visual scenes, point to the possibility of using deep learning as a tool to explore the optimal sample tiling for a retina in a data-driven and task-dependent manner. Exploring how or if these results change for more challenging tasks in naturalistic visual scenes is a future goal of our research.

We would like to acknowledge everyone at the Redwood Center for their helpful discussion and comments. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPUs used for this research.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Laurent Itti and Christof Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10):1489-1506, 2000.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2008-2016, 2015.

Yann LeCun and Corinna Cortes. 
The MNIST database of handwritten digits, 1998.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.

Hugo Larochelle and Geoffrey E Hinton. Learning to combine foveal glimpses with a third-order boltzmann machine. In Advances in Neural Information Processing Systems, pp. 1243-1251, 2010.

Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.

VH Perry, R Oehler, and A Cowey. Retinal ganglion cells that project to the dorsal lateral geniculate nucleus in the macaque monkey. Neuroscience, 12(4):1101-1123, 1984.

David C Van Essen and Charles H Anderson. Information processing strategies and pathways in the primate visual system. An Introduction to Neural and Electronic Networks, 2:45-76, 1995."}]
ByIAPUcee | [{"section_index": "0", "section_name": "FRUSTRATINGLY SHORT ATTENTION SPANS IN NEURAL LANGUAGE MODELING", "section_text": "Michal Daniluk, Tim Rocktaschel, Johannes Welbl & Sebastian Riedel\nUniversity College London\nNeural language models predict the next token using a latent representation of. the immediate token history. Recently, various methods for augmenting neural language models with an attention mechanism over a differentiable memory have been proposed. For predicting the next token, these models query information from. a memory of the recent history which can facilitate learning mid- and long-range. dependencies. However, conventional attention mechanisms used in memory. augmented neural language models produce a single output vector per time step. This vector is used both for predicting the next token as well as for the key and. value of a differentiable memory of a token history. In this paper, we propose a. neural language model with a key-value attention mechanism that outputs separate. representations for the key and value of a differentiable memory, as well as for. encoding the next-word distribution. This model outperforms existing memory. augmented neural language models on two corpora. Yet, we found that our method mainly utilizes a memory of the five most recent output representations. This led to the unexpected main finding that a much simpler model based only on the concatenation of recent output representations from previous time steps is on par. with more sophisticated memory-augmented neural language models.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "At the core of language models (LMs) is their ability to infer the next word given a context. This requires representing context-specific dependencies in a sequence across different time scales. On the one hand, classical N-gram language models capture relevant dependencies between words in short time distances explicitly, but suffer from data sparsity. Neural language models, on the other hand, maintain and update a dense vector representation over a sequence where time dependencies are captured implicitly (Mikolov et al., 2010). A recent extension of neural sequence models are attention mechanisms (Bahdanau et al., 2015), which can capture long-range connections more directly. However, we argue that applying such an attention mechanism directly to neural language models requires output vectors to fulfill several purposes at the same time: they need to (i) encode a distribution for predicting the next token, (ii) serve as a key to compute the attention vector, as well as (iii) encode relevant content to inform future predictions.\nWe hypothesize that such overloaded use of output representations makes training the model difficult. and propose a modification to the attention mechanism which separates these functions explicitly. inspired by Miller et al. (2016); Ba et al. (2016); Reed & de Freitas (2015); Gulcehre et al. (2016) Specifically, at every time step our neural language model outputs three vectors. The first is used to. encode the next-word distribution, the second serves as key, and the third as value for an attention mechanism. We term the model key-value-predict attention and show that it outperforms existing. memory-augmented neural language models on the Children's Book Test (CBT, Hill et al., 2016) and. a new corpus of 7500 Wikipedia articles. However, we observed that this model pays attention mainly. to the previous five memories. 
We thus also experimented with a much simpler model that only uses a concatenation of output vectors from the previous time steps for predicting the next token. This simple model is on par with more sophisticated memory-augmented neural language models. Thus, our main finding is that modeling short attention spans properly works well and provides notable improvements over a neural language model with attention. Conversely, it seems to be notoriously hard to train neural language models to leverage long-range dependencies.

[Figure 1 panels (a)-(d) are architecture schematics whose node labels are not recoverable here; the panel titles are: (a) Neural language model with attention. (b) Key-value separation. (c) Key-value-predict separation. (d) Concatenation of previous output representations.]

Figure 1: Memory-augmented neural language modelling architectures.

In this paper, we investigate various memory-augmented neural language models and compare them against previous architectures. Our contributions are threefold: (i) we propose a key-value attention mechanism that uses specific output representations for querying a sliding-window memory of previous token representations, (ii) we demonstrate that while this new architecture outperforms previous memory-augmented neural language models, it mainly utilizes a memory of the previous five representations, and finally (iii) based on this observation we experiment with a much simpler but effective model that uses the concatenation of three previous output representations to predict the next word.

"}, {"section_index": "3", "section_name": "2 METHODS", "section_text": "In the following, we discuss methods for extending neural language models with differentiable memory. We first present a standard attention mechanism for language modeling (§2.1). Subsequently, we introduce two methods for separating the usage of output vectors in the attention mechanism: (i) using a dedicated key and value (§2.2), and (ii) further separating the value into a memory value and a representation that encodes the next-word distribution (§2.3). Finally, we describe a very simple method that concatenates previous output representations for predicting the next token (§2.4).

Augmenting a neural language model with attention (Bahdanau et al., 2015) is straight-forward. We simply take the previous L output vectors as memory Y = [h_{t−L} · · · h_{t−1}] ∈ R^{k×L}, where k is the output dimension of a Long Short-Term Memory (LSTM) unit (Hochreiter & Schmidhuber, 1997). This memory could in principle contain all previous output representations, but for practical reasons we only keep a sliding window of the previous L outputs. Let h_t ∈ R^k be the output representation at time step t and 1 ∈ R^L be a vector of ones.

The attention weights α_t ∈ R^L are computed from a comparison of the current and previous LSTM outputs. Subsequently, the context vector r_t ∈ R^k is calculated from a sum over previous output vectors weighted by their respective attention value. 
This can be formulated as

M_t = tanh(W^Y Y + (W^h h_t) 1^T) ∈ R^{k×L}   (1)
α_t = softmax(w^T M_t) ∈ R^{1×L}   (2)
r_t = Y α_t^T ∈ R^k   (3)

where W^Y, W^h ∈ R^{k×k} are trainable projection matrices and w ∈ R^k is a trainable vector. The final representation that encodes the next-word distribution is computed from a non-linear combination of the attention-weighted representation r_t of previous outputs and the final output vector h_t via

h*_t = tanh(W^r r_t + W^x h_t)   (4)

"}, {"section_index": "4", "section_name": "2.2 KEY-VALUE ATTENTION", "section_text": "Inspired by Miller et al. (2016); Ba et al. (2016); Reed & de Freitas (2015); Gulcehre et al. (2016), we introduce a key-value attention model that separates output vectors into keys used for calculating the attention distribution α_t, and a value part used for encoding the next-word distribution and context representation. This model is depicted in Figure 1b. Formally, we rewrite Equations 1-4 as follows:

[k_t; v_t] = h_t   (6)
M_t = tanh(W^Y [k_{t−L} · · · k_{t−1}] + (W^h k_t) 1^T)   (7)
α_t = softmax(w^T M_t)   (8)
r_t = [v_{t−L} · · · v_{t−1}] α_t^T ∈ R^k   (9)
h*_t = tanh(W^r r_t + W^x v_t) ∈ R^k   (10)

In essence, Equation 7 compares the key at time step t with the previous L keys to calculate the attention distribution α_t, which is then used in Equation 9 to obtain a weighted context representation from the values associated with these keys.

Even with a key-value separation, a potential problem is that the same representation v_t is still used both for encoding the probability distribution of the next word and for retrieval from the memory via the attention later. Thus, we experimented with another extension of this model where we further separate h_t into a key, a value and a predict representation, where the latter is only used for encoding the next-word distribution (see Figure 1c). To this end, equations 6 and 10 are replaced by

[k_t; v_t; p_t] = h_t   (11)
h*_t = tanh(W^r r_t + W^x p_t)   (12)

More precisely, the output vector h_t is divided into three equal parts: key, value and predict. In our implementation we simply split the output vector h_t into k_t, v_t and p_t. To this end, the hidden dimension of the key-value-predict attention model needs to be a multiple of three. Consequently, the dimensions of k_t, v_t and p_t are 100 for a hidden dimension of 300.

"}, {"section_index": "5", "section_name": "2.4 N-GRAM RECURRENT NEURAL NETWORK", "section_text": "Neural language models often work best in combination with traditional N-gram models (Mikolov et al., 2011; Chelba et al., 2013; Williams et al., 2015; Ji et al., 2016; Shazeer et al., 2015), since the former excel at generalization while the latter ensure memorization. In addition, from initial experiments with memory-augmented neural language models, we found that usually only the previous five output representations are utilized. This is in line with observations by Tran et al. (2016). Hence, we experiment with a much simpler architecture, depicted in Figure 1d. Instead of an attention mechanism, the output representations from the previous N − 1 time steps are directly used to calculate next-word probabilities. Specifically, at every time step we split the LSTM output into N − 1 vectors [h_t^1, . . . , h_t^{N−1}] and replace Equation 4 with

h*_t = tanh(W^N [h_t^1; h_{t−1}^2; . . . ; h_{t−N+2}^{N−1}])   (13)

where W^N ∈ R^{k×(N−1)k} is a trainable projection matrix. This model is related to higher-order RNNs (Soltani & Jiang, 2016), with the difference that we do not incorporate output vectors from the previous steps into the hidden state but only use them for predicting the next word. Furthermore, note that at time step t the first part of the output vector h_t^1 will contribute to predicting the next word, the second part h_t^2 will contribute to predicting the second word thereafter, and so on. As the output vectors from the N − 1 previous time-steps are used to score the next word, we call the resulting model an N-gram RNN.
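To make the key-value-predict variant concrete, here is a minimal Python/numpy sketch of a single attention step (Equations 6-12). It is not the official implementation; all weights, shapes and the softmax helper are illustrative assumptions:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kvp_attention(h_t, outputs, W_Y, W_h, w, W_r, W_x):
    k3 = h_t.shape[0] // 3
    k_t, v_t, p_t = h_t[:k3], h_t[k3:2*k3], h_t[2*k3:]   # Eq. 11: 3-way split
    K = outputs[:, :k3].T                                 # keys,   shape (k3, L)
    V = outputs[:, k3:2*k3].T                             # values, shape (k3, L)
    M = np.tanh(W_Y @ K + (W_h @ k_t)[:, None])           # Eq. 7: compare keys
    alpha = softmax(w @ M)                                # Eq. 8, shape (L,)
    r = V @ alpha                                         # Eq. 9: weighted values
    return np.tanh(W_r @ r + W_x @ p_t)                   # Eq. 12: combine with p_t

# toy usage with hidden size 300, so k_t, v_t, p_t each have size 100
rng = np.random.default_rng(0)
L, k = 5, 100
h_t = rng.normal(size=3 * k)
outputs = rng.normal(size=(L, 3 * k))     # the previous L LSTM outputs
W_Y, W_h, W_r, W_x = (rng.normal(size=(k, k)) * 0.01 for _ in range(4))
w = rng.normal(size=k) * 0.01
h_star = kvp_attention(h_t, outputs, W_Y, W_h, w, W_r, W_x)

Setting p_t = v_t recovers the key-value model of Equations 6-10, and setting k_t = v_t = p_t = h_t recovers the plain attention model of Equations 1-4, which makes the three variants easy to compare in one implementation.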
Early attempts at using memory in neural networks were undertaken by Taylor (1959) and Steinbuch & Piske (1963) by performing nearest-neighbor operations on input vectors and fitting parametric models to the retrieved sets. The dedicated use of external memory in neural architectures has more recently witnessed increased interest. Weston et al. (2015) introduced Memory Networks to explicitly segregate memory storage from the computation of the neural network, and Sukhbaatar et al. (2015) trained this model end-to-end with an attention-based memory addressing mechanism. The Neural Turing Machines by Graves et al. (2014) add an external differentiable memory with read-write functions to a controller recurrent neural network, and have shown promising results in simple sequence tasks such as copying and sorting. These models make use of external memory, whereas our model directly uses a short sequence from the history of tokens to dynamically populate an addressable memory.

In sequence modeling, RNNs such as LSTMs (Hochreiter & Schmidhuber, 1997) maintain an internal memory state as they process an input sequence. Attending over previous state outputs on top of an RNN encoder has improved performance in a wide range of tasks, including machine translation (Bahdanau et al., 2015), recognizing textual entailment (Rocktaschel et al., 2016), sentence summarization (Rush et al., 2015), image captioning (Xu et al., 2015) and speech recognition (Chorowski et al., 2015).

Recently, Cheng et al. (2016) proposed an architecture that modifies the standard LSTM by replacing the memory cell with a memory network (Weston et al., 2015). Another proposal for conditioning on previous output representations are Higher-order Recurrent Neural Networks (HORNNs, Soltani & Jiang, 2016). Soltani & Jiang found it useful to include information from multiple preceding RNN states when computing the next state. This previous work centers around preceding state vectors, whereas we investigate attention mechanisms on top of RNN outputs, i.e. the vectors used for predicting the next word. Furthermore, instead of pooling we use attention vectors to calculate a context representation of previous memories.

Yang et al. (2016) introduced a reference-aware neural language model where at every position a latent variable determines from which source a target token is generated, e.g., by copying entries from a table or referencing entities that were mentioned earlier.

Another class of models that include memory in sequence modeling are Recurrent Memory Networks (RMNs) (Tran et al., 2016). Here, a memory block accesses the most recent input words to selectively attend over relevant word representations from a global vocabulary. RMNs use a global memory with two input word vector look-up tables for the attention mechanism, and consequently have a large number of trainable parameters. 
Instead, we propose models that need far fewer parameters by producing the vectors that will be attended over in the future, which can be seen as a memory that is dynamically populated by the language model.

Finally, the functional separation of look-up keys and memory content has been found useful for Memory Networks (Miller et al., 2016), Neural Programmer-Interpreters (Reed & de Freitas, 2015), Dynamic Neural Turing Machines (Gulcehre et al., 2016), and Fast Associative Memory (Ba et al., 2016). We apply and extend this principle to neural language models.

"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate models on two different corpora for language modeling. The first is a subset of the Wikipedia corpus.¹ It consists of 7500 English Wikipedia articles (dump from 6 Feb 2015) belonging to one of the following categories: People, Cities, Countries, Universities, and Novels. We chose these categories as we expect articles in these categories to often contain references to previously mentioned entities. Subsequently, we split this corpus into a train, development, and test part, resulting in corpora of 22.5M words, 1.2M and 1.2M words, respectively. We map all numbers to a dedicated numerical symbol N and restrict the vocabulary to the 77K most frequent words, encompassing 97% of the training vocabulary. All other words are replaced by the UNK symbol. The average length of sentences is 25 tokens. In addition to this Wikipedia corpus, we also run experiments on the Children's Book Test (CBT, Hill et al., 2016). While this corpus is designed for cloze-style question-answering, in this paper we use it to test how well language models can exploit wider linguistic context.

"}, {"section_index": "7", "section_name": "4.1 TRAINING PROCEDURE", "section_text": "We use ADAM (Kingma & Ba, 2015) with an initial learning rate of 0.001 and a mini-batch size of 64 for optimization. Furthermore, we apply gradient clipping at a gradient norm of 5 (Pascanu et al., 2013). The bias of the LSTM's forget gate is initialized to 1 (Jozefowicz et al., 2016), while other parameters are initialized uniformly from the range (−0.1, 0.1). Backpropagation Through Time (Rumelhart et al., 1985; Werbos, 1990) was used to train the network with 20 steps of unrolling. We reset the hidden states between articles for the Wikipedia corpus and between stories for CBT, respectively. We take the best configuration based on performance on the validation set and evaluate it on the test set.

"}, {"section_index": "8", "section_name": "5 RESULTS", "section_text": "In the first set of experiments we explore how well the proposed models and Tran et al.'s Recurrent-Memory Model can make use of histories of varying lengths. Perplexity results for different attention window sizes on the Wikipedia corpus are summarized in Figure 2a. The average attention these models pay to specific positions in the history is illustrated in Figure 3. We observed that although our models attend over tokens further in the past more often than the Recurrent-Memory Model, attending over a longer history does not significantly improve the perplexity of any attentive model.

The much simpler N-gram RNN model achieves comparable results (Figure 2b) and seems to work best with a history of the previous three output vectors (4-gram RNN). As a result, we choose the 4-gram model for the following N-gram RNN experiments.

¹The Wikipedia corpus is available at https://goo.gl/s8cyYa.
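The training procedure in Section 4.1 can be summarized in a short PyTorch sketch. This is not the authors' code (they do not state a framework); the model sizes, the linear output head and the embedding inputs are illustrative assumptions, while the optimizer, clipping, initialization and 20-step truncated BPTT follow the settings above:

import torch
import torch.nn as nn

VOCAB = 77_000                                 # 77K-word vocabulary (Section 4)
lstm = nn.LSTM(input_size=300, hidden_size=300, batch_first=True)
head = nn.Linear(300, VOCAB)
params = list(lstm.parameters()) + list(head.parameters())
for name, p in lstm.named_parameters():
    nn.init.uniform_(p, -0.1, 0.1)             # uniform (-0.1, 0.1) init
    if name.startswith("bias_ih"):             # PyTorch bias layout: [i, f, g, o]
        p.data[lstm.hidden_size:2 * lstm.hidden_size] = 1.0   # forget gate = 1
optimizer = torch.optim.Adam(params, lr=0.001)

def tbptt_step(inputs, targets, state):
    # one update over a 20-step window with a mini-batch of 64
    optimizer.zero_grad()
    out, state = lstm(inputs, state)           # inputs: (64, 20, 300) embeddings
    loss = nn.functional.cross_entropy(head(out).flatten(0, 1), targets.flatten())
    loss.backward()
    torch.nn.utils.clip_grad_norm_(params, max_norm=5.0)   # clip at norm 5
    optimizer.step()
    return loss.item(), tuple(s.detach() for s in state)   # truncate the graph

Detaching the hidden state between windows is what makes the 20-step unrolling truncated, and the state is simply reset to zeros at article (or story) boundaries.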
Figure 2: Perplexities of memory-augmented neural language models on the Wikipedia corpus (a-c) and accuracies on the CBT test set (d).

(a) Test perplexity of different attention architectures with varying attention window sizes. Best perplexity per model in italics.

Model | Attention Window Size 1 | 5 | 10 | 15
RM(+tM-g) (Tran et al., 2016) 83.5 80.5 80.3 80.1
Attention 82.2 82.2 82.0 82.8
Key-Value 78.7 79.0 78.2 78.9
Key-Value-Predict 76.1 75.8 76.0 75.8

(b) Comparison of N-gram neural language models. w denotes the input size, k the hidden size and θM the total number of model parameters.

Model | w | k | θM | Dev | Test
2-gram RNN 300 564 23.9M 76.0 77.1
3-gram RNN 300 786 23.9M 74.9 75.9
4-gram RNN 300 968 23.9M 74.8 75.9
5-gram RNN 300 1120 23.9M 76.0 77.3

(c) Summary of models with best attention window size a. The total number of model parameters, including word representations, is denoted by θW+M (without word representations, θM).

Model | w | k | a | θW+M | θM | Dev | Test
RNN 300 307 - 47.0M 23.9M 121.7 125.7
LSTM 300 300 - 47.0M 23.9M 83.2 85.2
FOFE HORNN (3rd order) (Soltani & Jiang, 2016) 300 303 - 47.0M 23.9M 116.7 120.5
Gated HORNN (3rd order) (Soltani & Jiang, 2016) 300 297 - 47.0M 23.9M 93.9 97.1
RM(+tM-g) (Tran et al., 2016) 300 300 15 93.7M 70.6M 78.2 80.1
Attention 300 296 10 47.0M 23.9M 80.6 82.0
Key-Value 300 560 10 47.0M 23.9M 77.1 78.2
Key-Value-Predict 300 834 5 47.0M 23.9M 74.2 75.8
4-gram RNN 300 968 - 47.0M 23.9M 74.8 75.9

(d) Results on CBT; those marked with * are taken from Hill et al. (2016).

Model | Named Entities | Common Nouns | Verbs | Prepositions
Humans (context+query)* 0.816 0.816 0.828 0.708
Kneser-Ney LM* 0.390 0.544 0.778 0.768
Kneser-Ney LM + cache* 0.439 0.577 0.772 0.679
LSTM (context+query)* 0.418 0.560 0.818 0.791
Memory Network* 0.666 0.630 0.690 0.703
AS Reader, avg ensemble (Kadlec et al., 2016) 0.706 0.689 - -
AS Reader, greedy ensemble (Kadlec et al., 2016) 0.710 0.675 - -
QANN, 4 hops, GloVe (Weissenborn, 2016) 0.729 - - -
AoA Reader, single model (Cui et al., 2016a) 0.720 0.694 - -
CAS Reader, mode avg (Cui et al., 2016b) 0.692 0.657 - -
GA Reader, ensemble (Dhingra et al., 2016) 0.719 0.694 - -
EpiReader, ensemble (Trischler et al., 2016) 0.718 0.706 - -
FOFE HORNN (3rd order) (Soltani & Jiang, 2016) 0.465 0.497 0.774 0.741
Gated HORNN (3rd order) (Soltani & Jiang, 2016) 0.508 0.547 0.790 0.774
RM(+tM-g) (Tran et al., 2016) 0.525 0.597 0.817 0.797
LSTM 0.523 0.604 0.819 0.786
Attention 0.538 0.595 0.826 0.803
Key-Value 0.528 0.601 0.822 0.813
Key-Value-Predict 0.528 0.599 0.829 0.803
4-gram RNN 0.532 0.598 0.815 0.800

"}, {"section_index": "9", "section_name": "5.1 COMPARISON WITH STATE-OF-THE-ART MODELS", "section_text": "In the next set of experiments, we compared our proposed models against a variety of state-of-the-art models on the Wikipedia and CBT corpora. Results are shown in Figure 2c and 2d, respectively. Note that the models presented here do not achieve state-of-the-art on CBT, as they are language 
models and not tailored towards cloze-style question answering. Thus, we merely use this corpus for comparing different neural language model architectures. We reimplemented the Recurrent-Memory model by Tran et al. (2016) with the temporal matrix and gating composition function (RM+tM-g).

To ensure a comparable number of parameters to a vanilla LSTM model, we adjusted the hidden size of all models to have roughly the same total number of model parameters. The attention window size N for the N-gram RNN model was set to 4 according to the best validation set perplexity on the Wikipedia corpus. Below we discuss the results in detail.

Attention By using a neural language model with an attention mechanism over a dynamically populated memory, we observed a 3.2 points lower perplexity over a vanilla LSTM on Wikipedia, but only notable differences for predicting verbs and prepositions in CBT. This indicates that incorporating mechanisms for querying previous output vectors is useful for neural language modeling.

[Figure 3 panel (a), an attention heatmap over a sampled article, is not recoverable here. Panel (b) shows the average attention weight per history position (from −15, furthest, to −1, most recent):
RM(+tM-g): 1.6 2.1 2.0 2.1 2.3 2.3 2.6 2.9 3.5 4.1 4.9 5.9 8.6 15.2 39.9
Key-Value-Predict: 4.0 4.0 4.0 4.1 4.2 4.4 4.6 4.9 5.4 5.9 6.7 7.8 9.5 12.5 17.8
Key-Value: 4.2 4.2 4.3 4.4 4.5 4.7 5.0 5.3 5.8 6.3 7.1 8.0 9.5 11.8 14.9
Attention: 4.6 4.6 4.7 4.7 4.9 5.0 5.2 5.5 5.8 6.2 6.8 7.5 8.8 11.1 14.7]

Figure 3: Attention weights of the Key-Value-Predict model on a randomly sampled Wikipedia article (a) and average attention weight distribution on the whole Wikipedia test set for RM(+tM-g), Attention, Key-Value and Key-Value-Predict models (b). The rightmost positions represent the most recent history.

Key-Value Decomposing the output vector into a key-value paired memory improves the perplexity by 7.0 points compared to a baseline LSTM, and by 1.9 points compared to the RM(+tM-g) model. Again, for CBT we see only small improvements.

Key-Value-Predict By further separating the output vector into a key, value and next-word prediction part, we get the lowest perplexity and gain 9.4 points over a baseline LSTM, 4.3 points compared to RM(+tM-g), and 2.4 points compared to only splitting the output into a key and value. For CBT, we see an accuracy increase of 1.0 percentage points for verbs, and 1.7 for prepositions. As stated earlier, the performance of the Key-Value-Predict model does not improve significantly when increasing the attention window size. This leads to the conclusion that none of the attentive models investigated in this paper can utilize a large memory of previous token representations. Moreover, none of the presented methods differ significantly for predicting common nouns and named entities in CBT.

In this paper, we observed that using an attention mechanism for neural language modeling where we separate output vectors into a key, value and predict part outperforms simpler attention mechanisms on a Wikipedia corpus and the Children's Book Test (CBT, Hill et al., 2016). However, we found that 
all attentive neural language models mainly utilize a memory of only the most recent history and fail to exploit long-range dependencies. In fact, a much simpler N-gram RNN model, which only uses a concatenation of output representations from the previous three time steps, is on par with more sophisticated memory-augmented neural language models. Training neural language models that take long-range dependencies into account seems notoriously hard and needs further investigation. Thus, for future work we want to investigate ways to encourage attending over a longer history, for instance by forcing the model to ignore the local context and only allow attention over output representations further behind the local history.

"}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was supported by Microsoft Research and the Engineering and Physical Sciences Research Council through PhD Scholarship Programmes, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Jimmy Ba, Geoffrey E Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. Using fast weights to attend to the recent past. In NIPS, pp. 4331-4339, 2016.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. In EMNLP, pp. 551-561, 2016.

Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In NIPS, pp. 577-585, 2015.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016a.

Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016.

Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic neural turing machine with soft and hard addressing schemes. arXiv preprint arXiv:1607.00036, 2016.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. In ICLR, 2016.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In ICML, pp. 1310-1318, 2013.

Tim Rocktaschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about entailment with neural attention. In ICLR, 2016.

Alexander M. Rush, Sumit Chopra, and Jason Weston. 
A neural attention model for abstractive sentence summarization. In EMNLP, pp. 379-389, 2015.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, pp. 1045-1048, 2010.

Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Lukas Burget, and Jan Cernocky. Empirical evaluation and combination of advanced language modeling techniques. In Interspeech, 2011.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.

Noam Shazeer, Joris Pelemans, and Ciprian Chelba. Sparse non-negative matrix language modeling for skip-grams. In Interspeech, pp. 1428-1432, 2015.

Rohollah Soltani and Hui Jiang. Higher order recurrent neural networks. arXiv preprint arXiv:1605.00064, 2016.

Sainbayar Sukhbaatar, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, pp. 2440-2448, 2015.

WK Taylor. Pattern recognition by means of automatic analogue apparatus. Proceedings of the IEE-Part B: Radio and Electronic Engineering, 106(26):198-209, 1959.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. arXiv preprint arXiv:1606.02270, 2016.

Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550-1560, 1990.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR, 2015.

Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. Reference-aware language models. arXiv preprint arXiv:1611.01628, 2016.
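To make the N-gram RNN baseline discussed above concrete, the following is a minimal PyTorch-style sketch: the next-word distribution is computed from a concatenation of the LSTM output vectors of the previous N time steps rather than from the last output alone. The class name, layer sizes, and padding scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NGramRNN(nn.Module):
    """Sketch of the N-gram RNN baseline: predict the next word from a
    concatenation of the LSTM outputs at the previous n positions."""
    def __init__(self, vocab_size, hidden_size, n=4):
        super().__init__()
        self.n = n
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        # Projection from the concatenated window of outputs to the vocabulary.
        self.out = nn.Linear(n * hidden_size, vocab_size)

    def forward(self, tokens):                       # tokens: [batch, seq]
        h, _ = self.lstm(self.embed(tokens))         # h: [batch, seq, hidden]
        # Left-pad with zeros so every position sees exactly n outputs.
        pad = h.new_zeros(h.size(0), self.n - 1, h.size(2))
        h = torch.cat([pad, h], dim=1)
        windows = [h[:, i : i + tokens.size(1)] for i in range(self.n)]
        return self.out(torch.cat(windows, dim=-1))  # logits per position
```

With n = 4, each prediction uses the current output plus the three preceding ones, matching the 4-gram setting selected on the Wikipedia validation set.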
r1kGbydxg | [{"section_index": "0", "section_name": "LEARNING LOCOMOTION SKILLS USING DEEPRL DOES THE CHOICE OF ACTION SPACE MATTER?", "section_text": "Xue Bin Peng & Michiel van de Panne\nDepartment of Computer Science University of the British Columbia Vancouver, Canada."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The introduction of deep learning models to reinforcement learning (RL) has enabled policies to op erate directly on high-dimensional, low-level state features. As a result, deep reinforcement learning (DeepRL) has demonstrated impressive capabilities, such as developing control policies that cai map from input image pixels to output joint torques (Lillicrap et al.]2015). However, the qualit and robustness often falls short of what has been achieved with hand-crafted action abstractions e.g.,Coros et al.(2011);Geijtenbeek et al.(2013). While much is known about the learning of state representations, the choice of action parameterization is a design decision whose impact is not ye well understood.\nJoint torques can be thought of as the most basic and generic representation for driving the move. ment of articulated figures, given that muscles and other actuation models eventually result in joint. torques. However this ignores the intrinsic embodied nature of biological systems, particularly the. synergy between control and biomechanics. Passive-dynamics, such as elasticity and damping from. muscles and tendons, play an integral role in shaping motions: they provide mechanisms for energy. storage, and mechanical impedance which generates instantaneous feedback without requiring any. explicit computation. Loeb coins the term preflexes (Loeb||1995) to describe these effects, and their impact on motion control has been described as providing intelligence by mechanics (Blickhan et al. 2007). This can also be thought of as a kind of partitioning of the computations between the control.\n2007). This can also be thought of as a kind of partitioning of the computations between the contro and physical system"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The use of deep reinforcement learning allows for high-dimensional state descrip- tors, but little is known about how the choice of action representation impacts the learning difficulty and the resulting performance. We compare the impact of four different action parameterizations (torques, muscle-activations, target joint angles, and target joint-angle velocities) in terms of learning time, policy robust- ness, motion quality, and policy query rates. Our results are evaluated on a gait- cycle imitation task for multiple planar articulated figures and multiple gaits. We demonstrate that the local feedback provided by higher-level action parameteriza- tions can significantly impact the learning, robustness, and quality of the resulting policies.\nIn this paper we explore the impact of four different actuation models on learning to control dynamic. articulated fgure locomotion: (1) torques (Tor); (2) activations for musculotendon units (MTU); (3) target joint angles for proportional-derivative controllers (PD); and (4) target joint velocities (Vel).. Because Deep RL methods are capable of learning control policies for all these models, it now becomes possible to directly assess how the choice of actuation model affects the learning difficulty.. We also assess the learned policies with respect to robustness, motion quality, and policy query rates.. 
We show that action spaces which incorporate local feedback can significantly improve learning speed and performance, while still preserving the generality afforded by torque-level control. Such parameterizations also allow for more complex body structures and subjective improvements in motion quality.

2 BACKGROUND

Our task will be structured as a standard reinforcement problem where an agent interacts with its environment according to a policy in order to maximize a reward signal. The policy π(s, a) = p(a|s) represents the conditional probability density function of selecting action a ∈ A in state s ∈ S. At each control step t, the agent observes a state s_t and samples an action a_t from π. The environment in turn responds with a scalar reward r_t, and a new state s′_t = s_{t+1} sampled from its dynamics p(s′|s, a). For a parameterized policy π_θ(s, a), the goal of the agent is to learn the parameters θ which maximize the expected cumulative reward

J(π_θ) = E[ Σ_{t=0}^{T} γ^t r_t | π_θ ]

with γ ∈ [0, 1] as the discount factor, and T as the horizon. The gradient of the expected reward ∇_θ J(π_θ) can be determined according to the policy gradient theorem (Sutton et al., 2001), which provides a direction of improvement to adjust the policy parameters θ:

∇_θ J(π_θ) = ∫_S d_θ(s) ∫_A ∇_θ log π_θ(s, a) A(s, a) da ds

where d_θ(s) = ∫_S Σ_{t=0}^{T} γ^t p_0(s_0) p(s_0 → s | t, π_θ) ds_0 is the discounted state distribution, p_0(s) represents the initial state distribution, and p(s_0 → s | t, π_θ) models the likelihood of reaching state s by starting at s_0 and following the policy π_θ(s, a) for t steps (Silver et al., 2014). A(s, a) represents a generalized advantage function. The choice of advantage function gives rise to a family of policy gradient algorithms, but in this work, we will focus on the one-step temporal difference advantage function (Schulman et al., 2015):

A(s_t, a_t) = r_t + γ V(s′_t) − V(s_t),  where  V(s_t) = E[ r_t + γ V(s′_t) | s_t, π_θ ]

A parameterized value function V_φ(s), with parameters φ, can be learned iteratively in a manner similar to Q-learning by minimizing the Bellman loss

L(φ) = E[ ½ ( y_t − V_φ(s_t) )² ],  y_t = r_t + γ V_φ(s′_t)

π_θ and V_φ can be trained in tandem using an actor-critic framework (Konda & Tsitsiklis, 2000).

In this work, each policy will be represented as a Gaussian distribution with a parameterized mean μ_θ(s) and fixed covariance matrix Σ = diag{σ_i²}, where σ_i is manually specified for each action parameter. Actions can be sampled from the distribution by applying Gaussian noise to the mean action:

a_t = μ_θ(s_t) + N(0, Σ)

The policy gradient then takes the form

∇_θ J(π_θ) = ∫_S d_θ(s) ∫_A ∇_θ μ_θ(s) Σ⁻¹ (a − μ_θ(s)) A(s, a) da ds

which can be interpreted as shifting the mean of the action distribution towards actions that lead to higher than expected rewards, while moving away from actions that lead to lower than expected rewards.

3.2 STATES

To define the state of the agent, a feature transformation Φ(q, q̇) is used to extract a set of features from the reduced-coordinate pose q and velocity q̇. The features consist of the height of the root (pelvis) from the ground, the position of each link with respect to the root, and the center of mass velocity of each link.
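To ground the equations above, here is a minimal PyTorch-style sketch of one actor-critic loss computation with a fixed-variance Gaussian policy. The function and variable names are illustrative and not taken from the paper's implementation.

```python
import torch

def actor_critic_loss(mu_net, value_net, sigma, s, a, r, s_next, gamma=0.9):
    """One-step TD actor-critic sketch. mu_net maps states to mean actions,
    value_net maps states to scalar values, sigma is the fixed std-dev."""
    # TD target and advantage: A(s,a) = r + gamma*V(s') - V(s).
    with torch.no_grad():
        td_target = r + gamma * value_net(s_next).squeeze(-1)
    v = value_net(s).squeeze(-1)
    advantage = (td_target - v).detach()
    # Bellman loss for the critic: 0.5 * (y - V(s))^2.
    critic_loss = 0.5 * (td_target - v).pow(2).mean()
    # Gaussian policy: differentiating the log-density w.r.t. mu recovers
    # the Sigma^{-1} (a - mu(s)) term of the policy gradient.
    mu = mu_net(s)
    log_prob = (-0.5 * ((a - mu) / sigma).pow(2)).sum(dim=-1)
    actor_loss = -(log_prob * advantage).mean()
    return actor_loss + critic_loss
```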
When training a policy to imitate a cyclic reference motion {q*_t}, knowledge of the motion phase can help simplify learning. Therefore, we augment the state features with a set of target features Φ(q*_t, q̇*_t), resulting in a combined state represented by s_t = (Φ(q_t, q̇_t), Φ(q*_t, q̇*_t)). Similar results can also be achieved by providing a single motion phase variable as a state feature, as we show in Figure 15 (supplemental material).

3.3 ACTIONS

We train separate policies for each of the four actuation models, as described below. Each actuation model also has related actuation parameters, such as feedback gains for PD-controllers and musculotendon properties for MTUs. These parameters can be manually specified, as we do for the PD and Vel models, or they can be optimized for the task at hand, as for the MTU models. Table 1 provides a list of actuator parameters for each actuation model.

Torques (Tor): Each action directly specifies torques for every joint, and constant torques are applied for the duration of a control step. Due to torque limits, actions are bounded by manually-specified limits for each joint. Unlike the other actuation models, the torque model does not require additional actuator parameters, and can thus be regarded as requiring the least amount of domain knowledge. Torque limits are excluded from the actuator parameter set as they are common for all parameterizations.

Muscle Activations (MTU): Each action specifies activations for a set of musculotendon units (MTUs). Detailed modeling and implementation information are available in Wang et al. (2012). Each MTU is modeled as a contractile element (CE) attached to a serial elastic element (SE) and parallel elastic element (PE). The force exerted by the MTU can be calculated according to F_MTU = F_SE = F_CE + F_PE. Both F_SE and F_PE are modeled as passive springs, while F_CE is actively controlled according to F_CE = a_MTU F0 f_l(l_CE) f_v(v_CE), with a_MTU being the muscle activation, F0 the maximum isometric force, and l_CE and v_CE the length and velocity of the contractile element. The functions f_l(l_CE) and f_v(v_CE) represent the force-length and force-velocity relationships, modeling the variations in the maximum force that can be exerted by a muscle as a function of its length and contraction velocity. Analytic forms are available in Geyer et al. (2003). Activations are bounded between [0, 1]. The length of each contractile element l_CE is included among the state features. To simplify control and reduce the number of internal state parameters per MTU, the policies directly control muscle activations instead of indirectly through excitations (Wang et al., 2012).

In our task, the goal of a policy is to imitate a given reference motion {q*_t}, which consists of a sequence of kinematic poses q*_t in reduced coordinates. The reference velocity q̇*_t at a given time t is approximated by finite differences of the reference poses. Reference motions are generated either using a recorded simulation result from a preexisting controller ("Sim"), or via manually-authored keyframes. Since hand-crafted reference motions may not be physically realizable, the goal is to closely reproduce a motion while satisfying physical constraints.

Target Joint Angles (PD): Each action represents a set of target angles q̄, where q̄^i specifies the target angle for joint i. q̄ is applied to PD-controllers which compute torques according to τ^i = k_p^i (q̄^i − q^i) + k_d^i (q̄̇^i − q̇^i), where q̄̇^i = 0, and k_p^i and k_d^i are manually-specified gains.

Target Joint Velocities (Vel): Each action specifies a set of target velocities q̄̇, which are used to compute torques according to τ^i = k_d^i (q̄̇^i − q̇^i), where the gains k_d^i are specified to be the same as those used for target angles.
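The PD and Vel control laws above reduce to a few lines of code; a minimal NumPy sketch, with illustrative names, follows.

```python
import numpy as np

def pd_torques(q, dq, q_target, kp, kd):
    """Target joint angles (PD): tau = kp*(q_bar - q) + kd*(dq_bar - dq),
    with the target velocity dq_bar fixed at zero."""
    return kp * (q_target - q) + kd * (0.0 - dq)

def vel_torques(dq, dq_target, kd):
    """Target joint velocities (Vel): tau = kd*(dq_bar - dq), using the
    same derivative gains as the PD controllers."""
    return kd * (dq_target - dq)
```

Because these laws are evaluated at the simulation rate between policy queries, they supply the instantaneous low-level feedback that the torque parameterization lacks.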
Actuation Model                Actuator Parameters
Target Joint Angles (PD)       proportional gains k_p, derivative gains k_d
Target Joint Velocities (Vel)  derivative gains k_d
Torques (Tor)                  none
Muscle Activations (MTU)       optimal contractile element length, serial elastic element rest length, maximum isometric force, pennation, moment arm, maximum moment arm joint orientation, rest joint orientation

Table 1: Actuation models and their respective actuator parameters

3.4 REWARD

The reward function consists of a weighted sum of terms that encourage the policy to track a reference motion:

r = w_pose r_pose + w_vel r_vel + w_end r_end + w_root r_root + w_com r_com
w_pose = 0.5, w_vel = 0.05, w_end = 0.15, w_root = 0.1, w_com = 0.2

Details of each term are available in the supplemental material. r_pose penalizes deviation of the character pose from the reference pose, and r_vel penalizes deviation of the joint velocities. r_end and r_root account for the position error of the end-effectors and root. r_com penalizes deviations in the center of mass velocity from that of the reference motion.

3.5 INITIAL STATE DISTRIBUTION

We design the initial state distribution, p_0(s), to sample states uniformly along the reference trajectory. At the start of each episode, q* and q̇* are sampled from the reference trajectory and used to initialize the pose and velocity of the agent. This helps guide the agent to explore states near the target trajectory.

4 ACTOR-CRITIC LEARNING ALGORITHM

Instead of directly using the temporal difference advantage function, we adapt a positive temporal difference (PTD) update as proposed by Van Hasselt (2012):

A(s, a) = I[δ > 0] = { 1 if δ > 0; 0 otherwise },   δ = r + γ V(s′) − V(s)

Unlike more conventional policy gradient methods, PTD is less sensitive to the scale of the advantage function and avoids instabilities that can result from negative TD updates. For a Gaussian policy, a negative TD update moves the mean of the distribution away from an observed action, effectively shifting the mean towards an unknown action that may be no better than the current mean action (Van Hasselt, 2012). In expectation, these updates converge to the true policy gradient, but for stochastic estimates of the policy gradient, these updates can cause the agent to adopt undesirable behaviours which affect subsequent experiences collected by the agent. Furthermore, we incorporate experience replay, which has been demonstrated to improve stability when training neural network policies with Q-learning in discrete action spaces. Experience replay often requires off-policy methods, such as importance weighting, to account for differences between the policy being trained and the behavior policy used to generate experiences (Wawrzynski & Tanwani, 2013). However, we have not found importance weighting to be beneficial for PTD. (A short code sketch of the PTD update appears below.)
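The following is a minimal sketch of the PTD actor update implied by the equation above; names are illustrative, and the minibatch handling of Algorithm 1 (supplemental material) is omitted.

```python
def ptd_action_gradient(s, a, r, s_next, mu, sigma, value_fn, gamma=0.9):
    """Positive temporal difference (PTD): the advantage is the indicator
    I[delta > 0] with delta = r + gamma*V(s') - V(s), so the actor mean is
    pulled toward the observed action only when the TD error is positive."""
    delta = r + gamma * value_fn(s_next) - value_fn(s)
    if delta <= 0.0:
        return None        # no actor update; the critic still uses the tuple
    # Direction in action space that shifts mu(s) toward the observed action;
    # backpropagating it through mu yields the parameter update.
    return (a - mu(s)) / sigma**2
```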
Stochastic policies are used during training for exploration, while deterministic policies are deployed for evaluation at runtime. The choice between a stochastic and deterministic policy can be specified by the addition of a binary indicator variable λ ∈ {0, 1}:

a_t = μ_θ(s_t) + λ N(0, Σ)

where λ = 1 corresponds to a stochastic policy with exploration noise, and λ = 0 corresponds to a deterministic policy that always selects the mean of the distribution. Noise from a stochastic policy will result in a state distribution that differs from that of the deterministic policy at runtime. To imitate this discrepancy, we incorporate ε-greedy exploration in addition to the original Gaussian exploration. During training, λ is determined by a Bernoulli random variable λ ~ Ber(ε), where λ = 1 with probability ε ∈ [0, 1]. The exploration rate ε is annealed linearly from 1 to 0.2 over 500k iterations, which slowly adjusts the state distribution encountered during training to better resemble the distribution at runtime. Since the policy gradient is defined for stochastic policies, only tuples recorded with exploration noise (i.e. λ = 1) can be used to update the actor, while the critic can be updated using all tuples.

Training proceeds episodically, where the initial state of each episode is sampled from p_0(s), and the episode duration is drawn from an exponential distribution with a mean of 2 s. To discourage falling, an episode will also terminate if any part of the character's trunk makes contact with the ground for an extended period of time, leaving the agent with zero reward for all subsequent steps. Algorithm 1 in the supplemental material summarizes the complete learning process.
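A small sketch of this exploration scheme (annealing schedule and constants from the text; names are illustrative):

```python
import numpy as np

def anneal_epsilon(iteration, start=1.0, end=0.2, horizon=500_000):
    """Exploration rate annealed linearly from 1 to 0.2 over 500k iterations."""
    frac = min(iteration / horizon, 1.0)
    return start + frac * (end - start)

def sample_action(mu, sigma, epsilon):
    """a_t = mu(s_t) + lambda * N(0, Sigma) with lambda ~ Ber(epsilon).
    lambda is returned as well, since only tuples recorded with exploration
    noise (lambda = 1) may be used to update the actor."""
    lam = np.random.rand() < epsilon
    noise = np.random.normal(0.0, sigma, size=np.shape(mu)) if lam else 0.0
    return mu + noise, lam
```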
MTU Actuator Optimization: Actuation models such as MTUs are defined by further parameters whose values impact performance (Geijtenbeek et al., 2013). Geyer et al. (2003) use existing anatomical estimates for humans to determine MTU parameters, but such data may not be available for more arbitrary creatures. Alternatively, Geijtenbeek et al. (2013) use covariance matrix adaptation (CMA), a derivative-free evolutionary search strategy, to simultaneously optimize MTU and policy parameters. This approach is limited to policies with reasonably low-dimensional parameter spaces, and is thus ill-suited for neural network models with hundreds of thousands of parameters. To avoid manual tuning of actuator parameters, we propose a heuristic approach that alternates between policy learning and actuator optimization, as detailed in the supplemental material.

5 RESULTS

The motions are best seen in the supplemental video https://youtu.be/L3vDo3nLI98. We evaluate the action parameterizations by training policies for a simulated 2D biped, dog, and raptor as shown in Figure 1. Depending on the agent and the actuation model, our systems have 58-214 state dimensions, 6-44 action dimensions, and 0-282 actuator parameters, as summarized in Table 3 (supplemental material). The MTU models have at least double the number of action parameters because they come in antagonistic pairs. As well, additional MTUs are used for the legs to more accurately reflect bipedal biomechanics. This includes MTUs that span multiple joints.

Figure 1: Simulated articulated figures and their state representation. Revolute joints connect all links. From left to right: 7-link biped; 19-link raptor; 21-link dog. State features: root height, relative position (red) of each link with respect to the root, and their respective linear velocities (green).

Each policy is represented by a three-layer neural network, as illustrated in Figure 8 (supplemental material), with 512 and 256 fully-connected units, followed by a linear output layer where the number of output units varies according to the number of action parameters for each character and actuation model. ReLU activation functions are used for both hidden layers. Each network has approximately 200k parameters. The value function is represented by a similar network, except having a single linear output unit. The policies are queried at 60 Hz for a control step of about 0.0167 s. Each network is randomly initialized and trained for about 1 million iterations, requiring 32 million tuples, the equivalent of approximately 6 days of simulated time. Each policy requires about 10 hours for the biped, and 20 hours for the raptor and dog, on an 8-core Intel Xeon E5-2687W.

[Figure 2 plot data omitted: learning curves (NCR vs. iterations x10^5) for Tor, Vel, PD, and MTU on Biped: Walk, Biped: March, Biped: Run, Raptor: Run (Sim), Raptor: Run, Dog: Bound (Sim), and Dog: Rear-Up.]

Figure 2: Learning curves for each policy during 1 million iterations.

Only the actuator parameters for MTUs are optimized with Algorithm 2, since the parameters for the other actuation models are few and reasonably intuitive to determine. The initial actuator parameters ψ0 are manually specified, while the initial policy parameters θ0 are randomly initialized. Each pass optimizes ψ using CMA for 250 generations with 16 samples per generation, and θ is trained for 250k iterations. Parameters are initialized with values from the previous pass. The expected value of each CMA sample of ψ is estimated using the average cumulative reward over 16 rollouts with a duration of 10 s each. Separate MTU parameters are optimized for each character and motion. Each set of parameters is optimized for 6 passes following Algorithm 2, requiring approximately 50 hours. Figure 5 illustrates the performance improvement per pass. Figure 6 compares the performance of MTUs before and after optimization. For most examples, the optimized actuator parameters significantly improve learning speed and final performance. For the sake of comparison, after a set of actuator parameters has been optimized, a new policy is retrained with the new actuator parameters and its performance compared to the other actuation models.
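A sketch of this alternating scheme, with the pass structure and sample counts from the text; train_policy, cma_optimize, and estimate_return are assumed hooks for illustration, not part of the paper's code.

```python
def alternate_policy_and_actuators(theta0, psi0, train_policy, cma_optimize,
                                   estimate_return, passes=6):
    """Alternating actuator optimization (cf. Algorithm 2): each pass trains
    the policy theta for a fixed actuator setting psi, then tunes psi with
    CMA while theta is frozen."""
    theta, psi = theta0, psi0
    for _ in range(passes):
        theta = train_policy(theta, psi, iterations=250_000)
        # Fitness of a CMA sample: average cumulative reward over rollouts.
        fitness = lambda p: estimate_return(theta, p, rollouts=16, horizon=10.0)
        psi = cma_optimize(fitness, init=psi, generations=250, popsize=16)
    return theta, psi
```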
Policy Performance and Learning Speed: Figure 2 shows learning curves for the policies, and the performance of the final policies is summarized in Table 4. Performance is evaluated using the normalized cumulative reward (NCR), calculated from the average cumulative reward over 32 episodes with lengths of 10 s, and normalized by the maximum and minimum cumulative reward possible for each episode. No discounting is applied when calculating the NCR. The initial state of each episode is sampled from the reference motion according to p(s0). To compare learning speeds, we use the normalized area under each learning curve (AUC) as a proxy for the learning speed of a particular actuation model, where 0 represents the worst possible performance and no progress during training, and 1 represents the best possible performance without requiring training.

PD performs well across all examples, achieving comparable-to-the-best performance for all motions. PD also learns faster than the other parameterizations for 5 of the 7 motions. The final performance of Tor is among the poorest for all the motions. Differences in performance appear more pronounced as characters become more complex. For the simple 7-link biped, most parameterizations achieve similar performance. However, for the more complex dog and raptor, the performance of Tor policies deteriorates with respect to other policies such as PD and Vel. MTU policies often exhibited the slowest learning speed, which may be a consequence of the higher-dimensional action spaces, i.e., requiring antagonistic muscle pairs, and complex muscle dynamics. Nonetheless, once optimized, the MTU policies produce more natural motions and responsive behaviors as compared to other parameterizations. We note that the naturalness of motions is not well captured by the reward, since it primarily gauges similarity to the reference motion, which may not be representative of natural responses when perturbed from the nominal trajectory. A sensitivity analysis of the policies' performance to variations in network architecture and hyperparameters is available in the supplemental material.

Policy Robustness: To evaluate robustness, we recorded the NCR achieved by each policy when subjected to external perturbations. The perturbations assume the form of random forces applied to the trunk of the characters. Figure 3 illustrates the performance of the policies when subjected to perturbations of different magnitudes. The magnitude of the forces is constant, but direction varies randomly. Each force is applied for 0.1 to 0.4 s, with 1 to 4 s between each perturbation. Performance is estimated using the average over 128 episodes of length 20 s each. For the biped walk, the Tor policy is significantly less robust than those for the other types of actions, while the MTU policy is the least robust for the raptor run. Overall, the PD policies are among the most robust for all the motions. In addition to external forces, we also evaluate robustness over randomly generated terrain consisting of bumps with varying heights and slopes with varying steepness. We evaluate the performance on irregular terrain (Figure 12, supplemental material). There are few discernible patterns for this test. The Vel and MTU policies are significantly worse than the Tor and PD policies for the dog bound on the bumpy terrain. The unnatural jittery behavior of the dog Tor policy proves to be surprisingly robust for this scenario. We suspect that the behavior prevents the trunk from contacting the ground for extended periods of time, thereby escaping our system's fall detection.

[Figure 3 plot data omitted: NCR vs. perturbation force magnitude (N) for Tor, Vel, PD, and MTU on Biped: Walk, Raptor: Run (Sim), and Dog: Bound (Sim).]

Figure 3: Performance when subjected to random perturbation forces of different magnitudes.
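A sketch of this robustness protocol, using the constants stated above; the environment methods are assumed hooks for illustration, not part of the paper's code.

```python
import numpy as np

def perturbed_ncr(policy, env, force_magnitude, episodes=128, horizon=20.0):
    """Estimates the NCR under random pushes: forces of fixed magnitude but
    random direction, applied to the trunk for 0.1-0.4 s, with 1-4 s between
    perturbations, averaged over 128 episodes of 20 s each."""
    scores = []
    for _ in range(episodes):
        state = env.reset()
        t, next_push = 0.0, np.random.uniform(1.0, 4.0)
        while t < horizon and not env.fallen():
            if t >= next_push:
                angle = np.random.uniform(0.0, 2.0 * np.pi)
                force = force_magnitude * np.array([np.cos(angle), np.sin(angle)])
                env.apply_trunk_force(force, duration=np.random.uniform(0.1, 0.4))
                next_push = t + np.random.uniform(1.0, 4.0)
            state, t = env.step(policy(state))
        scores.append(env.normalized_cumulative_reward())
    return float(np.mean(scores))
```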
Query Rate: Figure 4 compares the performance of different parameterizations for different policy query rates. Separate policies are trained with queries of 15 Hz, 30 Hz, 60 Hz, and 120 Hz. Actuation models that incorporate low-level feedback, such as PD and Vel, appear to cope more effectively with lower query rates, while Tor degrades more rapidly at lower query rates. It is not yet obvious to us why MTU policies appear to perform better at lower query rates and worse at higher rates. Lastly, Figure 14 shows the policy outputs as a function of time for the four actuation models, for a particular joint, as well as showing the resulting joint torque. Interestingly, the MTU action is visibly smoother than the other actions and results in joint torque profiles that are smoother than those seen for PD and Vel.

[Figure 4 plot data omitted: NCR vs. query rate (Hz) for Tor, Vel, PD, and MTU on Biped: Walk and Dog: Bound (Sim).]

Figure 4: Performance of policies with different query rates for the biped (left) and dog (right). Separate policies are trained for each query rate.

6 RELATED WORK

DeepRL has driven impressive recent advances in learning motion control, i.e., solving continuous-action control problems using reinforcement learning. All four of the action types that we explore have seen previous use in the machine learning literature. Wawrzynski & Tanwani (2013) use an actor-critic approach with experience replay to learn skills for an octopus arm (actuated by a simple muscle model) and a planar half-cheetah (actuated by joint-based PD-controllers).

Recent work on deterministic policy gradients (Lillicrap et al., 2015) and on RL benchmarks, e.g., OpenAI Gym, generally uses joint torques as the action space, as do the test suites in recent work (Schulman et al., 2015) on using generalized advantage estimation. Other recent work uses: the PR2 effort control interface as a proxy for torque control (Levine et al., 2015); joint velocities (Gu et al., 2016); velocities under an implicit control policy (Mordatch et al., 2015); or abstract actions (Hausknecht & Stone, 2015). Our learning procedures are based on prior work using actor-critic approaches with positive temporal difference updates (Van Hasselt, 2012).

Work in biomechanics has long recognized the embodied nature of the control problem and the view that musculotendon systems provide "preflexes" (Loeb, 1995) that effectively provide a form of intelligence by mechanics (Blickhan et al., 2007), as well as allowing for energy storage. The control strategies for physics-based character simulations in computer animation also use all the forms of actuation that we evaluate in this paper. Representative examples include quadratic programs that solve for joint torques (de Lasa et al., 2010), joint velocities for skilled bicycle stunts (Tan et al.
2014), muscle models for locomotion 7Wang et al.2012 Geijtenbeek et al.2013, mixed use of feed-forward torques and joint target angles (Coros et al.2011), and joint target angles computed. by learned linear (time-indexed) feedback strategies (Liu et al.f2016). Lastly, control methods in. robotics use a mix of actuation types, including direct-drive torques (or their virtualized equivalents). series elastic actuators, PD control, and velocity control. These methods often rely heavily on model based solutions and thus we do not describe these in further detail here."}, {"section_index": "10", "section_name": "7 CONCLUSIONS", "section_text": "Our experiments suggest that action parameterizations that include basic local feedback, such as PD. target angles, MTU activations, or target velocities, can improve policy performance and learning. speed across different motions and character morphologies. Such models more accurately reflec the embodied nature of control in biomechanical systems, and the role of mechanical components. in shaping the overall dynamics of motions and their control. The difference between low-level. and high-level action parameterizations grow with the complexity of the characters, with high-level. parameterizations scaling more gracefully to complex characters. As a caveat, there may well be. tasks, such as impedance control, where lower-level action parameterizations such as Tor may prove. advantageous. We believe that no single action parameterization will be the best for all problems. However, since objectives for motion control problems are often naturally expressed in terms of. kinematic properties, higher-level actions such as target joint angles and velocities may be effective. for a wide variety of motion control problems. We hope that our work will help open discussions. around the choice of action parameterizations.\nOur results have only been demonstrated on planar articulated figure simulations; the extension to 3D currently remains as future work. Furthermore, our current torque limits are still large as compared to what might be physically realizable. Tuning actuator parameters for complex actuation models such as MTUs remains challenging. Though our actuator optimization technique is able to improve performance as compared to manual tuning, the resulting parameters may still not be optimal for the desired task. Therefore, our comparisons of MTUs to other action parameterizations may not be reflective of the full potential of MTUs with more optimal actuator parameters. Furthermore, our actuator optimization currently tunes parameters for a specific motion, rather than a larger suite of motions, as might be expected in nature.\nSince the reward terms are mainly calculated according to joint positions and velocities, it may seen. that it is inherently biased in favour of PD and Vel. However, the real challenges for the contro policies lie elsewhere, such as learning to compensate for gravity and ground-reaction forces, an learning foot-placement strategies that are needed to maintain balance for the locomotion gaits. Th. reference pose terms provide little information on how to achieve these hidden aspects of motior. control that will ultimately determine the success of the locomotion policy. While we have yet t provide a concrete answer for the generalization of our results to different reward functions, w believe that the choice of action parameterization is a design decision that deserves greater attentior. 
regardless of the choice of reward function..\nFinally, it is reasonable to expect that evolutionary processes would result in the effective co-desigr of actuation mechanics and control capabilities. Developing optimization and learning algorithms to allow for this kind of co-design is a fascinating possibility for future work."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Matthew J. Hausknecht and Peter Stone. Deep reinforcement learning in parameterized action space CoRR, abs/1511.04143, 2015.\nVijay Konda and John Tsitsiklis. Actor-critic algorithms. In SIAM Journal on Control and Opt mization, pp. 1008-1014. MIT Press, 2000.\nSergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuo motor policies. CoRR, abs/1504.00702, 2015.\nTimothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR abs/1509.02971. 2015.\nHado Van Hasselt. Reinforcement learning in continuous state and action spaces. In Reinforcemen Learning. pp. 207-251. Springer. 2012\nStelian Coros, Andrej Karpathy, Ben Jones, Lionel Reveret, and Michiel van de Panne. Locomotion. skills for simulated quadrupeds. ACM Transactions on Graphics, 30(4):Article TBD, 2011.\nThomas Geijtenbeek, Michiel van de Panne, and A. Frank van der Stappen. Flexible muscle-based locomotion for bipedal creatures. ACM Transactions on Graphics. 32(6). 2013..\nHartmut Geyer, Andre Seyfarth, and Reinhard Blickhan. Positive force feedback in bouncing gaits? DrocRovalS 0):2173_2183.2003\nShixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation. arXiv preprint arXiv:1610.00633, 2016.\nohn Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, and Pieter Abbeel. High dimensional continuous control using generalized advantage estimation. CoRR, abs/1506.02438 2015.\nAlgorithm 1 Actor-critic Learning Using Positive Temporal Differences\n1: 0 random weights 2: random weights 3: while not done do 4: for step = 1, ..., m do 5: s start state 6: \\ Ber(et) 7: a e(s) + XW(0, ) 8: Apply a and simulate forward 1 step 9: s' end state 10: r reward 11: T (s,a,r, s', \\) 12: store t in replay memory 13: if episode terminated then 14: Sample so from po(s) 15: Reinitialize state s to So 16: end if 17: end for 18: Update critic: 19: Sample minibatch of n tuples {Ti = (si, ai, ri, Ai, s')} from replay memory 20: for each T; do 21: oiri +yVg(si)- Vg(si) 22: $$+QV8iVgVg(si) 23: end for 24: Update actor: 25: Sample minibatch of n tuples {Tj = (sj, aj, rj, Aj, s})} from replay memory where A = 26: for each T; do 27: oj+ rj + yVg(sj)- Vp(sj) 28: if &; > 0 then 29: Vaj + aj - e(Sj) 30: Vaj BoundActionGradient(Vaj, e(sj) 31: 00+aVee(sj)-1Vaj 32: end if 33: end for 34: end while\nAlgorithm 2 Alternating Actuator Optimization"}, {"section_index": "12", "section_name": "MTU Actuator Optimizations", "section_text": "The actuator parameters can be interpreted as a parameterization of the dynamics of the systen. p(s' [s, a, ). The expected cumulative reward can then be re-parameterized according tc\nJ(e,V) = do(s) ne(s, a)A(s, a)da ds s\nwhere de(s|) = Ss T t=o Y'po(so)p(so > s|t, e, )dso. 0 and are then learned in tandem. following Algorithm[2] This alternating method optimizes both the control and dynamics in order. 
to maximize the expected value of the agent, as analogous to the role of evolution in biomechanics During each pass, the policy parameters 0 are trained to improve the agent's expected value for a. fixed set of actuator parameters . Next, is optimized using CMA to improve performance while. keeping 0 fixed. The expected value of each CMA sample of is estimated using the average. cumulative reward over multiple rollouts..\nFigure 5|illustrates the improvement in performance during the optimization process, as applied tc motions for three different agents. Figure 6|compares the learning curves for the initial and final MTU parameters, for the same three motions..\nBiped : Run. Raptor : Run Dog : Sim Bound 1 0.8 0.8 0.8 0.6 0.6 0.6 . 0.4 0.4 0.4 0.2 0.2 0.2 0 0 0 0 2 4 6 0 2 4 6 0 2 4 6 Passes\nFigure 5: Performance of intermediate MTU policies and actuator parameters per pass of actuator optimization following Algorithm2\nBiped : Run Raptor : Run Dog : Bound (Sim) MTU Init 0.8 MTU Opt 0.8 0.8 (NCR) 0.6 0.6 0.6 aerrrp 0.4 0.4 0.4 0.2 0.2 0.2 0 0 0 0 2 4 8 10 0 2 4 6 8 10 0 2 4 6 8 10 Iterations x 105 x 105 x 105\nFigure 6: Learning curves comparing initial and optimized MTU parameters\nBiped : Run Raptor : Run. Dog : Sim Bound 7 0.8 0.8 0.8 0.6 0.6 0.6 0.4 0.4 0.4 0.2 0.2 0.2 0 0 09 0 2 4 6 0 2 4 6 0 2 4 6 Passes\nBiped : Run Raptor : Run Dog : Bound (Sim) 1 1 MTU Init 0.8 MTU Opt 0.8 0.8 r INC) 0.6 0.6 0.6 Reerrp 0.4 0.4 0.4 0.2 0.2 0.2 0 0 0 0 2 4. 6 8 10 0 2 4 6 8 10 0 2 4 6 8 10 Iterations x 105 x 105 X 105"}, {"section_index": "13", "section_name": "Bounded Action Space", "section_text": "Properties such as torque and neural activation limits result in bounds on the range of values that. can be assumed by actions for a particular parameterization. Improper enforcement of these bounds can lead to unstable learning as the gradient information outside the bounds may not be reliable. (Hausknecht & Stone2015). To ensure that all actions respect their bounds, we adopt a method. similar to the inverting gradients approach proposed by Hausknecht & Stone(2015). Let Va = (a - (s))A(s, a) be the empirical action gradient from the policy gradient estimate of a Gaussiar. policy. Given the lower and upper bounds [l', u'] of the ith action parameter, the bounded gradient. Of the ith action determined according to\nUnlike the inverting gradients approach, which scales all gradients depending on proximity to the. bounds, this method preserves the empirical gradients when bounds are respected, and alters the gradients only when bounds are violated.."}, {"section_index": "14", "section_name": "Reward", "section_text": "The terms of the reward function are defined as follows\nq and q* denotes the character pose and reference pose represented in reduced-coordinates, while q and q* are the respective joints velocities. W is a manually-specified per joint diagonal weighting matrix. hroot is the height of the root from the ground, and xcom is the center of mass velocity..\nli_'(s), '(s)<l and Va <0 ui _u / '(s) >u' and a' > 0 otherwise\nrpose = exp (-l|q* - q||lw) rvel = exp-l|q* q|lw rend = exp r*- x rroot = exp (-10(h*oot ) rcom = exp (-10|*om xcom!\nrpose = exp (-l|q* q||lW) rvel = exp(-||q* q||w) rend = exp = exp(-10(h*. rcom = exp (-10|*om.\n40 > H|x* - xe rend = exp e\nreference trajectory reference trajectory rollouts rollouts initial state initial state S S\nFigure 7: Left: fixed initial state biases agent to regions of the state space near the initial state. 
particularly during early iterations of training. Right: initial states sampled from the reference trajectory allow the agent to explore the state space more uniformly around the reference trajectory.

[Figure 8 diagram omitted: input s, hidden layers of 512 and 256 rectified-linear units, linear output a.]

Figure 8: Neural network architecture. Each policy is represented by a three-layered network, with 512 and 256 fully-connected hidden units, followed by a linear output layer. (A code sketch of this network is given at the end of this section.)

Parameter               Value    Description
γ                       0.9      cumulative reward discount factor
actor learning rate     0.001    step size for actor updates
critic learning rate    0.01     step size for critic updates
momentum                0.9      stochastic gradient descent momentum
weight decay (critic)   0        L2 regularizer for critic parameters
weight decay (actor)    0.0005   L2 regularizer for actor parameters
minibatch size          32       tuples per stochastic gradient descent step
replay memory size      500000   number of the most recent tuples stored for future updates

Table 2: Training hyperparameters

Character + Actuation Model   State Parameters   Action Parameters   Actuator Parameters
Biped + Tor                   58                 6                   0
Biped + Vel                   58                 6                   6
Biped + PD                    58                 6                   12
Biped + MTU                   74                 16                  114
Raptor + Tor                  154                18                  0
Raptor + Vel                  154                18                  18
Raptor + PD                   154                18                  36
Raptor + MTU                  194                40                  258
Dog + Tor                     170                20                  0
Dog + Vel                     170                20                  20
Dog + PD                      170                20                  40
Dog + MTU                     214                44                  282

Table 3: The number of state, action, and actuation model parameters for different characters and actuation models.

Table 4: Performance of policies trained for the various characters and actuation models. Performance is measured using the normalized cumulative reward (NCR) and learning speed is represented by the normalized area under each learning curve (AUC). The best performing parameterizations for each character and motion are in bold.

Character + Actuation   Motion         Performance (NCR)   Learning Speed (AUC)
Biped + Tor             Walk           0.7662 ± 0.3117     0.4788
Biped + Vel             Walk           0.9520 ± 0.0034     0.6308
Biped + PD              Walk           0.9524 ± 0.0034     0.6997
Biped + MTU             Walk           0.9584 ± 0.0065     0.7165
Biped + Tor             March          0.9353 ± 0.0072     0.7478
Biped + Vel             March          0.9784 ± 0.0018     0.9035
Biped + PD              March          0.9767 ± 0.0068     0.9136
Biped + MTU             March          0.9484 ± 0.0021     0.5587
Biped + Tor             Run            0.9032 ± 0.0102     0.6938
Biped + Vel             Run            0.9070 ± 0.0106     0.7301
Biped + PD              Run            0.9057 ± 0.0056     0.7880
Biped + MTU             Run            0.8988 ± 0.0094     0.5360
Raptor + Tor            Run (Sim)      0.7265 ± 0.0037     0.5061
Raptor + Vel            Run (Sim)      0.9612 ± 0.0055     0.8118
Raptor + PD             Run (Sim)      0.9863 ± 0.0017     0.9282
Raptor + MTU            Run (Sim)      0.9708 ± 0.0023     0.6330
Raptor + Tor            Run            0.6141 ± 0.0091     0.3814
Raptor + Vel            Run            0.8732 ± 0.0037     0.7008
Raptor + PD             Run            0.9548 ± 0.0010     0.8372
Raptor + MTU            Run            0.9533 ± 0.0015     0.7258
Dog + Tor               Bound (Sim)    0.7888 ± 0.0046     0.4895
Dog + Vel               Bound (Sim)    0.9788 ± 0.0044     0.7862
Dog + PD                Bound (Sim)    0.9797 ± 0.0012     0.9280
Dog + MTU               Bound (Sim)    0.9033 ± 0.0029     0.6825
Dog + Tor               Rear-Up        0.8151 ± 0.0113     0.5550
Dog + Vel               Rear-Up        0.7364 ± 0.2707     0.7454
Dog + PD                Rear-Up        0.9565 ± 0.0058     0.8701
Dog + MTU               Rear-Up        0.8744 ± 0.2566     0.7932

Sensitivity Analysis

We further analyze the sensitivity of the results to different initializations and design decisions. Figure 9 compares the learning curves from multiple policies trained using different random initializations of the networks. Four policies are trained for each actuation model. The results for a particular actuation model are similar across different runs, and the trends between the various actuation models also appear to be consistent.
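A minimal sketch of the network described in Figure 8, with the layer sizes from the paper; the framework choice and function name are my own.

```python
import torch.nn as nn

def build_policy_net(state_dim, action_dim):
    """Three-layer policy network of Figure 8: 512 and 256 fully-connected
    ReLU units followed by a linear output layer. The value network is
    identical except for a single linear output unit."""
    return nn.Sequential(
        nn.Linear(state_dim, 512), nn.ReLU(),
        nn.Linear(512, 256), nn.ReLU(),
        nn.Linear(256, action_dim),
    )
```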
To evaluate the sensitivity to the amount of exploratio noise applied during training, we trained policies where the standard deviation of the action distribu tion is twice and half of the default values. Figure[10jillustrates the learning curves for each policy Overall, the performance of the policies do not appear to change significantly for the particular rang of values. Finally, Figure [11compares the results using different network architectures. The net work variations include doubling the number of units in both hidden layers, halving the number c hidden units, and inserting an additional layer with 512 units between the two existing hidden layers The choice of network structure does not appear to have a noticeable impact on the results, and th differences between the actuation models appear to be consistent across the different networks.\nDog : Bound (Sim) 0.8 eeR eeereR 0.6 0.4 Tor Vel 0.2 PD MTU 0 0 2 4 6 8 10 Iterations X 105\nFigure 9: Learning curves from different random network initializations. Four policies are trained for each actuation model.\nDog : Bound (Sim) Dog : Bound (Sim) 0.8 0.8 Reee) eeerrR 0.6 0.6 0.4 0.4 Tor PD 0.2 0.2 Tor 2x exp PD 2x Exp Tor 1/2x exp PD 1/2x Exp 0 0 0 2 4 6 8 10 0 2 4 6 8 10 Iterations x 105 x 105\nFigure 10: Learning curves comparing the effects of scaling the standard deviation of the actio distribution by 1x, 2x, and 1/2x\nDog : Bound (Sim) Dog : Bound (Sim) 1 0.8 0.8 Raee) eeeree 0.6 0.6 0.4 0.4 Tor PD Tor 2x units 0.2 PD 2x units 0.2 Tor 1/2x units PD 1/2x units Tor +1 layer PD +1 layer 0 0 0 2 4 6 8 10 0 2 4 6 8 10 Iterations X 105 X 105\nFigure 11: Learning curves for different network architectures. The network structures include. doubling the number of units in each hidden layer, halving the number of units, and inserting an additional hidden layer with 512 units between the two existing hidden layers..\nDog : Bound (Sim) Biped : March. Dog : Bound (Sim) 0.8 0.8 0.8 (NCR) (NC) 0.6 0.6 0.6 0.4 Tor 0.4 Tor 0.4 - Vel Vel r 0.2 o-PD r 0.2 -PD 0.2 MTU MTU 0 0 0 0 0.02 0.04 0.06 0 0.2 0.4 0.6 0 0.2 0.4 0.6 Max Bump Height (m) Max Steepness (m)\nDog : Bound (Sim) Biped : March Dog : Bound (Sim) 0.8 0.8 0.8q (NCR) Reee) eereR 0.6 0.6 0.6 eerrrd 0.4 - Tor 0.4 Tor 0.4 Vel Vel 0.2 oPD 0.2 PD 0.2 MTU MTU 0 0 0 0.02 0.04 0.06 0 0.2 0.4 0.6 0 0.2 0.4 0.6 Max Bump Height (m) Max Steepness (m)\n-\nFigure 13: Simulated Motions Using the PD Action Representation. The top row uses an MTU action space while the remainder are driven by a PD action space.\nFigure 12: Performance of different action parameterizations when traveling across randomly gen. erated irregular terrain. (left) Dog running across bumpy terrain, where the height of each bump varies uniformly between O and a specified maximum height. (middle) and (right) biped and dog. traveling across randomly generated slopes with bounded maximum steepness..\nFigure 14: Policy actions over time and the resulting torques for the four action types. Data is from. one biped walk cycle (1s). Left: Actions (60 Hz), for the right hip for PD, Vel, and Tor, and the right gluteal muscle for MTU. 
Right: Torques applied to the right hip joint, sampled at 600 Hz.

[Figure 15 plot data omitted: learning curves (NCR vs. iterations x10^5) for PD policies using state + target state, state + phase, and state only, on Biped: Walk and Dog: Bound (Sim).]

Figure 15: Learning curves for different state representations, including state + target state, state + phase, and only state.

[Figure 14 plot data omitted: per-phase actions and resulting joint torques for Tor, Vel, PD, and MTU over one motion cycle.]
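Pulling together the reward weights of Section 3.4 with the term definitions listed in the supplemental material above, a sketch of the full imitation reward follows. The term equations in the source are partially garbled, so the scale factors here are best-effort readings rather than verified constants.

```python
import numpy as np

WEIGHTS = dict(pose=0.5, vel=0.05, end=0.15, root=0.1, com=0.2)

def imitation_reward(q, dq, q_ref, dq_ref, x_end, x_end_ref,
                     h_root, h_root_ref, v_com, v_com_ref, w_joint):
    """Weighted-sum imitation reward: each term is an exponential of a
    tracking error between the simulated character and the reference."""
    r_pose = np.exp(-np.sum(w_joint * (q_ref - q) ** 2))
    r_vel  = np.exp(-np.sum(w_joint * (dq_ref - dq) ** 2))
    r_end  = np.exp(-40.0 * np.sum((x_end_ref - x_end) ** 2))
    r_root = np.exp(-10.0 * (h_root_ref - h_root) ** 2)
    r_com  = np.exp(-10.0 * np.sum((v_com_ref - v_com) ** 2))
    return (WEIGHTS['pose'] * r_pose + WEIGHTS['vel'] * r_vel +
            WEIGHTS['end'] * r_end + WEIGHTS['root'] * r_root +
            WEIGHTS['com'] * r_com)
```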
ryT4pvqll | [{"section_index": "0", "section_name": "IMPROVING POLICY GRADIENT BY EXPLORING UNDER-APPRECIATED REWARDS", "section_text": "Ofir Nachum*. Mohammad Norouzi, Dale Schuurmans\n{ofirnachum, mnorouzi, schuurmans}@google.com\nThis paper presents a novel form of policy gradient for model-free reinforce ment learning (RL) with improved exploration properties. Current policy-based methods use entropy regularization to encourage undirected exploration of the reward landscape, which is ineffective in high dimensional spaces with sparse rewards. We propose a more directed exploration strategy that promotes explo- ration of under-appreciated reward regions. An action sequence is considered under-appreciated if its log-probability under the current policy under-estimates its resulting reward. The proposed exploration strategy is easy to implement, requir- ing small modifications to the REINFORCE algorithm. We evaluate the approach on a set of algorithmic tasks that have long challenged RL methods. Our approach reduces hyper-parameter sensitivity and demonstrates significant improvements over baseline methods. The proposed algorithm successfully solves a benchmark multi-digit addition task and generalizes to long sequences, which, to our knowl- edge, is the first time that a pure RL method has solved addition using only reward feedback."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Humans can reason about symbolic objects and solve algorithmic problems. After learning to coun. and then manipulate numbers via simple arithmetic, people eventually learn to invent new algorithm. and even reason about their correctness and efficiency. The ability to invent new algorithms is funda. mental to artificial intelligence (AI). Although symbolic reasoning has a long history in AI (Russel. et al.|2003), only recently have statistical machine learning and neural network approaches begur. to make headway in automated algorithm discovery (Reed & de Freitas|2016f Kaiser & Sutskever. 2016f Neelakantan et al.|2016), which would constitute an important milestone on the path to AI. Nevertheless, most of the recent successes depend on the use of strong supervision to learn a map. ping from a set of training inputs to outputs by maximizing a conditional log-likelihood, very mucl like neural machine translation systems (Sutskever et al.]2014]Bahdanau et al.]2015). Such a de pendence on strong supervision is a significant limitation that does not match the ability of peopl. to invent new algorithmic procedures based solely on trial and error..\nBy contrast, reinforcement learning (RL) methods (Sutton & Barto1998) hold the promise of searching over discrete objects such as symbolic representations of algorithms by considering much weaker feedback in the form of a simple verifier that tests the correctness of a program execution on a given problem instance. Despite the recent excitement around the use of RL to tackle Atari games (Mnih et al.2015) and Go (Silver et al.]2016), standard RL methods are not yet able to consistently and reliably solve algorithmic tasks in all but the simplest cases (Zaremba & Sutskever 2014). A key property of algorithmic problems that makes them challenging for RL is reward spar- sity. i.e.. a policy usually has to get a long action sequence exactly right to obtain a non-zero reward"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We believe one of the key factors limiting the effectiveness of current RL methods in a sparse rewar. 
setting is the use of undirected exploration strategies (Thrun1992), such as e-greedy and entropy. regularization (Williams & Peng1991). For long action sequences with delayed sparse reward, it i hopeless to explore the space uniformly and blindly. Instead, we propose a formulation to encourage. exploration of action sequences that are under-appreciated by the current policy. Our formulatior considers an action sequence to be under-appreciated if the model's log-probability assigned to ar action sequence under-estimates the resulting reward from the action sequence. Exploring under appreciated states and actions encourages the policy to have a better calibration between its log. probabilities and observed reward values, even for action sequences with negligible rewards. Thi. effectively increases exploration around neglected action sequences..\nWe term our proposed technique under-appreciated reward exploration (UREX). We show that the objective given by UREX is a combination of a mode seeking objective (standard REINFORCE) and a mean seeking term, which provides a well motivated trade-off between exploitation and ex- ploration. To empirically evaluate our method, we take a set of algorithmic tasks such as sequence reversal, multi-digit addition, and binary search. We choose to focus on these tasks because, al- though simple, they present a difficult sparse reward setting which has limited the success of stan- dard RL approaches. The experiments demonstrate that UREX significantly outperforms baseline RL methods, such as entropy regularized REINFORCE and one-step Q-learning, especially on the more difficult tasks, such as multi-digit addition. Moreover, UREX is shown to be more robust to changes of hyper-parameters, which makes hyper-parameter tuning less tedious in practice. In ad- dition to introducing a new variant of policy gradient with improved performance, our paper is the first to demonstrate strong results for an RL method on algorithmic tasks. To our knowledge, the addition task has not been solved by any model-free reinforcement learning approach. We observe that some of the policies learned by UREX can successfully generalize to long sequences; e.g., in 2 out of 5 random restarts, the policy learned by UREX for the addition task correctly generalizes to addition of numbers with 2000 digits with no mistakes, even though training sequences are at most 33 digits long.\nAlthough research on using neural networks to learn algorithms has witnessed a surge of recent. interest, the problem of program induction from examples has a long history in many fields, includ- ing program induction, inductive logic programming (Lavrac & Dzeroski] 1994), relational learn-. ing (Kemp et al.]2007) and regular language learning (Angulin[1987). Rather than presenting a comprehensive survey of program induction here, we focus on neural network approaches to algo rithmic tasks and highlight the relative simplicity of our neural network architecture..\nMost successful applications of neural networks to algorithmic tasks rely on strong supervision. where the inputs and target outputs are completely known a priori. Given a dataset of examples, one. learns the network parameters by maximizing the conditional likelihood of the outputs via back. propagation (e.g., Reed & de Freitas(2016);Kaiser & Sutskever(2016); Vinyals et al.(2015)) However, target outputs may not be available for novel tasks, for which no prior algorithm is known. to be available. 
A more desirable approach to inducing algorithms, followed in this paper, advocates. using self-driven learning strategies that only receive reinforcement based on the outputs produced. Hence, just by having access to a verifier for an algorithmic problem, one can aim to learn an algo. rithm. For example, if one does not know how to sort an array, but can check the extent to which ar. array is sorted, then one can provide the reward signal necessary for learning sorting algorithms\nWe formulate learning algorithms as an RL problem and make use of model-free policy gradien. methods to optimize a set parameters associated with the algorithm. In this setting, the goal is tc. learn a policy e that given an observed state st at step t, estimates a distribution over the next action. at, denoted e(at st). Actions represent the commands within the algorithm and states represent. the joint state of the algorithm and the environment. Previous work in this area has focused on aug. menting a neural network with additional structure and increased capabilities (Zaremba & Sutskever. 2015,[Graves et al.[[2016). In contrast, we utilize a simple architecture based on a standard recurrent. neural network (RNN) with LSTM cells (Hochreiter & Schmidhuber|1997) as depicted in Figure|1 At each episode, the environment is initialized with a latent state h, unknown to the agent, which. determines s1 and the subsequent state transition and reward functions. Once the agent observes s1\ndt a a a a 7 2 3 t-1 LSTM LSTM LSTM LSTM s S h Environment\nFigure 1: The agent's RNN architecture that represents a policy. The environment is initialized witl. a latent vector h. At time step t, the environment produces a state s, and the agent takes as input s. and the previously sampled action at-1 and produces a distribution over the next action e(atst) Then, we sample a new action a and apply it to the environment..\nas the input to the RNN, the network outputs a distribution e(a1 s1), from which an action a1 is sampled. This action is applied to the environment, and the agent receives a new state observatior S2. The state s2 and the previous action a1 are then fed into the RNN and the process repeats until the end of the episode. Upon termination, a reward signal is received.\nThe goal is to learn a policy e that, given an observed state st at step t, estimates a distribution over. the next action at, denoted (at |st). The environment is initialized with a latent vector, h, which. determines the initial observed state s1 = g(h), and the transition function St+1 = f(st, at h).. Note that the use of nondeterministic transitions f as in Markov decision processes (MDP) may. be recovered by assuming that h includes the random seed for the any nondeterministic functions Given a latent state h, and S1:T = (s1, ..., ST), the model probability of an action sequence a1:T = (a1, . .., aT) is expressed as,\nT te(a1:Th) = ne(atst), where S1=g(h), St+1=f(st,ath) for 1<t<T t=1\nThe environment provides a reward at the end of the episode, denoted r(a1:T h). For ease c readability we drop the subscript from a1:T and simply write e(a h) and r(a h)\nBecause the space of possible actions A is large, enumerating over all of the actions to compute this. gradient is infeasible. Williams (1992) proposed to compute the stochastic gradient of the expected\nThe objective used to optimize the policy parameters, 0, consists of maximizing expected reward under actions drawn from the policy, plus an optional maximum entropy regularizer. 
Given a dis- tribution over initial latent environment states p(h). we express the regularized expected reward as\nORL(0;T) = En~p(h) e(a|h) r(a|h) - tlogre(a|h aEA\nd d RL0;Th)=) e(ah log re(a | h) [r(a | h) - tlogne(a | h) - d0 d0 aEA\nreward by using Monte Carlo samples. Using Monte Carlo samples, one first draws N i.i.d. samples from the latent environment states {h(n) 1 N 1. e(a | h(n) to approximate the gradient of (I) by using (2) as,\nUnfortunately, directly maximizing expected reward (i.e., when t = O) is prone to getting trapped in a local optimum. To combat this tendency, Williams & Peng (1991) augmented the expected reward objective by including a maximum entropy regularizer ( > O) to promote greater exploration. We will refer to this variant of REINFORCE as MENT (maximum entropy exploration)."}, {"section_index": "3", "section_name": "UNDER-APPRECIATED REWARD EXPLORATION (UREX", "section_text": "To explain our novel form of policy gradient, we first note that the optimal policy *, which globally maximizes ORL(0; T | h) in (1) for any t > 0, can be expressed as,\nOrL(0;h) =- DkL (e(h) *(h))\nThe KL divergence DkL (e || *) is known to be mode seeking (Murphy2012 Section 21.2.2 even with entropy regularization (r > 0). Learning a policy by optimizing this direction of the KL is prone to falling into a local optimum resulting in a sub-optimal policy that omits some of the modes of *. Although entropy regularization helps mitigate the issues as confirmed in our experiments, i is not an effective exploration strategy as it is undirected and requires a small regularization coeffi cient t to avoid too much random exploration. Instead, we propose a directed exploration strategy that improves the mean seeking behavior of policy gradient in a principled way.\nWe start by considering the alternate mean seeking direction of the KL divergence, DkL (* || e) Norouzi et al.(2016) considered this direction of the KL to directly learn a policy by optimizing\nORAML(0;T) = Eh~p(h) *(a h) log ne(a h) aEA\nORAmL(0;h) = -r DkL (*(h) e(h)) + cOnst\nNorouzi et al.(2016) argue that in some structured prediction problems when one can draw samples from *, optimizing (7) is more effective than (1), since no sampling from a non-stationary policy is required. If is a log-linear model of a set of features, ORAmL is convex in 0 whereas ORL is not, even in the log-linear case. Unfortunately, in scenarios that the reward landscape is unknown or computing the normalization constant Z(h) is intractable. sampling from * is not straightforward\nIn RL problems, the reward landscape is completely unknown, hence sampling from * is in. tractable. This paper proposes to approximate the expectation with respect to * by using self-. normalized importance sampling (Owen2013), where the proposal distribution is e and the ref- erence distribution is n*. For importance sampling, one draws K i.i.d. samples {a(k)}K-, from.\nN K d 1 d (a(k) | h(n)) -tloge(ak ogea d0 N K d0 n=1 k=J\nr(a(k) h) =r(a(k) [h) -b(h)\n1 r*(a|h) = exp Z(h)\nK w(a(k) |h) ORAML(0;T|h) ~T loge(a(k) a(m) k=1\n(a(k) |h) = exp W- R h) - log ne(a(k\nIn practice, we have found that just using the importance sampling RAML objective in (9) does not always yield promising solutions. Particularly, at the beginning of training, when e is still far away from *, the variance of importance weights is too large, and the self-normalized im- portance sampling procedure results in poor approximations. 
To stabilize early phases of training and ensure that the model distribution e achieves large expected reward scores, we combine the expected reward and RAML objectives to benefit from the best of their mode and mean seeking behaviors. Accordingly, we propose the following objective that we call under-appreciated reward exploration (UREX).\nOUREX(0;T) = Eh~p(h) 7e(a h) r(ah) + t *(a h) loge(a\nwhich is the sum of the expected reward and RAML objectives. In our preliminary experiments, we considered a composite objective of ORL + ORAML, but we found that removing the entropy term is beneficial. Hence, the Ourex objective does not include entropy regularization. Accordingly, the optimum policy for Ourex is no longer *, as it was for OrL and OrAmL. Appendix |A|derives the optimal policy for Ourex as a function of the optimal policy for OrL. We find that the optimal policy of UREX is more sharply concentrated on the high reward regions of the action space, which may be an adyantage for UREX. but we leave more analysis of this behavior to future work"}, {"section_index": "4", "section_name": "5 RELATED WORK", "section_text": "Before presenting the experimental results, we briefly review some pieces of previous work th closely relate to the UREX approach.\nReward-Weighted Regression. Both RAML and UREX objectives bear some similarity to a. method in continuous control known as Reward-Weighted Regression (RWR) (Peters & Schaal 2007Wierstra et a1. 2008). Using our notation, the RWR objective is expressed as,.\nORwR(0;h) log *(ah)e(a | h) aEA *(a|h)e(a|h) q(a|h) log > q(a|h) aEA\nOne can view these importance weights as evaluating the discrepancy between scaled rewards r/T and the policy's log-probabilities log e. Among the K samples, a sample that is least appreciated by the model, i.e., has the largest r /- - log e, receives the largest positive feedback in (9)..\nEx(0;r) = Eh~p(h) ne(a h) r(a h) + t n*(a | h) logne(a | h\nK N d de N d0 2 n=1k=1\nTo optimize the RWR objective, one formulates the gradient as\nd 2RwR(0;r|h) = log ne(a h) d0 d0 aEA\nK d 1 n d ORwR(0;T |h) d0 K ) de k=1\nwhere u(a(k) | h) = exp{r(a(k) | h)}. There is some similarity between (16) and (9) in that. they both use self-normalized importance sampling, but note the critical difference that (16) and (9) estimate the gradients of two different objectives, and hence the importance weights in (16) do not correct for the sampling distribution e(a|h) as opposed to (9)..\nBeyond important technical differences, the optimal policy of Orwr is a one hot distribution with all probability mass concentrated on an action sequence with maximal reward, whereas the optimal policies for RAML and UREX are everywhere nonzero, with the probability of different action sequences being assigned proportionally to their exponentiated reward (with UREX introducing an additional re-scaling; see Appendix A). Further, the notion of under-appreciated reward exploration evident in OuREx, which is key to UREX's performance, is missing in the RWR formulation.\nExploration. The RL literature contains many different attempts at incorporating exploration thai. may be compared with our method. The most common exploration strategy considered in value based RL is e-greedy Q-learning, where at each step the agent either takes the best action according. to its current value approximation or with probability e takes an action sampled uniformly at random.. Like entropy regularization, such an approach applies undirected exploration, but it has achieved. 
recent success in game playing environments (Mnih et al.]2013] Van Hasselt et al.]2016] Mnih. et al.2016).\nProminent approaches to improving exploration beyond e-greedy in value-based or model-basec. RL have focused on reducing uncertainty by prioritizing exploration toward states and actions. where the agent knows the least. This basic intuition underlies work on counter and recency meth ods (Thrun1992), exploration methods based on uncertainty estimates of values (Kaelbling]1993 Tokic2010), methods that prioritize learning environment dynamics (Kearns & Singh2002|Stadie et al.|2015), and methods that provide an intrinsic motivation or curiosity bonus for exploring un known states (Schmidhuber2006Bellemare et al.]2016).\nIn contrast to value-based methods, exploration for policy-based RL methods is often a by-produci of the optimization algorithm itself. Since algorithms like REINFORCE and Thompson sampling. choose actions according to a stochastic policy, sub-optimal actions are chosen with some non-zero probability. The Q-learning algorithm may also be modified to sample an action from the softmax. of the Q values rather than the argmax (Sutton & Barto][1998).\nAsynchronous training has also been reported to have an exploration effect on both value- and policy-based methods. Mnih et al.(2016) report that asynchronous training can stabilize training by reducing the bias experienced by a single trainer. By using multiple separate trainers, an agent is less likely to become trapped at a policy found to be locally optimal only due to local conditions. In the same spirit, Osband et al.[(2016) use multiple Q value approximators and sample only one to act for each episode as a way to implicitly incorporate exploration..\nBy relating the concepts of value and policy in RL, the exploration strategy we propose tries to bridge the discrepancy between the two. In particular, UREX can be viewed as a hybrid combination of value-based and policy-based exploration strategies that attempts to capture the benefits of each\nTo optimize Orwr,Peters & Schaal (2007) propose a technique inspired by the EM algorithm to. maximize a variational lower bound in (14) based on a variational distribution q(a h). The RwR. objective can be interpreted as a log of the correlation between r* and e. By contrast, the RAML and UREX objectives are both based on a KL divergence between * and e..\nBornschein & Bengio (2014) apply the same trick to optimize the log-likelihood of latent variable models\nPer-step Reward. Finally, while we restrict ourselves to episodic settings where a reward is as. sociated with an entire episode of states and actions, much work has been done to take advan tage of environments that provide per-step rewards. These include policy-based methods such as. actor-critic (Mnih et al.]2016] Schulman et al.]2016) and value-based approaches based on Q learning (Van Hasselt et al.]2016Schaul et al.]2016). Some of these value-based methods have. proposed a softening of Q-values which can be interpreted as adding a form of maximum-entropy. regularizer (Asadi & Littman][2016]|Azar et al.[2012)|Fox et al.]2016]Ziebart2010). The episodic total-reward setting that we consider is naturally harder since the credit assignment to individua actions within an episode is unclear..\nThe OpenAI Gym provides an additional harder task called ReversedAddition3, which involves. adding three numbers. 
We omit this task, since none of the methods make much progress on it.\nFor these tasks, the input sequences encountered during training range from a length of 2 to 3. haracters. A reward of 1 is given for each correct emission. On an incorrect emission, a smal. enalty of -0.5 is incurred and the episode is terminated. The agent is also terminated and penalize with a reward of -1 if the episode exceeds a certain number of steps. For the experiments usin. UREX and MENT, we associate an episodic sequence of actions with the total reward, defined a. the sum of the per-step rewards. The experiments using Q-learning, on the other hand, used th. er-step rewards. Each of the Gym tasks has a success threshold, which determines the require. average reward over 100 episodes for the agent to be considered successful..\nWe also conduct experiments on an additional algorithmic task described below.\nWe also conduct experiments on an additional algorithmic task described below:. 6. BinarySearch: Given an integer n, the environment has a hidden array of n distinct numbers stored in ascending order. The environment also has a query number x unknown to the agent that is contained somewhere in the array. The goal of the agent is to find the query number in. the array in a small number of actions. The environment has three integer registers initialized at. (n, 0, O). At each step, the agent can interact with the environment via the four following actions:. : INC(i): increment the value of the register i for i E {1, 2, 3}.. . DIV(i): divide the value of the register i by 2 for i E {1, 2, 3}.. . AVG(i): replace the value of the register i with the average of the two other registers.. CMP(i): compare the value of the register i with x and receive a signal indicating which. value is greater. The agent succeeds when it calls CMP on an array cell holding the value x..\nWe assess the effectiveness of the proposed approach on five algorithmic tasks from the OpenAI Gym (Brockman et al.|2016), as well as a new binary search problem. Each task is summarized below, with further details available on the Gym websitelor in the corresponding open-source code|3 In each case, the environment has a hidden tape and a hidden sequence. The agent observes the sequence via a pointer to a single character, which can be moved by a set of pointer control actions. Thus an action at is represented as a tuple (m, w, o) where m denotes how to move, w is a boolean denoting whether to write. and o is the output symbol to write.\n. Copy: The agent should emit a copy of the sequence. The pointer actions are move left and right. 2. DuplicatedInput: In the hidden tape, each character is repeated twice. The agent must dedupli. cate the sequence and emit every other character. The pointer actions are move left and right.. 3. RepeatCopy: The agent should emit the hidden sequence once, then emit the sequence in the reverse order, then emit the original sequence again. The pointer actions are move left and right. 4. Reverse: The agent should emit the hidden sequence in the reverse order. As before, the pointe actions are move left and right.. 5. ReversedAddition: The hidden tape is a 2 n grid of digits representing two numbers in base. 3 in little-endian order. The agent must emit the sum of the two numbers, in little-endian order. The allowed pointer actions are move left, right, up, or down..\nWe set the maximum number of steps to 2n+1 to allow the agent to perform a full linear search. 
A policy performing full linear search achieves an average reward of 5, because x is chosen uniformly. at random from the elements of the array. A policy employing binary search can find the number x in at most 2 log2 n + 1 steps. If n is selected uniformly at random from the range 32 n 512 binary search yields an optimal average reward above 9.55. We set the success threshold for this. task to an average reward of 9."}, {"section_index": "5", "section_name": "7.1 ROBUSTNESS TO HYPER-PARAMETERS", "section_text": "Hyper-parameter tuning is often tedious for RL algorithms. We found that the proposed UREX method significantly improves robustness to changes in hyper-parameters when compared to MENT For our experiments, we perform a careful grid search over a set of hyper-parameters for both MENT and UREX. For any hyper-parameter setting, we run the MENT and UREX methods 5 times with different random restarts. We explore the following main hyper-parameters:\nIn all of the experiments, both MENT and UREX are treated exactly the same. In fact, the change of implementation is just a few lines of code. Given a value of r, for each task, we run 60 training jobs comprising 3 learning rates, 4 clipping values, and 5 random restarts. We run each algorithr for a maximum number of steps determined based on the difficulty of the task. The training jobs for Copy, DuplicatedInput, RepeatCopy, Reverse, ReversedAddition, and BinarySearch are run for 2K 500, 50K, 5K, 50K, and 2K stochastic gradient steps, respectively. We find that running a trainer job longer does not result in a better performance. Our policy network comprises a single LSTM layer with 128 nodes. We use the Adam optimizer (Kingma & Ba]2015) for the experiments.\nTable[1shows the percentage of 60 trials on different hyper-parameters (n, c) and random restarts which successfully solve each of the algorithmic tasks. It is clear that UREX is more robust thar\nTable 1: Each cell shows the percentage of 60 trials with different hyper-parameters (n, c) and random restarts that successfully solve an algorithmic task. UREX is more robust to hyper-parameter changes than MENT. We evaluate MENT with a few temperatures and UREX with =0.1.\nREINFORCE7 MENT UREX T = 0.0 T = 0.005 T = 0.01 T = 0.1 T = 0.1 Copy 85.0 88.3 90.0 3.3 75.0 DuplicatedInput 68.3 73.3 73.3 0.0 100.0 RepeatCopy 0.0 0.0 11.6 0.0 18.3 Reverse 0.0 0.0 3.3 10.0 16.6 ReversedAddition 0.0 0.0 1.6 0.0 30.0 BinarySearch 0.0 0.0 1.6 0.0 20.0\nThe agent is terminated when the number of steps exceeds a maximum threshold of 2n+1 steps and recieves a reward of 0. If the agent finds x at step t, it recieves a reward of 10(1 -t/(2n+1))\nWe compare our policy gradient method using under-appreciated reward exploration (UREX) against two main RL baselines: (1) REINFORCE with entropy regularization termed MENT (Williams & Peng 1991), where the value of t determines the degree of regularization. When 7 = 0, standard REINFORCE is obtained. (2) one-step double Q-learning based on bootstrapping one step future rewards.\nThe learning rate denoted n chosen from a set of 3 possible values n E {0.1, 0.01, 0.001} The maximum L2 norm of the gradients, beyond which the gradients are clipped. This parame- ter, denoted c, matters for training RNNs. The value of c is selected from c E {1, 10, 40, 100} The temperature parameter t that controls the degree of exploration for both MENT and UREX For MENT, we use E {0,0.005,0.01,0.1}. 
For UREX, we only consider = 0.1, which consistently performs well across the tasks.\nMENT to changes in hyper-parameters, even though we only report the results of UREX for a single temperature. See AppendixB|for more detailed tables on hyper-parameter robustness."}, {"section_index": "6", "section_name": "7.2 RESULTS", "section_text": "Copy DuplicatedInput RepeatCopy 35 16 100 30 14 80 12 25 10 60 20 8 15 6 40 10 4 20 5 2 0 0 0 0 125 250 375 500 0 125 250 375 500 0 25000 50000 Reverse ReversedAddition BinarySearch 35 35 10 30 30 8 25 25 6 20 20 15 15 4 10 10 2 5 5 0 0 0 0 2500 5000 0 25000 50000 0 1000 2000\nFigure 2: Average reward during training for MENT (green) and UREX (blue). We find the best hyper-parameters for each method, and run each algorithm 5 times with random restarts. The curves present the average reward as well as the single standard deviation region clipped at the min and max."}, {"section_index": "7", "section_name": "7.3 GENERALIZATION TO LONGER SEOUENCES", "section_text": "To confirm whether our method is able to find the correct algorithm for multi-digit addition, we investigate its generalization to longer input sequences than provided during training. We evaluate. the trained models on inputs up to a length of 2000 digits, even though training sequences were at. most 33 characters. For each length, we test the model on 100 randomly generated inputs, stopping. when the accuracy falls below 100%. Out of the 60 models trained on addition with UREX, we find that 5 models generalize to numbers up to 2000 digits without any observed mistakes. On the best UREX hyper-parameters, 2 out of the 5 random restarts are able to generalize successfully For more detailed results on the generalization performance on 3 different tasks including Copy.\nTable 2 presents the number of successful attempts (out of 5 random restarts) and the expected. reward values (averaged over 5 trials) for each RL algorithm given the best hyper-parameters. One- step Q-learning results are also included in the table. We also present the training curves for MENT. and UREX in Figure2 It is clear that UREX outperforms the baselines on these tasks. On the. more difficult tasks, such as Reverse and ReverseAddition, UREX is able to consistently find an ap-. propriate algorithm, but MENT and Q-learning fall behind. Importantly, for the BinarySearch task. which exhibits many local maxima and necessitates smart exploration, UREX is the only method. that can solve it consistently. The Q-learning baseline solves some of the simple tasks, but it makes little headway on the harder tasks. We believe that entropy regularization for policy gradient and e-. greedy for Q-learning are relatively weak exploration strategies in long episodic tasks with delayed. rewards. On such tasks, one random exploratory step in the wrong direction can take the agent off. the optimal policy, hampering its ability to learn. In contrast, UREX provides a form of adaptive and smart exploration. In fact, we observe that the variance of the importance weights decreases as the. agent approaches the optimal policy, effectively reducing exploration when it is no longer necessary;. see AppendixE\nNum. 
of successful attempts out of 5 Expected reward Q-learning MENT UREX Q-learning MENT UREX Copy 5 5 5 31.2 31.2 31.2 DuplicatedInput 5 5 5 15.4 15.4 15.4 RepeatCopy 1 3 4 39.3 69.2 81.1 Reverse 0 2 4 4.4 21.9 27.2 ReversedAddition 0 1 5 1.1 8.7 30.2 BinarySearch 0 1 4 5.2 8.6 9.1\nDuplicatedInput, and ReversedAddition, see Appendix C During these evaluations, we take the. action with largest probability from e(a h) at each time step rather than sampling randomly\nWe also looked into the generalization of the models trained on the BinarySearch task. We founc that none of the agents perform proper binary search. Rather, those that solved the task perform hybrid of binary and linear search: first actions follow a binary search pattern, but then the agen switches to a linear search procedure once it narrows down the search space; see Appendix D|fo some execution traces for BinarySearch and ReversedAddition. Thus, on longer input sequences the agent's running time complexity approaches linear rather than logarithmic. We hope that future work will make more progress on this task. This task is especially interesting because the reward signal should incorporate both correctness and efficiency of the algorithm."}, {"section_index": "8", "section_name": "7.4 IMPLEMENTATION DETAILS", "section_text": "In all of the experiments, we make use of curriculum learning. The environment begins by only. providing small inputs and moves on to longer sequences once the agent achieves close to maximal. reward over a number of steps. For policy gradient methods including MENT and UREX, we only. provide the agent with a reward at the end of the episode, and there is no notion of intermediate. reward. For the value-based baseline, we implement one-step Q-learning as described in Mnih. et al.(2016)-Alg. 1, employing double Q-learning with e-greedy exploration. We use the same. RNN in our policy-based approaches to estimate the Q values. A grid search over exploration rate,. exploration rate decay, learning rate, and sync frequency (between online and target network) is conducted to find the best hyper-parameters. Unlike our other methods, the Q-learning baseline. uses intermediate rewards, as given by the OpenAI Gym on a per-step basis. Hence, the Q-learning. baseline has a slight advantage over the policy gradient methods..\nIn all of the tasks except Copy, our stochastic optimizer uses mini-batches comprising 400 policy samples from the model. These 400 samples correspond to 40 different random sequences drawn from the environment, and 10 random policy trajectories per sequence. In other words, we set K = 10 and N = 40 as defined in (3) and (12). For MENT, we use the 10 samples to subtract the mean of the coefficient of d log e(a| h) which includes the contribution of the reward and entropy regularization. For UREX, we use the 10 trajectories to subtract the mean reward and normalize the importance sampling weights. We do not subtract the mean of the normalized importance weights For the Copy task, we use mini-batches with 200 samples using K = 10 and N = 20. Experiments are conducted using Tensorflow (Abadi et al.|2016).\nTable 2: Results on several algorithmic tasks comparing Q-learning and policy gradient based on. MENT and UREX. We find the best hyper-parameters for each method, and run each algorithm 5. times with random restarts. Number of successful attempts (out of 5) that achieve a reward threshold is reported. 
Expected reward computed over the last few iterations of training is also reported..\nWe present a variant of policy gradient, called UREX, which promotes the exploration of action. sequences that yield rewards larger than what the model expects. This exploration strategy is the. result of importance sampling from the optimal policy. Our experimental results demonstrate that UREX significantly outperforms other value and policy based methods, while being more robust\nto changes of hyper-parameters. By using UREX, we can solve algorithmic tasks like multi-digit addition from only episodic reward, which other methods cannot reliably solve even given the best hyper-parameters. We introduce a new algorithmic task based on binary search to advocate more research in this area, especially when the computational complexity of the solution is also of interest Solving these tasks is not only important for developing more human-like intelligence in learning algorithms, but also important for generic reinforcement learning, where smart and efficient explo ration is the key to successful methods.\nWe thank Sergey Levine, Irwan Bello, Corey Lynch, George Tucker, Kelvin Xu, Volodymyr Mnih and the Google Brain team for insightful comments and discussions.."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large scale machine learning. arXiv:1605.08695, 2016.\nKavosh Asadi and Michael L Littman. A new softmax operator for reinforcement learning. arXi preprint arXiv:1612.05628, 2016.\nMohammad Gheshlaghi Azar, Vicenc Gomez, and Hilbert J Kappen. Dynamic policy programming Journal of Machine Learning Research, 13(Nov):3207-3245, 2012.\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl learning to align and translate. ICLR, 2015.\nMarc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. NIPs. 2016\nJorg Bornschein and Yoshua Bengio. Reweighted wake-sleep. arXiv:1406.2751, 2014\nGreg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016.\nGene Golub. Some modified matrix eigenvalue problems. SIAM Review, 1987.\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Comput., 1997\nLeslie Pack Kaelbling. Learning in embedded systems. MIT press, 1993.\nLukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. ICLR, 2016.\nMichael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Ma chine Learning, 2002.\nDana Angulin. Learning regular sets form queries and counterexamples. Information and Compu tation, 1987.\nCharles Kemp, Noah Goodman, and Joshua Tenebaum. Learning and using relational theories NIPS, 2007.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015\nVolodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tin Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. ICML, 2016.\nKevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012\nArvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. ICLR, 2016\nArt B. Owen. Monte Carlo theory. 
methods and examples. 2013\nScott E. Reed and Nando de Freitas. Neural pro rammer-interpreters. ICLR, 2016\nJurgen Schmidhuber. Optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 2006.\nJohn Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. Higl dimensional continuous control using generalized advantage estimation. ICLR, 2016.\nDavid Silver, Aja Huang, et al. Mastering the game of Go with deep neural networks and tree search Nature, 2016.\nIlya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks NIPS, 2014.\nRichard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998\nSebastian B Thrun. Efficient exploration in reinforcement learning. Technical report, 1992\nMichel Tokic. Adaptive e-greedy exploration in reinforcement learning based on value differences AAAI, 2010.\nHado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double q learning. AAAI, 2016.\nStuart Jonathan Russell, Peter Norvig, John F Canny, Jitendra M Malik, and Douglas D Edwards Artificial intelligence: a modern approach, volume 2. Prentice hall Upper Saddle River, 2003..\nTom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. ICLR 2016.\nWojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv:1410.4615, 2014."}, {"section_index": "10", "section_name": "OPTIMAL POLICY FOR THE UREX OBJECTIVE", "section_text": "e(a) r(a) + t *(a) logne(a) aEA\naEA\nde a d0 Te aEA\nT TT* (a for all a E A Q -\nTables [38|provide more details on different cells of Table [1 Each table presents the results of MENT using the best temperature t vs. UREX with t = 0.1 on a variety of learning rates and clipping values. Each cell is the number of trials out of 5 random restarts that succeed at solving the task using a specific n and c.\nTable 3: Copy - number of successful attempts out of 5\nMENT (T 0.01) = UREX (T = 0.1) n = 0.1 n = 0.01 n = 0.001 n = 0.1 n = 0.01 n = 0.001 c = 1 3 5 5 5 5 2 c = 10 5 4 5 5 5 3 c = 40 3 5 5 4 4 1 c = 100 4 5 5 4 5 2\nTable 4: DuplicatedInput - number of successful attempts out of 5\nTable 5: RepeatCopy - number of successful attempts out of 5\nMENT t = 0.01) UREX ( = 0.1) n = 0.1 n = 0.01 n = 0.001 n = 0.1 n = 0.01 n = 0.001 c = 1 0 1 0 0 2 0 c = 10 0 0 2 0 4 0 c = 40 0 0 1 0 2 0 c = 100 0 0 3 0 3 0\nMENT (t = 0.01) UREX ( =0.1) n = 0.1 n = 0.01 n = 0.001 n. = 0.1 n = 0.01 n = 0.001 3 5 3 5 c = 1 5 5 c = 10 2 5 3 5 5 5 c = 40 4 5 3 5 5 5 c = 100 2 5 4 5 5 5\nTable 7: ReversedAddition - number of successful attempts out of 5.\nMENT (T 0.01) UREX (T 0.1) = n = 0.1 n = 0.01 n = 0.001 n = 0.1 n = 0.01 n = 0.001 c = 1 0 0 0 0 0 4 = 10 0 0 0 0 3 2 C c = 40 0 0 0 0 0 5 100 0 0 1 0 1 3 c =\nTable[9lprovides a more detailed look into the generalization performance of the trained models on. Copy, DuplicatedInput, and ReversedAddition. The tables show how the number of models which can solve the task correctly drops off as the length of the input increases..\nTable 9: Generalization results. Each cell includes the number of runs out of 60 different hype parameters and random initializations that achieve 100% accuracy on input of length up to the spe ified length. The bottom row is the maximal length ( 2000) up to which at least one model achiev. 
100% accuracy.\nCopy DuplicatedInput ReversedAddition MENT UREX MENT UREX MENT UREX 30 54 45 44 60 1 18 100 51 45 36 56 0 6 500 27 22 19 25 0 5 1000 3 2 12 17 0 5 2000 0 0 6 9 0 5 Max 1126 1326 2000 2000 38 2000\nTable 6: Reverse - number of successful attempts out of 5\nMENT ( =0.1) UREX ( = 0.1) n = 0.1 n = 0.01 n = 0.001 n = 0.1 n = 0.01 n = 0.001 c =1 1 1 0 0 0 0 c = 10 0 1 0 0 4 0 c = 40 0 2 0 0 2 1 c = 100 1 0 0 0 2 1\nTable 8: BinarySearch - number of successful attempts out of 5\nMENT (t = 0.01) UREX ( = 0.1) n = 0.1 n = 0.01 n = 0.001 n = 0.1 n = 0.01 n = 0.001 c = 1 0 0 0 0 4 0 c = 10 0 1 0 0 3 0 c = 40 0 0 0 0 3 0 c = 100 0 0 0 0 2 0\nFigure 3: A graphical representation of a trained addition agent. The agent begins at the top left corner of a 2 n grid of ternary digits. At each time step, it may move to the left, right, up, or down (observing one digit at a time) and optionally write to output..\n2 Q ? 1 2 2 0 2 1 0 0. 0 2 1 1 2 0 1 2 2 1 - 1 2 1 1 2 2 2 1 0 0 0 1\n2 Q O 1 1 2! 1 2 2 2 0 0 0 1 2 1 2 2 1 1 2 1 1 2 2 2 1 0 0 0 1\nTable 10: Example trace on the BinarySearch task where n = 512 and the number to find is at position 100. At time t the agent observes s from the environment and samples an action a. We also include the inferred range of indices to which the agent has narrowed down the position of x.. We see that the first several steps of the agent follow a binary search algorithm. However, at some. point the agent switches to a linear search.\nRo R1 R2 St at Inferred range 512 0 0 AVG(2) 0,512) 512 0 256 - CMP(2) 0,512 512 0 256 < DIV(0) 0, 256) 256 0 256 AVG(2) 0, 256) 256 0 128 - CMP(2) 0,256) 256 0 128 < DIV(0) 0,128) 128 0 128 AVG(2) - 0,128 128 0 64 - CMP(2) [0,128) 128 0 64 > AVG(1) 64,128 128 96 64 CMP(1) - (64, 128 128 96 64 > AVG(2) 96, 128 128 96 112 CMP(2) - (96, 128 128 96 112 AVG(1) < (96,112) 128 120 112 CMP(2) - (96,112) 128 120 112 < DIV(1) 96, 112 128 60 112 AVG(2) - (96, 112) 128 60 94 CMP(2) - (96,112) 128 60 94 > AVG(1) (96, 112 128 111 94 - CMP(1) 96, 112 128 111 94 < INC(1) 96, 111) 128 112 94 - INC(2) (96, 111) 128 112 95 CMP(2) - (96, 111) 128 112 95 > INC(2) (96, 111) 128 112 96 - CMP(2) (96, 111) 128 112 96 > INC(2) (96, 111) 128 112 97 CMP(2) - (96,111) 128 112 97 > INC(2) (97,111) 128 112 98 CMP(2) - (97,111) 128 112 98 > INC(2) (98,111) 128 112 99 CMP(2) - (98,111) 128 112 99 > INC(2) (99,111) 128 112 100 CMP(2) - (99,111) 128 112 100\nFigure 4: This plot shows the variance of the importance weights in the UREX updates as well as. the average reward for two successful runs. We see that the variance starts off high and reaches nea zero towards the end when the optimal policy is found. In the first plot, we see a dip and rise in the variance which corresponds to a plateau and then increase in the average reward..\n1.0 0.8 0.6 0.4 0.2 0.0 0 50 100 150 200\nFigure 5: In this plot we present the average performance of UREX (blue) and MENT (green over 100 repeats of a bandit-like task after choosing optimal hyperparameters for each method. Ir the task, the agent chooses one of 10,o00 actions at each step and receives a payoff corresponding to the entry in a reward vector r = (r1,.., r1o,ooo) such that r, = u, where ui E [0, 1) has been sampled randomly and independently from a uniform distribution. We parameterize the policy with a weight vector 0 E R30 such that e(a) x exp((a) . 0), where the basis vectors (a) R30 for each action are sampled from a standard normal distribution. 
The plot shows the average rewards obtained by setting = 8 over 100 experiments, consisting of 10 repeats (where r anc each repeat (keeping r and fixed but reinitializing 0). Thus, this task presents a relatively simple problem with a large action space, and we again see that UREX outperforms MENT.\n0.10 35 0.14 30 0.12 30 25 0.08 0.10 25 20 0.08 0.06 yannnee 20 15 0.06 15 10 0.04 0.04 10 5 0.02 0.02 5 0.00 0 0.00 0 -0.02 5 0 500 1000 1500 2000 2500 0 1000 2000 3000 4000 5000 Step Step"}] |
HJWzXsKxx | [{"section_index": "0", "section_name": "TRAINING LONG SHORT-TERM MEMORY WITH SPAR- SIFIED STOCHASTIC GRADIENT DESCENT", "section_text": "Maohua Zhu. Yuan Xie\nDepartment of Electrical and Computer Engineerin? University of California, Santa Barbara. Santa Barbara. CA 93106. USA"}, {"section_index": "1", "section_name": "Minsoo Rhu, Jason Clemons, Stephen W. Keckler NVIDIA Research", "section_text": "mrhu, iclemons, skeckler}@nvidia.com\nPrior work has demonstrated that exploiting the sparsity can dramatically improv. the energy efficiency and reduce the memory footprint of Convolutional Neu ral Networks (CNNs). However, these sparsity-centric optimization technique. might be less effective for Long Short-Term Memory (LSTM) based Recurren Neural Networks (RNNs), especially for the training phase, because of the signif. icant structural difference between the neurons. To investigate if there is possibl sparsity-centric optimization for training LSTM-based RNNs, we studied severa applications and observed that there is potential sparsity in the gradients gener. ated in the backward propagation. In this paper, we investigate why the sparsit. exists and propose a simple yet effective thresholding technique to induce furthe. more sparsity during the LSTM-based RNN training. The experimental result show that the proposed technique can increase the sparsity of linear gate gradi ents to more than 80% without loss of performance, which makes more than 50%. multiply-accumulate (MAC) operations redundant for the entire LSTM training process. These redundant MAC operations can be eliminated by hardware tech. niques to improve the energy efficiency and the training speed of LSTM-basec. RNNs."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have achieved state-of-the-art performance in many different tasks, such as computer vision (Krizhevsky et al.] 2012) (Simonyan & Zisserman]2015), speech recognition, and natural language processing (Karpathy et al.|2016). The underlying representational power of these. neural networks comes from the huge parameter space, which results in an extremely large amount of. computation operations and memory footprint. To reduce the memory usage and accelerate the train- ing process, the research community has strived to eliminate the redundancy in the deep neural net-. works (Han et al.|2016b). Exploiting the sparsity in both weights and activations of Convolutional Neural Networks (CNNs), sparsity-centric optimization techniques (Han et al.]2016a) (Albericio. et al.|[2016) have been proposed to improve the speed and energy efficiency of CNN accelerators..\nThese sparsity-centric approaches can be classified into two categories: (1) pruning unimportan weight parameters and (2) skipping zero values in activations to eliminate multiply-accumulate (MAC) operations with zero operands. Although both categories have achieved promising results for CNNs, it remains unclear if they are applicable to training other neural networks, such as LSTM based RNNs. The network pruning approach is not suitable for training because it only benefits the inference phase of neural networks by iteratively pruning and re-training. The approach that exploits the sparsity in the activations can be used for training because the activations are involved in both"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "The sparsity in CNN activations mostly comes from the Rectified Linear Unit (ReLU) activatior. 
function, which sets all negative values to zero. However, Long Short-Term Memory, one of the most popular RNN cells, does not adopt the ReLU function. Therefore, LSTM should exhibit much. less sparsity in activations than CNNs, intuitively. Furthermore, the structure of an LSTM cell is. much more complicated than neurons in convolutional layers or fully connected layers of a CNN..\nTo explore additional opportunities to apply sparsity-centric optimization to LSTM-based RNNs we conducted an application characterization on several LSTM-based RNN applications, including. character-based language model, image captioning, and machine translation. Although the experi- mental results of the application characterization show that there is little sparsity in the activations. we observed potential sparsity in backward propagation of the LSTM training process. The acti-. vation values of the gates (input gate, forget gate, and output gate) and the new cell state exhibil a skewed distribution due to their functionality. That is, a large fraction of the activation values of. these Sigmoid-based gates are either close to 1 or close to O (for the Tanh-based new cell activations. values are close to -1 or 1). This skewed distribution will lead to a considerable amount of very small values in the LSTM backward propagation since there is a term o(x)(1 - o(x)) in the gradients of. the Sigmoid-based gates (tanh(x)(1 - tanh(x)) for the gradients of the new cell gradients), which will be zero given o(x) = 0 or o(x) = 1 (tanh(x) = -1 or tanh(x) = 1 for the new cell gradi-. ents). In real-world implementations, these very small values might be clamped to zero as they are. in the form of floating-point numbers, of which the precision is limited. Therefore, there is potential. sparsity in the gradients of the backward propagation of LSTM training..\nTo ensure that there is non-trivial amount of sparsity for hardware designers to exploit, we pro. pose \"sparsified\"' SGD, a rounding to zero technique to induce more sparsity in the gradients. This. approach can be seen as a stochastic gradient descent (SGD) learning algorithm with sparsifying. which strips the precision of floating point numbers for unimportant small gradients. Experiment results show that with proper thresholds, we can make 80% of the gradients of the gate inputs to. zero without performance loss for all applications and datasets we tested so far. As the sparse gradi-. ents of the gate inputs are involved in 67% matrix multiplications, more than 50% MAC operations are redundant in the entire LSTM training process. Eliminating these ineffectual MAC operations. with hardware techniques, the energy efficiency and training speed of LSTM-based RNNs will be. improved significantly.\nIn this section, we first review some of the prior work on sparsity-centric optimization techniques for neural networks, and then illustrate the application characterization example as the motivation for our research."}, {"section_index": "4", "section_name": "2. 1 SPARSITY-CENTRIC OPTIMIZATION FOR NEURAL NETWORKS", "section_text": "It has been demonstrated that there is significant redundancy in the parameterization of deep neura networks (Denil et al.]2013). Consequently, the over-sized parameter space results in sparsity ii the weight parameters of a neural network. 
Besides the parameters, there is also sparsity in th activations of each layer in a network, which comes from two sources: (1) the sparsity in weigh parameters and (2) the activation function of neurons, such as ReLU.\nAs the sparsity in weight parameters do not depend on the input data, it is often referred to as static. sparsity. On the other hand, the sparsity in the activations depend on not only the weight values but also the input data. Therefore, we refer to the sparsity in the activations as dynamic sparsity\nExploiting sparsity can dramatically reduce the network size and thus improve the computing perfor-. mance and energy efficiency. For example, Deep Compression (Han et al.] 2016b) applied network pruning to CNNs to significantly reduce the footprint of the weights, which enables us to store all the weights on SRAM. However, the static sparsity can only help the inference phase but not train- ing because weight parameters are adjusted during training. Fortunately, leveraging the dynamic. sparsity can benefit both inference and training of neural networks. Recent publications (Han et al.\nWf UJ Xt -1 Ug tanh tanh Wg Xt g Wi Wo U0 Xt Xt Figure 1: Basic LSTM cell\nwf Uf Xt 1 Ug tanh tanh Wg Xt 9 Wi Wo Uo Xt Xt\n2016a) (Albericio et al.][2016) have proposed various approaches to eliminate ineffectual MAC op erations with zero operands. Although these sparsity-centric optimization approaches have achievec. promising results on CNNs, much less attention has been paid to LSTM-based RNNs, because ther is a common belief that the major source of sparsity is the ReLU function, which is widely use. in the convolutional layers but not in LSTM-based RNNs. To accelerate LSTM-based RNNs an improve the energy efficiency, we investigate opportunities to exploit sparsity in the LSTM-base. RNN training process. As an initial step, in this paper we focus on the basic LSTM cell withou. peephole or other advanced features, as shown in Figure[1"}, {"section_index": "5", "section_name": "2.2 APPLICATION CHARACTERIZATION", "section_text": "To reveal if there is sparsity in LSTM training, we conduct an application characterization study. We start with a character-based language model as described in (Karpathy et al.||2016). This character based language model takes a sequence of characters as input and predicts the next character of this sequence. The characters are represented in one-hot vectors, which are transformed into distributed vectors by a word2vec layer. Then the distributed vectors feed into an RNN model based on LSTM cells, followed by a linear classifier.\nThe LSTM cells used in this character-based language model are all basic LSTM cells. For eacl cell, the forward propagation flow is as below:.\nit = W'x+. U'ht-1 +b ft = o(Wfx+ Ot =o(Wxt+Uht-1+ bo It =tanh(W9xt+ U9ht-1+ b9 Ct =ft O Ct-1+it O gt ht = Ot o tanh(ct\nft =o(Wfxt Ot = o(Wxt+Uht-1+ bo gt =tanh(W9xt + U9ht-1+ b9 Ct =ft O Ct-1+it O Jt ht = Ot 0 tanh(ct)\nCt =ft O Ct-1+it O Jt\nht = Ot o tanh(ct\nAs shown in Figure 1] it, ft, and ot stand for input gate, forget gate, and output gate, respectively. These sigmoid-based gates (o stands for sigmoid) are used to prevent irrelevant input from affecting. the memory cell (ct). The new cell state (gt) is a preliminary summary of the current input from the. previous layer and the previous status of current layer. 
The final hidden status ht is the output of the LSTM cell if it is seen as a black box..\nSince the gates are introduced to prevent irrelevant inputs from affecting the memory cell ct, we have a hypothesis that a large fraction of the activations of these gates should be either close to 1 or close to O, representing the control signal on or off, respectively. Similarly, the tanh-based new cell. status is active if its activation is 1 or inactive if it is -1. There should also be a considerable portion of the activations close to 1 or -1.\nTo validate our hypothesis, we extracted the activations of the sigmoid-based gates and tanh-based. new cell state from several model snapshots during training the character-based language model Figure|2 shows the histogram of the activation values of the gates and the new cell. The red curves. represent the activation values generated by a snapshot model which is O.5% trained (in terms of. total number of iterations) while the bars represent the activation values generated by a fully trained.\nkrrnneney Input Gate Forget Gate Output Gate New Cell\nFigure 2: Values of gates and new cell activations of LSTM. For the three sigmoid-based gates, the. range of x-axis is from O to 1. For the tanh-based new cell activation, the range is from -1 to 1.\nmodel. We can observe skewed distributions from each gate (and new cell) for both the O.5% trained. snapshot model and the fully trained model. Furthermore, the fully trained model shows a distribu- tion that is more skewed to the leftmost and the rightmost. Additionally, other un-shown snapshots. demonstrate that the distribution becomes consistently more skewed as the training process goes on. We also observed that after 10% of the training process, the distribution becomes steady, almost the. same as the fully trained model.\nBesides the character-based language model, we also conducted the same characterization to the image captioning task described in (Karpathy & Li]2015). The activation values of the RNN layer in the image captioning task exhibit the skewed distribution too. Even though we did not observe. sparsity in the gate activations, the skewed distribution indicates potential sparsity in the LSTM-. based RNN backward propagation, which will be shown in the next section.."}, {"section_index": "6", "section_name": "SPARSIFIED STOCHASTIC GRADIENT DESCENT FOR LSTM", "section_text": "net(i)t= W'xt+U'ht-1+ b net(f)t = Wfxt + net(0)t = Wxt+Uht-1+ bo net(g)t = W9xt+ U9ht-1+ b9 it = o(net(i)t) ft = 0(net(f)t) Ot = (net(0)t) gt = tanh(net(g)t)\nIn this section, we first show how the skewed distribution of gate values leads to potential sparsity in the LSTM backward propagation, and then we propose the \"sparsified\"' SGD to induce more sparsity in LSTM training.\nTo show how the skewed distribution in the gate activations results in potential sparsity in the LSTM- based RNN backward propagation, we need to review the forward and backward propagation at first. We can re-write the forward propagation equations as.\nnett=Wxt+Uht-1+b\nWith these denotations, we can express the backward propagation as\ndnet(g)t = dct 0 it 0 (1- gt) dnet(0)t = dht o tanh(ct) o (1 - 0t) o 0t dnet(f)t = dct O Ct-1 0 (1ft) o ft dnet(i)t = dct 0 gt o (1 - it) o it dxt = dnettWT dht-1 = dnetUT dW+ = xdnett dU+ = ht-1dnet\nIn the equations of the backward propagation. we use dnet to denote the gradient of the linear gates\nFrom these equations we can see that for each linear gate gradient there is one term introduced by. 
the sigmoid function or the tanh function, e.g. (1 - g?) in dnet(g)t and (1 - ot) o ot in dnet(o)t. As. we observed in the application characterization results, the activation values of these gates exhibit skewed distribution, which means a large fraction of ot, ft and it are close to O or 1 (gt close to -1 or. 1). The skewed distribution makes a large fraction of the linear gate gradients close to zero because. (1 - gt), (1 0t) o 0t, (1 - ft) o ft and (1 it) o it are mostly close to zero given the skewed. distribution of the gate activations."}, {"section_index": "7", "section_name": "3.2 INDUCING MORE SPARSITY", "section_text": "n the previous section we showed how the skewed distribution in gate activations results in potentia parsity in linear gate gradients theoretically. However, from mathematical perspective, there will b. o sparsity in linear gate gradients if the floating point numbers in computers have infinite precisio since they are only close to zero rather than be zero. Even the precision of 32-bit floating poir. umbers is not infinite, the 8-bit exponential part can still accommodate an extremely large dynami ange, which makes the sparsity less interesting to hardware accelerator designers. Fortunately. recent attempts to train neural networks with 16-bit floating points (Gupta et al.|2015) and fixe. ooints (Lin et al.[2015) have shown acceptable performance with smaller dynamic range. Thi. inspires us to induce more sparsity by rounding very small linear gate gradients to zero, which i. similar to replace 32-bit floating points with 16-bit floating points or fixed points..\nThe intuition behind this \"rounding to zero\" approach is that pruning CNNs will not affect the. overall training performance. Similarly, thresholding very small gradient (dnet) values to zero is likely not to affect the overall training accuracy. Therefore, we propose a simple static thresholding. approach which sets small dnet values below a threshold t to zero. By doing this, we can increase the. sparsity in dnet even further than the original sparsity caused by limited dynamic range of floating.\nCt =ftO Ct-1+itO Jt\nHere we introduce variables net(i), net(f), net(o) and net(g) to represent the linear part of the gates and the new cell state. In GPU implementations such as cuDNN v5 (Appleyard et al.]2016), these linear gates (including new cell state from now on) are usually calculated in one step since they share the same input vectors x and ht-1. Therefore we can use a uniform representation for the four linear gates, that is\nThe matrix W here stands for the combination of the matrices Wi, W f, Wo and Wg and the matrix U stands for the combination of the matrices Ui, Uf, Uo and Ug.\ndot = dht o tanh(ct dcp = dht 0 (1 tanh?(ct)) o Ot + ft 0 Ct+1 dnet(g)t = dct 0 it 0 (1 - gt) dnet(0)t = dht o tanh(ct) o (1 - 0t) o 0t dnet(f)t = dct 0 Ct-1 0 (1- ft) o ft dnet(i)t = dct 0 gt 0 (1 - it) o it dxt = dnettWT dht-1 = dnettUT dW+ = x+dnett dU+ = ht-1dnet\ndot = dht o tanh(ct\nWhen implementing the LSTM-based RNNs, we usually use 32-bit floating point numbers to rep resent the gradients. Due to the precision limit, floating point numbers will round extremely small. values to zero. Therefore, there is potential sparsity in dnet since a large fraction of the linear gate gradients are close to zero.\npoint numbers. 
With our static thresholding technique, the backward propagation of LSTM training becomes as below:\ndot = dht o tanh(ct) dct = dht 0 (1 - tanh?(ct)) 0 0t + ft 0 Ct+1 dnet(g)t = dct 0 it o (1 - gt) dnet(o)t = dht o tanh(ct) o (1 - 0t) o 0t dnet(f)t = dct 0 Ct-1 0 (1- ft) o ft dnet(i)t = dct 0 gt o (1 - it) o it dnett = (dnett > t)?dnett : 0 dxt = dnett WT dht-1 = dnettUT dW+ = xdnett\ndct = dht 0 (1 tanh?(ct)) 0 0t + ft 0 Ct+1 dnet(g)t = dct 0 it 0 (1- gt) dnet(o)t = dht o tanh(ct) o (1 - 0t) o 0t dnet(f)t = dct 0 Ct-1 0 (1 ft) 0 fi dnet(i)t = dct o gt o (1 - it) o it dnett = (dnett > t)?dnett : 0 dxt = dnettWT dht-1 = dnettUT dW+ = xdnet dU+ = ht-1dnett\nIn this \"sparsified\"' SGD backward propagation, a new hyper-parameter t is introduced to control the sparsity we would like to induce in dnet. Clearly, the optimal threshold t is the highest one that has no impact on the training performance since it can induce the highest sparsity in dnet. Therefore, to select the threshold, we need to monitor the impact on the gradients. As the SGD only uses the gradients of the weights (dW) to update the weights, dW is the only gradients we need to care about. From the equations of the backward propagation we can see that dW is computed based on dnet, which is sparsified by our approach. Although sparsifying dnet affects dW, we can control the change of dW by setting the threshold. To determine the largest acceptable threshold, we conducted an evaluation of the impact caused by different thresholds on one single step in LSTM training. The application here is the same as the one in the application characterization..\ndW : dWo correlation : Hdw|: dWo|\n100.00% 90.00% 80.00% MP nnup 0.995 70.00% 60.00% 50.00% 0.99 fo uo!oot 40.00% 30.00% 0.985 20.00% 10.00% 0.00% 0.98 Baseline 1.00E-08 1.00E-07 1.00E-06 Threshold Layer 1 Layer 2 Layer 3 3 --Correlation to Baseline dW\nFigure[3 shows the evaluation result. We measure the change of dW by the normalized inner product. of sparsified dW and the original dW without sparisifying (the baseline shown in Figure3). If we. denote the original weight gradient as dWo, the correlation between sparsified dW and dWo can be measured by normalized inner product.\nIf the correlation is 1, it means dW is exactly the same to dWo. If the correlation is O, it means dW is orthogonal to dWo. The higher the correlation is, the less impact the sparsification has on. this single step backward propagation. From Figure[3|we can see that even without our thresholding. technique, the dnet still exhibits approximately 10% sparsity. These zero values are resulted from. the limited dynamic range of floating point numbers, in which extremely small values are rounded to zero. By applying the thresholds to dnet, we can induce more sparsity shown by the bars. Even with a low threshold (10-8), the sparsity in dnet is increased to about 45%. With a relatively high. threshold (10-6), the sparsity can be increased to around 80%. Although the sparsity is high, the. correlation between the sparsified dW and dWo is close to 1 even with the high threshold. Therefore we can hypothesize that we can safely induce a considerable amount of sparsity with an appropriate threshold. It is straightforward to understand that the threshold cannot be arbitrarily large since we need to contain the information of the gradients. For example, if we increase the thresholc. even further to 10-5, the correlation will drop to 0.26, which is far from the original dWo and not. 
acceptable.\nWe have demonstrated that we can induce more sparsity by rounding small dnet to zero while. maintaining the information in dW. However, this is only an evaluation on one single iteration of training. To show the generality of our static thresholding approach, we applied the thresholds to. the entire training process."}, {"section_index": "8", "section_name": "4.1 CHARACTER-BASED LANGUAGE MODEI", "section_text": "To validate our proposed static thresholding approach, we apply it to the entire LSTM-based RNN training process. We first conducted an experiment on training a character-based language model. The language model consists of one word2vec layer, three LSTM-based RNN layers, and one linear classifier layer. The number of LSTM cells per RNN layer is 256. We feed the network with sequences of 100 characters each. The training dataset is a truncated Wikipedia dataset. We apply a fixed threshold to all dnet gradients for every iteration during the whole training process.\n100.00% 90.00% 80.00% feee en aaeeh nn eaeeeoet 70.00% 50.00% 50.00% 40.00% 30.00% Time 20.00% 10.00% 0.00% Layer 1 Layer 2 Layer 3 Layer 1 Layer 2 Layer 3 Layer 1Layer 2 Layer 3 Layer 1Layer 2 Layer 3 Baseline threshold=1e-7 threshold=1e-6 threshold=1e-5\nFigure 4: Sparsity in dnet with different thresholds\nFigure4shows the sparsity of the linear gate gradients (dnet) of each layer during the whole training process. In the baseline configuration, the training method is standard SGD without sparsifying (zero\nIn this section, we first present the sparsity induced by applying our sprsified SGD to an entire training process, and then discuss the generality of our approach.\n2.00000 1.90000 ssot uoaeple^ 1.80000 1.70000 1.60000 1.50000 1.40000 2000 4000 6000 8000 10000 12000 14000 16000 18000 20000 22000 24000 26000 28000 30000 32000 34000 36000 38000 40000 Training iterations -Baseline--Low (1e-7)-Medium (1e-6) +High (1e-5)"}, {"section_index": "9", "section_name": "Figure 5: Validation Loss with different thresholds", "section_text": "Figure 5 shows the validation loss of each iteration. We observe that up to the medium threshold (10-6), the validation loss of the model trained with sparsified SGD keeps close to the baseline However, if we continues raising the threshold to 10-5, the validation loss becomes unacceptably higher than the baseline. Although the validation loss with the 10-5 threshold is consistently de creasing as the training goes on, we conservatively do not pick this configuration to train the LSTM network.So combining Figure4 with Figure 5 we can choose the threshold 10-6 to train the character-based language model to achieve about 80% sparsity in dnet.\nSince the linear gate gradients dnet are involved in all the four matrix multiplications in the. backward propagation, there are 80% MAC operations in these matrix multiplications have zero. operands. Furthermore, there are six matrix multiplications (all of them are of the same amount of computation) in one LSTM training iteration and four out of them (67%) are sparse. So there are. more than 50% MAC operations will have zero operands introduced by our sparsified SGD in one LSTM training iteration. The MAC operations with zero operands produce zero output and thus. make no contribution to the final results. 
These redundant MAC operations can be eliminated by hardware techniques similar to (Han et al., 2016a) and (Albericio et al., 2016) to improve the energy efficiency of LSTM training."}, {"section_index": "10", "section_name": "4.2 SENSITIVITY TEST", "section_text": "Our static thresholding approach can induce more than 80% sparsity in the linear gate gradients of the character-based language model training. To demonstrate the generality of our approach, we then replaced the topology of the RNN layers in the character-based language model with several different LSTM-based RNNs for a sensitivity test. The network topologies used in the sensitivity test are shown below.

Number of layers: 2, 3, 6, 9; number of LSTM cells per layer: 128, 256, 512; sequence length: 25, 50, 100.

We also trained the network with other datasets, such as the tiny-Shakespeare dataset and the novel War and Peace. For all the data points we collected from the sensitivity test, we can always achieve more than 80% sparsity in dnet with less than 1% loss of performance in terms of validation loss with respect to the baseline.

Moreover, we also validated our approach by training an image captioning application (Karpathy & Li, 2015) with the MSCOCO dataset (Lin et al., 2014) and a machine translation application known as Seq2Seq (Sutskever et al., 2014) with the WMT15 dataset. As both applications are implemented on graph models (Torch and TensorFlow, respectively), we plugged a custom operation into the automatically generated backward propagation subgraph to implement our proposed sparsified SGD. The experimental results show that the conclusion for the character-based language model still holds for the two applications."}, {"section_index": "11", "section_name": "4.3 DISCUSSION", "section_text": "So far all our experimental results are promising, and we believe our sparsified SGD is a general approach to induce sparsity in LSTM-based RNN training. From the computer hardware perspective, sparsified SGD is similar to a reduced-precision implementation, while its impact is much smaller since we still use full 32-bit floating point numbers. From the theory perspective, SGD itself is gradient descent with noise, and thresholding very small gradients to zero is nothing more than an additional noise source. Since training with SGD is robust to noise, the thresholding approach will likely not affect the overall training performance. Additionally, the weight gradients dW are aggregated over many time steps, which makes the LSTM more robust to the noise introduced by sparsifying the linear gate gradients."}, {"section_index": "12", "section_name": "5 CONCLUSION AND FUTURE WORK", "section_text": "In this paper, we conducted an application characterization of an LSTM-based RNN application and observed skewed distributions in the sigmoid-based gates and the tanh-based new cell state, which indicate potential sparsity in the linear gate gradients during backward propagation with SGD.
The linear gate gradients are involved in 67% of the MAC operations in an entire LSTM training process, so we can improve the energy efficiency of hardware implementations if the linear gate gradients are sparse. We propose a simple yet effective rounding-to-zero technique, which can make the sparsity of the linear gate gradients higher than 80% without loss of performance. Therefore, more than 50% of the MAC operations are redundant in an entire sparsified LSTM training run.

Obviously, the static-threshold approach is not optimal. In the future, we will design a dynamic-threshold approach based on the learning rate, the L2-norm of the gradients, and the network topology. Hardware techniques will also be introduced to exploit the sparsity to improve the energy efficiency and training speed of LSTM-based RNNs on GPUs and other hardware accelerators."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Jorge Albericio, Patrick Judd, Tayler Hetherington, Tor Aamodt, Natalie Enright Jerger, and Andreas Moshovos. Cnvlutin: Ineffectual-neuron-free deep neural network computing. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), 2016.

Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems (NIPS), pp. 2148-2156, 2013.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 1737-1746, 2015.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. EIE: Efficient inference engine on compressed deep neural network. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 243-254, 2016a.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations (ICLR), 2016b.

Andrej Karpathy and Fei-Fei Li. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3128-3137, 2015.

Andrej Karpathy, Justin Johnson, and Li Fei-Fei. Visualizing and understanding recurrent networks. In International Conference on Learning Representations (ICLR), 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.

Darryl D. Lin, Sachin S. Talathi, and V. Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. 2015. URL http://arxiv.org/abs/1511.06393.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol. 8693, pp. 740-755, 2014.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 3104-3112, 2014."}]
H1_QSDqxl
[{"section_index": "0", "section_name": "RULE MINING IN FEATURE SPACE", "section_text": "Stefano Teso & Andrea Passerini
Department of Information Engineering and Computer Science, University of Trento
{teso, passerini}@disi.unitn.it"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Relational embeddings have emerged as an excellent tool for inferring novel facts from partially observed knowledge bases. Recently, it was shown that some classes of embeddings can also be exploited to perform a simplified form of rule mining. By interpreting logical conjunction as a form of composition between relation embeddings, simplified logical theories can be mined directly in the space of latent representations. In this paper, we present a method to mine full-fledged logical theories, which are significantly more expressive, by casting the semantics of the logical operators to the space of the embeddings. In order to extract relevant rules in the space of relation compositions we borrow sparse reconstruction procedures from the field of compressed sensing. Our empirical analysis showcases the advantages of our approach."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Knowledge Bases (KB) capture relational knowledge about a domain of choice by modelling entities and facts relating them. In so doing, KBs allow for rich answers to user queries, as happens with the knowledge panels powered by the Google Knowledge Graph. Furthermore, KBs can be mined for rules, i.e. patterns of relations which are frequently found to hold in the KB. Mining theories from data is the task of Rule Mining (Dzeroski & Lavrac, 2000) and Inductive Logic Programming (Dzeroski & Lavrac, 1994; Muggleton et al., 1992).

Classical ILP methods mine theories by searching over the (exponentially large) space of logical theories, resorting to language biases and heuristics to simplify the learning problem. While powerful, pure ILP methods do not scale to large relational datasets, preventing them from mining Web-scale KBs such as YAGO (Hoffart et al., 2013) and DBpedia (Auer et al., 2007). Further, purely logical methods can not gracefully deal with noise. Next-generation miners that specialize on large KBs, such as AMIE (Galarraga et al., 2015), work around these issues by trading off theory expressiveness for runtime efficiency.

A general strategy for processing huge datasets is dimensionality reduction: instead of working on the original KB directly, one first squeezes it to a summary of manageable size, and then performs the required operations on the summary itself. Common summarization techniques for relational data include relational factorization (Nickel et al., 2011; London et al., 2013; Riedel et al., 2013) and representation learning (Bordes et al., 2011; Socher et al., 2013). The core idea is to learn compressed latent representations, or embeddings, of entities and relations able to reconstruct the original KB by minimizing a suitable reconstruction loss. Until recently, relational embeddings have been mostly employed for link prediction and knowledge base completion (Nickel et al., 2016).

However, Yang et al. (2015) have shown that low-dimensional representations can also be exploited to perform a simplified form of theory learning. Their paper shows that, under reasonable assumptions, a simple nearest neighbor algorithm can recover logical rules directly from the fixed-size embeddings of a KB, with potential runtime benefits.
Furthermore, since the embeddings generalize beyond the observed facts, the rules are implicitly mined over a completion of the KB. Despite the novelty of their insight, their proposed method has several major downsides. First, their simple approach is limited to extracting rules as conjunctions of relations, with no support for logical disjunction and negation. Second, the rules are mined independently of one another, which can lead to redundant theories and compromise generalization ability and interpretability.

Building on the insights of Yang et al. (2015), we propose a novel approach to theory learning from low-dimensional representations. We view theory learning as a special sparse recovery problem. In this setting, a logical theory is merely an algebraic combination of embedded relations that best reconstructs the original KB, in a sense that will be made clear later. The recovery problem can be solved with specialized compressed sensing algorithms, such as Orthogonal Matching Pursuit (Pati et al., 1993) or variants thereof. Our approach offers two key advantages: it automatically models the inter-dependency between different rules, discouraging redundancy in the learned theory, and it supports all propositional logic connectives, i.e. conjunction, disjunction, and negation. Our empirical analysis indicates that our method can mine satisfactory theories in realistic KBs, demonstrating its ability to discover diverse and interpretable sets of rules. Additionally, our method can in principle be applied to "deeper" embeddings, that is, embeddings produced by deep models that take into consideration both relational and feature-level aspects of the data.

The paper is structured as follows. In the next section we introduce the required background material. We proceed by detailing our approach in Section 3 and evaluating it empirically in Section 4. We discuss relevant related work in Section 5, and conclude with some final remarks in Section 6."}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "In this section we briefly overview the required background. Let us start with the notation we will use. We write column vectors x in bold-face, matrices X in upper-case, and third-order tensors in calligraphic upper-case. X^k is the k-th frontal slice of a tensor, and vec(X) is the vectorization (flattening) of X. We denote the usual Frobenius matrix norm by ‖X‖_F, and the number of nonzero entries of a matrix by ‖X‖_0.

Knowledge Bases and Theories. A knowledge base (KB) is a collection of known true facts about a domain of interest. As an example, a KB about kinship relations may include facts such as (Ann, motherOf, Bob), which states that Ann is known to be the mother of Bob. In the following we will use n and m to denote the number of distinct entities and relations in the KB, respectively. With a slight abuse of notation, we will refer to logical constants and relations (e.g. Ann and motherOf) by their index in the KB (e ∈ [n] or r ∈ [m], respectively). Triples not occurring in the KB are unobserved, i.e. neither true nor false.
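To fix ideas, here is a tiny illustrative sketch (a hypothetical kinship KB with toy names, not taken from the paper) of a set of facts stored both as index triples and as one Boolean adjacency matrix per relation; the corresponding tensor view is introduced formally further below:

```python
import numpy as np

entities = ["Ann", "Bob", "Carl"]            # n = 3
relations = ["motherOf", "parentOf"]         # m = 2
triples = [(0, 0, 1), (0, 1, 1), (0, 1, 2)]  # (e, r, e'): Ann motherOf Bob, ...

# One Boolean adjacency matrix per relation (a frontal slice of the KB tensor).
n, m = len(entities), len(relations)
Y = np.zeros((m, n, n), dtype=bool)
for e, r, e_prime in triples:
    Y[r, e, e_prime] = True

print(Y[relations.index("motherOf")].astype(int))
```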
Given an input KB, the goal of theory learning, also known as Inductive Logic Programming (Muggleton et al., 1992), is to induce a compact logical theory that both explains the observed facts and generalizes to the unobserved ones. Most ILP methods extract theories in definite clausal form, which offers a good compromise between expressiveness and efficiency. A theory in this form is an implicitly conjoined set of Horn rules, i.e. rules like:

∀e, e' ∈ [n] : (e, uncleOf, e') ⟸ ∃e'' ∈ [n] : (e, brotherOf, e'') ∧ (e'', parentOf, e')

Here ⟸ represents logical entailment. The left-hand side is called the head of the rule, while the right-hand side is the body. The semantics of Horn rules are simple: whenever the body is satisfied by a given set of entities and relations, so is the head. The length of a rule is the number of relations appearing in its body; the above is a length 2 rule.

Classical ILP approaches cast theory learning as a search problem over the (exponentially large) space of candidate theories. When there are no negative facts, as in our case, the quality of a theory is given by the number of true facts it entails. In practice, learning is regularized by the size of the theory (number and length of the rules) to encourage compression, generalization and interpretability. Due to the combinatorial nature of the problem, the search task is solved heuristically, e.g. by searching individual Horn rules either independently or sequentially, or by optimizing surrogate objective functions. A language bias, provided by a domain expert, is often employed to guide the search toward more promising theories. Please see (Dzeroski & Lavrac, 1994; Muggleton et al., 1992) for more details.

Relational embeddings. Relational embedding techniques learn a low-dimensional latent representation of a KB. In order to ground the discussion, we focus on a prototypical factorization method, REsCAL (Nickel et al., 2011; 2012); many alternative formulations can be seen as variations or generalizations thereof. We stress, however, that our method can be applied to other kinds of relational embeddings, as sketched in Section 6. For a general treatment of the subject, see Nickel et al. (2016).

In REsCAL, each entity e ∈ [n] in the KB is mapped to a vector x_e ∈ R^d, and each binary relation r ∈ [m] to a matrix W^r ∈ R^{d×d}. These parameters are learned from data. Here d ∈ [n] is a user-specified constant (the rank) controlling the amount of compression.
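The following toy NumPy sketch (random parameters and illustrative sizes, not REsCAL training code) shows how such entity vectors and relation matrices are used to score a candidate fact; the bilinear form is made precise in the equation that follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 5, 3, 4                      # entities, relations, rank (toy sizes)
X = rng.standard_normal((d, n))        # entity embeddings, one column per entity
W = rng.standard_normal((m, d, d))     # one d-by-d matrix per relation

def score(e, r, e_prime):
    """Bilinear plausibility of the fact (e, r, e'): high when x_e is similar
    to the image W_r x_{e'} of the right argument."""
    return float(X[:, e] @ W[r] @ X[:, e_prime])

print(score(0, 1, 2))
```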
The key idea is to model the plausibility, or score, of each fact as a function of its embedding. In particular, in REsCAL the score of a fact (e, r, e') is given by the bilinear product:

score(e, r, e') := x_e^T W^r x_{e'} = Σ_{i=1}^d Σ_{j=1}^d (x_e)_i (W^r)_{ij} (x_{e'})_j   (1)

The bilinear product measures how similar x_e and W^r x_{e'} are: the higher the dot product, the higher the score.

The embeddings can be expressed compactly in tensor form by grouping the entity vectors side-by-side into a matrix X ∈ R^{d×n}, and stacking the relation matrices into a tensor 𝒲 ∈ R^{d×d×m}. The embeddings (X, 𝒲) are learned so as to reconstruct the original KB as accurately as possible, modulo regularization. More formally, let 𝒴 ∈ {0,1}^{n×n×m} be a tensor such that Y^r_{e,e'} evaluates to 1 if the fact (e, r, e') appears in the KB, and to 0 otherwise. The learned embeddings should satisfy Y^r_{e,e'} ≈ score(e, r, e') for all possible triples (e, r, e'). Learning equates to solving the optimization problem:

min_{X,𝒲} Σ_{r=1}^m ‖Y^r − X^T W^r X‖_F^2 + λ (‖X‖_F^2 + Σ_{r=1}^m ‖W^r‖_F^2)

The second summand is a quadratic regularization term, whose impact is modulated by the λ > 0 hyperparameter. Note that the entity embeddings X are shared between relations. Choosing d ≪ n forces REsCAL to learn more compressed latent features, that hopefully better generalize over distinct facts, at the cost of a potentially larger reconstruction error. While the optimization problem is non-convex and can not be solved exactly in general, REsCAL pairs clever initialization with an alternating least squares procedure to obtain good quality solutions (Nickel et al., 2011).

In the next section we will see how theory learning can be generalized to work directly on the embeddings produced by REsCAL and analogous models.

In this section we detail our take on rule mining. Given a knowledge base in tensor form 𝒴, our goal is to learn a theory T that (1) entails many of the observed facts and few of the unobserved ones, and (2) is composed of few, diverse rules, for improved generalization.

The theory T includes rules for all possible relations h ∈ [m], where the relation is the head of the rule and the body is an "explanation" of the relation as a (logical) combination of relations. Let T_h be the set of rules for head h. In our setting, T_h is a conjunction of Horn rules, where each rule is at most ℓ long¹; ℓ is provided by the user. Following Yang et al. (2015), we require the rules to be closed paths, i.e. to be in the following form:

(e_1, h, e_{ℓ+1}) ⟸ (e_1, b_1, e_2) ∧ ... ∧ (e_ℓ, b_ℓ, e_{ℓ+1})

Here h is the head relation, and b_1, ..., b_ℓ are the body relations; quantifiers have been left implicit. Formally, a Horn rule is a closed path if (i) consecutive relations share the middle argument, and (ii) the left argument of the head appears as the first argument of the body (and conversely for the right argument). This special form enables us to cast theory learning in terms of Boolean matrix operations, as follows.

¹For the sake of exposition, in the following we only consider rules exactly ℓ long; as a matter of fact, the miners we consider can return rules of length ℓ or shorter.

Let 𝒴 be a knowledge base and h ∈ [m] the target head relation. Note that the conjunction of Horn rules with the same head relation h amounts to the disjunction of their bodies. Due to requirement (1), the set of rules targeting h should approximate the truth values of h, i.e.

Y^h ≈ ⋁_{B∈T_h} ⋀_{b∈B} Y^b   (2)

Here B is the body of a rule, and the logical connectives operate element-wise. In order to learn T from 𝒴, we define a loss function that encourages the above condition. We define the loss ℓ(Y^h, T_h) as the accuracy of reconstruction of Y^h w.r.t. T_h, written as:

ℓ(Y^h, T_h) := ‖Y^h ⊕ ⋁_{B∈T_h} ⋀_{b∈B} Y^b‖_0   (3)

where ⊕ is the element-wise exclusive OR operator and ‖·‖_0 computes the misclassification error of T_h over Y^h. Minimizing Eq. (3) unfortunately is a hard combinatorial problem. We will next show how to approximate the latter as a continuous sparse reconstruction problem.

The relaxed reconstruction problem. Our goal is to approximate Eq. (3) in terms of algebraic matrix operations over the relation embeddings 𝒲. First, we replace conjunctions with products between the embeddings of the relations along the path in the body of the rule, i.e.

⋀_{b∈B} Y^b ≈ X^T (∏_{b∈B} W^b) X

The idea is that a linear operator W^b maps the embedding of the left argument of relation b to vectors similar to the embedding of the right one, as per Eq. (1). For instance, W^{motherOf} will map the embedding of Ann to a vector with high dot product w.r.t. the embedding of Bob. The closed path represented by the conjunction of the relations in the body B is emulated by composition of embeddings and obtained by repeated applications of this mapping (Yang et al., 2015).

Second, we replace disjunctions with sums:

Y^h ≈ X^T W^h X ≈ X^T [Σ_{B∈T_h} ∏_{b∈B} W^b] X

Clearly, the set of rules T_h is unknown and needs to be learned in solving the reconstruction problem. We thus let the summation run over all possible paths of length ℓ, i.e. [m]^ℓ, adding a coefficient α_B for each candidate path. The problem boils down to learning these alphas:

Y^h ≈ X^T [Σ_{B∈[m]^ℓ} α_B ∏_{b∈B} W^b] X   (4)

In principle, the coefficients α_B should be zero-one; however, we relax them to be real-valued to obtain a tractable optimization problem. This choice has another beneficial side effect: the relaxed formulation gives us a straightforward way to introduce negations in formulas, thus augmenting the expressiveness of our approach beyond purely Horn clauses. The idea builds on the concept of set difference from set theory. A relation like brotherOf can be explained by the rule "a sibling who is not a sister". This could be represented in the space of the embeddings as the difference between the siblingOf mapping (accounting for both brothers and sisters) and the sisterOf one. More specifically, siblingOf ∧ ¬sisterOf would be encoded as W^{siblingOf} − W^{sisterOf}. We thus allow α to also take negative values, with the interpretation that negative bodies are negated and conjoint (rather than disjoint) with the rest of the formula.
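To make the relaxed formulation concrete, here is a hedged NumPy sketch of how the dictionary of candidate path embeddings vec(∏_{b∈B} W^b) could be enumerated for bodies of a fixed length; the function name and the column normalization are our own illustrative choices, not the paper's implementation:

```python
import numpy as np
from itertools import product

def path_dictionary(W, length):
    """Enumerate all candidate bodies B in [m]^length and return, for each,
    the flattened composed mapping prod_{b in B} W^b as a dictionary column.
    W is an (m, d, d) array of relation embeddings."""
    m, d = W.shape[0], W.shape[1]
    bodies, columns = [], []
    for B in product(range(m), repeat=length):
        P = np.eye(d)
        for b in B:
            P = P @ W[b]                       # compose along the path
        bodies.append(B)
        columns.append((P / (np.linalg.norm(P) + 1e-12)).ravel())
    return bodies, np.stack(columns, axis=1)   # D has shape (d*d, m**length)
```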
Putting everything together, we obtain an optimization problem of the form:

min_α ‖W^h − Σ_{B∈[m]^ℓ} α_B ∏_{b∈B} W^b‖_F^2   (5)

for each target head h. Upon finding the coefficients α, we convert them into a logic theory based on their sign and magnitude. First, only bodies with absolute coefficients larger than a threshold τ > 0 are retained. Each body is then converted to the conjunction of the relations it contains. Bodies with positive coefficients are disjunctively combined with the rest of the formula, while bodies with negative coefficients are added as conjunctions of their negations. The final theory for the mined rule can be written as:

Y^h ≈ ⋁_{B : α_B > τ} ⋀_{b∈B} Y^b ∧ ⋀_{B : α_B < −τ} ¬(⋀_{b∈B} Y^b)   (6)

Dataset    # triples    # entities    # relations
Nations    3243         14            56
Kinship    10790        104           26
UMLS       6752         135           49
Family     5984         628           24

Table 1: Number of entities and relations of all datasets.

Solving the reconstruction problem. Equation 5 is a matrix recovery problem in Frobenius norm. Instead of solving it directly, we leverage the norm equivalence ‖A − B‖_F = ‖vec(A) − vec(B)‖_2 to reinterpret it as a simpler vector recovery problem. Most importantly, since most of the candidate paths B can not explain the head h, the recovery problem is typically sparse. Sparse recovery problems are a main subject of study in compressed sensing (Candes et al., 2006), and a multitude of algorithms can be employed to solve them, including Orthogonal Matching Pursuit (OMP) (Pati et al., 1993), Basis Pursuit (Chen et al., 1998), and many recent alternatives. In Appendix A we show how minimizing the sparse recovery problem in Eq. 5 equates to minimizing an upper bound of the total loss.

Two features of the above problem stand out. First, if the target theory is sparse enough, existing recovery algorithms can solve the reconstruction to global optimality with high probability (Candes et al., 2006). We do not explicitly leverage this perk; we leave finding conditions guaranteeing perfect theory recovery to future work. Second, and most importantly, reconstruction algorithms choose the non-zero coefficients α_B so that the corresponding path embeddings ∏_{b∈B} W^b are mutually orthogonal. This means that similar paths will not be mined together, thus encouraging rule diversity as per requirement (2).
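A minimal sketch of the recovery-and-conversion pipeline just described, using a plain OMP implementation rather than the authors' code; D and bodies are assumed to come from a dictionary-construction step like the one sketched earlier, and tau plays the role of the coefficient threshold τ:

```python
import numpy as np

def omp(D, target, tol=1e-3, max_atoms=100):
    """Plain Orthogonal Matching Pursuit: greedily select dictionary columns
    and refit their coefficients by least squares until the residual is small."""
    residual, chosen = target.astype(float).copy(), []
    alpha = np.zeros(D.shape[1])
    while len(chosen) < max_atoms and np.linalg.norm(residual) > tol:
        chosen.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, chosen], target, rcond=None)
        residual = target - D[:, chosen] @ sol
    if chosen:
        alpha[chosen] = sol
    return alpha

def alpha_to_theory(alpha, bodies, tau=0.2):
    """Convert recovered coefficients into the two body sets of Eq. (6):
    positive bodies are disjoined, negative bodies are negated and conjoined."""
    positive = [bodies[j] for j in np.flatnonzero(alpha > tau)]
    negated = [bodies[j] for j in np.flatnonzero(alpha < -tau)]
    return positive, negated

# usage sketch: alpha = omp(D, Wh.ravel()); pos, neg = alpha_to_theory(alpha, bodies)
```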
"}, {"section_index": "4", "section_name": "4 EMPIRICAL EVALUATION", "section_text": "We compare our method, dubbed Feature Rule Miner (FRM for short), against two variants of the kNN-based theory miner of Yang et al. (2015) on four publicly available knowledge bases: Nations, Kinship and UMLS from Kemp et al. (2006), and Family from Fang et al. (2013). The KB statistics can be found in Table 1. Given that FRM requires the relational embeddings 𝒲 to be normalized (with respect to the Frobenius norm), we compare it against both the original kNN-based miner, which mines the unnormalized embeddings, and a variant that uses the normalized embeddings instead, for the sake of fairness.

The miners were tested in a 10-fold cross-validation setting. We computed the relational embeddings over the training sets using the non-negative REsCAL variant² (Krompaß et al., 2013) with the default parameters (500 maximum iterations, convergence threshold 10^-5). The size of the embeddings d was set to a reasonable value for each KB: 100 for Family, 25 for Kinship and UMLS, and 5 for Nations. We configured all competitors to mine at most 100 rules for each head relation. The kNN distance threshold was set to 100 (although the actual value used is chosen dynamically, as done by Yang et al. (2015)). The desired reconstruction threshold of OMP was set to 10^-3. Finally, the coefficient threshold τ was set to 0.2.

²Standard REsCAL tends to penalize the kNN-based competitors.

We evaluate both the F-score and the per-rule recall of all the methods. The F-score measures how well the mined rules reconstruct the test facts in terms of both precision and recall. The per-rule recall is simply the recall over the number of rules mined for the target head; it favors methods that focus on few rules with high coverage, and penalizes those that mine many irrelevant rules.
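For clarity, a hedged sketch of the two metrics as we read them from the description above (set-based, with illustrative signatures):

```python
def f_score(predicted, actual):
    """F-score of the reconstructed test facts; `predicted` and `actual`
    are sets of (entity, head, entity) triples."""
    tp = len(predicted & actual)
    if not predicted or not actual or tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(actual)
    return 2 * precision * recall / (precision + recall)

def per_rule_recall(predicted, actual, num_rules):
    """Recall divided by the number of rules mined for the target head."""
    recall = len(predicted & actual) / max(len(actual), 1)
    return recall / max(num_rules, 1)
```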
The kNN miners discover the rules independently of each other, leading to theory redundancy. This is a well known problem in rule mining. On the contrary OMP avoids this issue by enforcing orthogonality between the mined bodies. The resulting theories performs much better especially in terms of per-rule recall.\nThe phenomenon is also visible in Figure[3] The theory found by FRm contains many diverse bodies while the one found by kNN does not. The two rules also show the power of negation: the FRM. theory includes the \"perfect\" definition of a brother, i.e. siblingOf ^ -sisterOf (as well as an obvious error, i.e. that a brother can not be a sibling of a sibling). In contrast the theory found by kNN completely ignores the complementarity of brotherOf and sisterOf, and includes the. rule brotherOf sisterOf.\naverage f-score average recall knn knn nknn nknn omp omp 0.1 reeoreee neell 0.0 0.0 family kinship nations umls family kinship nations umls\naverage f-score average recall knn knn nknn nknn omp omp 0.1 0.0 nations 0.0 family kinship umls family kinship nations umls\nFigure 3: Example rules for the brotherOf relation mined by FRm (top) and kNN (bottom)"}, {"section_index": "5", "section_name": "5 RELATED WORK", "section_text": "There is a huge body of work on theory learning, historically studied in Inductive Logic Program ming (Dzeroski & Lavrac1994] Muggleton et al.]1992). For the sake of brevity, we focus on techniques that are more closely related to our proposal..\nThe core of most ILP methods, e.g. FOIL (Quinlan|1990), Progol (Muggleton1995), and Aleph3 is a search loop over the space of candidate theories. Bottom-up methods start from an initially empty theory, and add one Horn rule at a time. Individual rules are constructed by conjoining first order relations so as to maximize the number of covered positive facts, while trying to keep covered negative facts to a minimum. After each rule is constructed, all covered facts are removed from the KB. These methods are extremely expressive, and can handle general nary relations. Instead FRM focuses on binary relations only, which are more common in today's Web-centric knowledge bases. ILP methods are designed to operate on the original KB only; this fact, paired with the sheer magnitude of the search space, makes standard ILP methods highly non-scalable. More recent extensions (e.g. kFOIL (Landwehr et al.]2006)) adopt a feature-space view of relational facts but are still based on the classical search loop and can not be trivially adapted to working on the relational embeddings directly. Finally, rule elongation can be hindered by the presence of plateaus in the cost function.\nOur path-based learning procedure is closely related to Relational Pathfinding (RP) (Richards & Mooney1992). RP is based on the observation that ground relation paths (that is, conjunctions of true relation instances) do act as support for arbitrary-length rules. It follows that mining these paths directly allows to detect longer rules with high support, avoiding the rule elongation problem entirely. There are many commonalities between RP and FRM. Both approaches are centered around relation paths, although in different representations (original versus compressed), and focus on path- based theories. The major drawback of RP is that it requires exhaustive enumeration of relation paths (up to a maximum length), which can be impractical depending on the size of the KB. 
FRM sidesteps this issue by leveraging efficient online decoding techniques, namely Online Search OMP (Weinstein & Wakin, 2012).

To alleviate its computational requirements, a lifting procedure for RP was presented in Kok & Domingos (2009). Similarly to FRM, lifted RP is composed of separate compression and learning stages. In the first stage, the original KB is "lifted" by clustering functionally identical relation paths together, producing a smaller KB as output. In the second stage, standard RP is applied to the compressed KB. A major difference with FRM is that lifting is exact, while REsCAL is typically lossy. Consequently, lifted RP guarantees equivalence of the original and compressed learning problems, but it also ignores the potential generalization benefit provided by the embeddings. Additionally, the first step of lifted RP relies on a (rather complex) agglomerative clustering procedure, while FRM can make use of state-of-the-art representation learning methods. Note that, just like lifted RP, FRM can be straightforwardly employed for structure learning of statistical relational models.

The work of Malioutov & Varshney (2013) is concerned with mining one-level rules from binary data. Like in FRM, rule learning is viewed as a recovery problem, and solved using compressed sensing techniques. Two major differences with FRM exist. In Malioutov & Varshney (2013) the truth value matrix is recovered with an extension of Basis Pursuit that handles 0-1 coefficients through a mixed-integer linear programming (MILP) formulation, which is however solved approximately using linear relaxations. BP however requires the dictionary to be explicitly grounded, which is not the case for FRM. Additionally, their method is limited to one-level rules, i.e. either conjunctions or disjunctions of relations, but not both. An extension to two-level rules has been presented by Su et al. (2015), where BP is combined with heuristics to aggregate individual rules into two-level theories. In contrast, FRM natively supports mining two-level rules via efficient online search.

The only other theory learning method that is explicitly designed for working on embeddings is the one of Yang et al. (2015). It is based on the observation (also made by Gu et al. (2015)) that closed-path Horn rules can be converted to path queries, which can be answered approximately by searching the space of (type-compatible) compositions of relation embeddings. They propose to perform a simple nearest neighbor search around the embedding of the head relation, W^h, while avoiding type-incompatible relation compositions. Unfortunately, rules are searched for independently of one another, which seriously affects both quality and interpretability of the results as shown by our experimental evaluation.

³http://www.cs.ox.ac.uk/activities/machinelearning/Aleph/"}, {"section_index": "6", "section_name": "6 CONCLUSION", "section_text": "We presented a novel approach for performing rule mining directly over a compressed summary of a KB.
A major advantage over purely logical alternatives is that the relational embeddings automatically generalize beyond the observed facts; as a consequence, our method implicitly mines a completion of the knowledge base. The key idea is that theory learning can be approximated by a recovery problem in the space of relation embeddings, which can be solved efficiently using well-known sparse recovery algorithms. This novel formulation enables our method to deal with all propositional logic connectives (conjunction, disjunction, and negation), unlike previous techniques. We presented experimental results highlighting the ability of our miner to discover relevant and, most importantly, diverse rules.

One difficulty in applying our method is that classical sparse recovery algorithms require the complete enumeration of the candidate rule bodies, which is exponential in the rule length. In order to solve this issue, we plan to apply recent online recovery algorithms, like Online Search OMP (Weinstein & Wakin, 2012), which can explore the space of alternative bodies on-the-fly.

As the quality of relational embedding techniques improves, for instance thanks to path-based (Gu et al., 2015; Neelakantan et al., 2015; Garcia-Duran et al., 2015) and logic-based (Rocktäschel et al., 2015) training techniques, we expect the reliability and performance of theory learning in feature space to substantially improve as well."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "S.S. Chen, David L. Donoho, and M.A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33-61, 1998.

Saso Dzeroski and Nada Lavrac. Inductive Logic Programming: Techniques and Applications. 1994.

Saso Dzeroski and Nada Lavrac (eds.). Relational Data Mining. Springer-Verlag, New York, NY, USA, 2000.

L. Galarraga, C. Teflioudi, K. Hose, and F. M. Suchanek. Fast rule mining in ontological knowledge bases with AMIE+. The VLDB Journal, 24(6):707-730, 2015.

A. Garcia-Duran, A. Bordes, and N. Usunier. Composing relationships with translations. In Proceedings of EMNLP, pp. 286-290, 2015.

K. Gu, J. Miller, and P. Liang. Traversing knowledge graphs in vector space. arXiv preprint arXiv:1506.01094, 2015.

J. Hoffart, F. M. Suchanek, K. Berberich, and G. Weikum. YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelligence, 194:28-61, 2013.

Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of AAAI, volume 3, pp. 5, 2006.

S. Kok and P. Domingos. Learning Markov logic network structure via hypergraph lifting. In Proceedings of ICML, pp. 505-512, 2009.

Denis Krompaß, Maximilian Nickel, Xueyan Jiang, and Volker Tresp. Non-negative tensor factorization with RESCAL. In Tensor Methods for Machine Learning, ECML Workshop, 2013.

N. Landwehr, A. Passerini, L. De Raedt, and P. Frasconi. kFOIL: Learning simple relational kernels. In AAAI, volume 6, pp. 389-394, 2006.

B. London, T. Rekatsinas, B. Huang, and L. Getoor. Multi-relational learning using weighted tensor decomposition with modular loss. arXiv preprint arXiv:1303.1733, 2013.

D. Malioutov and K. Varshney. Exact rule learning via boolean compressed sensing. In Proceedings of ICML, 2013.

S. Muggleton. Inverse entailment and Progol.
New Generation Computing, 13(3-4):245-286, 1995.

Stephen Muggleton, Ramon Otero, and Alireza Tamaddoni-Nezhad. Inductive Logic Programming, volume 168. 1992.

Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In Proceedings of ICML, pp. 809-816, 2011.

Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. Factorizing YAGO: scalable machine learning for linked data. In Proceedings of WWW, pp. 271-280, 2012.

Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33, 2016.

Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, pp. 40-44, 1993.

J. R. Quinlan. Learning logical definitions from relations. Machine Learning, 5(3):239-266, 1990.

S. Riedel, L. Yao, A. McCallum, and B. M. Marlin. Relation extraction with matrix factorization and universal schemas. In Proceedings of NAACL-HLT, pp. 74-84, 2013.

T. Rocktäschel, S. Singh, and S. Riedel. Injecting logical background knowledge into embeddings for relation extraction. In Proceedings of NAACL-HLT, 2015.

R. Socher, D. Chen, C. D. Manning, and A. Ng. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of NIPS, pp. 926-934, 2013.

B. Yang, W. Yih, X. He, J. Gao, and L. Deng. Embedding entities and relations for learning and inference in knowledge bases. In International Conference on Learning Representations (ICLR), 2015."}, {"section_index": "8", "section_name": "APPENDIX A: ERROR DERIVATION", "section_text": "Let E^h := W^h − Σ_{B∈T_h} α_B ∏_{b∈B} W^b denote the residual of the recovery problem in Eq. (5), and let Ē^h := Y^h − X^T W^h X denote the reconstruction error of the embeddings, so that

Ē^h = Y^h − X^T [Σ_{B∈T_h} α_B ∏_{b∈B} W^b + E^h] X = Y^h − X^T [Σ_{B∈T_h} α_B ∏_{b∈B} W^b] X − X^T E^h X.

Then, the Frobenius norm of the reconstruction error of head h is:

‖Y^h − X^T [Σ_{B∈T_h} α_B ∏_{b∈B} W^b] X‖_F = ‖Ē^h + X^T E^h X‖_F ≤ ‖X^T E^h X‖_F + ‖Ē^h‖_F ≤ ‖X‖_F² ‖E^h‖_F + ‖Ē^h‖_F.

We note in passing that the bound can be tightened by reducing the norm of the entity embeddings X, for instance by choosing the proper embedding method. The question of how to find an optimal choice, however, is left as future work.

Figure 4: Detailed results for the nations KB with length 2 rules

Figure 5: Detailed results for the kinship KB with length 2 rules

Figure 6: Detailed results for the UMLS KB with length 2 rules

Figure 7: Detailed results for the family KB with length 2 rules
Figure 8: Detailed results for the nations KB with length 3 rules

Figure 9: Detailed results for the kinship KB with length 3 rules

Figure 10: Detailed results for the UMLS KB with length 3 rules

Figure 11: Detailed results for the family KB with length 3 rules"}]
HyAbMKwxe
[{"section_index": "0", "section_name": "TIGHTER BOUNDS LEAD TO IMPROVED CLASSIFIERS", "section_text": "Nicolas Le Roux
Criteo Research"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Classification aims at mapping inputs X ∈ 𝒳 to one or several classes y ∈ 𝒴. For instance, in object categorization, 𝒳 will be the set of images depicting an object, usually represented by the RGB values of each of their pixels, and 𝒴 will be a set of object classes, such as "car" or "dog".

We shall assume we are given a training set comprised of N independent and identically distributed labeled pairs (X_i, y_i). The standard approach to solve the problem is to define a parameterized class of functions p(y|X, θ) indexed by θ and to find the parameter θ* which minimizes the log-loss, i.e.

θ* = argmin_θ −(1/N) Σ_i log p(y_i|X_i, θ) = argmin_θ L_log(θ), with L_log(θ) = −(1/N) Σ_i log p(y_i|X_i, θ).   (1.1)

One justification for minimizing L_log(θ) is that θ* is the maximum likelihood estimator, i.e. the parameter which maximizes

θ* = argmax_θ p(𝒟|θ) = argmax_θ ∏_i p(y_i|X_i, θ).   (1.2)

ᵃIn practice, we choose the class deterministically and output argmax_y p(y|X, θ)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The standard approach to supervised classification involves the minimization of a log-loss as an upper bound to the classification error. While this is a tight bound early on in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, in the context where the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints."}

There is another reason to use Eq. 1.1. Indeed, the goal we are interested in is minimizing the classification error. If we assume that our classifiers are stochastic and output a class according to p(y|X_i, θ), then the expected classification error is the probability of choosing the incorrect classᵃ. This translates to

L(θ) = (1/N) Σ_i (1 − p(y_i|X_i, θ)) = 1 − (1/N) Σ_i p(y_i|X_i, θ).   (1.3)

This is a highly nonconvex function of θ, which makes its minimization difficult. However, we have

L(θ) = 1 − (1/N) Σ_i p(y_i|X_i, θ) ≤ 1 − (1/N) Σ_i (1 + log p(y_i|X_i, θ) + log K)/K = (K − 1 − log K)/K + L_log(θ)/K,   (1.4)

where K = |𝒴| is the number of classes (assumed finite), using the fact that, for every nonnegative t, we have t ≥ 1 + log t. Thus, minimizing L_log(θ) is equivalent to minimizing an upper bound of L(θ). Further, this bound is tight when p(y_i|X_i, θ) = 1/K for all y_i. As a model with randomly initialized parameters will assign probabilities close to 1/K to each class, it makes sense to minimize L_log(θ) rather than L(θ) early on in the optimization.

However, this bound becomes looser as θ moves away from its initial value. In particular, poorly classified examples, for which p(y_i|X_i, θ) is close to 0, have a strong influence on the gradient of L_log(θ) despite having very little influence on the gradient of L(θ). The model will thus waste capacity trying to bring these examples closer to the decision boundary rather than correctly classifying those already close to the boundary. This will be especially noticeable when the model has limited capacity, i.e. in the underfitting setting.

Section 2 proposes a tighter bound of the classification error as well as an iterative scheme to easily optimize it. Section 3 experiments with this iterative scheme using generalized linear models over a variety of datasets to estimate its impact. Section 4 then proposes a link between supervised learning and reinforcement learning, revisiting common techniques in a new light. Finally, Section 5 concludes and proposes future directions.
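The inequality in Eq. 1.4 is easy to verify numerically; the following self-contained snippet (a sanity check of ours, not from the paper) evaluates both sides on random probability assignments:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 10, 1000
P = rng.dirichlet(np.ones(K), size=N)               # p(.|X_i, theta) for N examples
p_true = P[np.arange(N), rng.integers(K, size=N)]   # p(y_i|X_i, theta)

L = 1.0 - p_true.mean()                 # expected classification error
L_log = -np.log(p_true).mean()          # log-loss
bound = (K - 1 - np.log(K)) / K + L_log / K
assert L <= bound + 1e-12               # Eq. 1.4; tight when p(y_i|X_i) = 1/K
print(L, bound)
```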
We now present a general class of upper bounds of the classification error which will prove useful when the model is far from its initialization.

Lemma 1. Let us denote

p_ν(y|X, θ) = p(y|X, ν) (1 + log [p(y|X, θ) / p(y|X, ν)]).   (2.1)

Then, for every ν, we have

p_ν(y|X, θ) ≤ p(y|X, θ),   (2.2)

p_θ(y|X, θ) = p(y|X, θ),  ∂p_ν(y|X, θ)/∂θ |_{ν=θ} = ∂p(y|X, θ)/∂θ.   (2.3)

Proof. We have

p(y|X, θ) = p(y|X, ν) · p(y|X, θ)/p(y|X, ν) ≥ p(y|X, ν) (1 + log [p(y|X, θ)/p(y|X, ν)]) = p_ν(y|X, θ).

The second line stems from the inequality t ≥ 1 + log t. p_θ(y|X, θ) = p(y|X, θ) is immediate when setting ν = θ in Eq. 2.1. Deriving p_ν(y|X, θ) with respect to θ yields

∂p_ν(y|X, θ)/∂θ = p(y|X, ν) ∂log p(y|X, θ)/∂θ = [p(y|X, ν)/p(y|X, θ)] ∂p(y|X, θ)/∂θ.

Taking ν = θ on both sides yields ∂p_ν(y|X, θ)/∂θ |_{ν=θ} = ∂p(y|X, θ)/∂θ.

Lemma 1 suggests that, if the current set of parameters is θ_t, an appropriate upper bound of the expected classification error is

L(θ) = 1 − (1/N) Σ_i p(y_i|X_i, θ) ≤ 1 − (1/N) Σ_i p(y_i|X_i, θ_t) (1 + log [p(y_i|X_i, θ)/p(y_i|X_i, θ_t)]) = C − (1/N) Σ_i p(y_i|X_i, θ_t) log p(y_i|X_i, θ),   (2.4)

where C is a constant independent of θ. We shall denote

L_{θ_t}(θ) = −(1/N) Σ_i p(y_i|X_i, θ_t) log p(y_i|X_i, θ).   (2.5)

One possibility is to recompute the bound after every gradient step. This is exactly equivalent to directly minimizing L. Such a procedure is brittle. In particular, Eq. 2.5 indicates that, if an example is poorly classified early on, its gradient will be close to 0 and it will be difficult to recover from this situation. Thus, we propose using Algorithm 1 for supervised learning: starting from an initial θ_0, for t = 0, ..., T − 1, set the importance weights to p(y_i|X_i, θ_t) and solve θ_{t+1} = argmin_θ L_{θ_t}(θ) = argmin_θ −(1/N) Σ_i p(y_i|X_i, θ_t) log p(y_i|X_i, θ). In regularly recomputing the bound, we ensure that it remains close to the quantity we are interested in and that we do not waste time optimizing a loose bound.

Additionally, our idea extends naturally to the case where p is a complicated function of θ and not easily written as a sum of a convex and a concave function. This might lead to nonconvex inner optimizations but we believe that this can still yield lower classification error. A longer study in the case of deep networks is planned.

The idea of computing tighter bounds during optimization is not new. In particular, several authors used a CCCP-based (Yuille & Rangarajan, 2003) procedure to achieve tighter bounds for SVMs (Xu et al., 2006; Collobert et al., 2006; Ertekin et al., 2011). Though Collobert et al. (2006) show a small improvement of the test error, the primary goal was to reduce the number of support vectors to keep the testing time manageable. Also, the algorithm proposed by Ertekin et al. (2011) required the setting of a hyperparameter, s, which has a strong influence on the final solution (see Fig. 5 in their paper). Finally, we are not aware of similar ideas in the context of the logistic loss.

As this model further optimizes the training classification accuracy, regularization is often needed. The standard optimization procedure minimizes the following regularized objective:

θ* = argmin_θ −Σ_i log p(y_i|X_i, θ) + λ Ω(θ) = argmin_θ −(1/K) Σ_i log p(y_i|X_i, θ) + (λ/K) Ω(θ).

Thus, we can view this as an upper bound of the following "true" objective:

θ* = argmin_θ −Σ_i p(y_i|X_i, θ) + (λ/K) Ω(θ).
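A hypothetical sketch of Algorithm 1 for a linear softmax classifier, with plain gradient descent standing in for the convex inner solver (the paper's experiments use SAG); names and hyperparameters are illustrative:

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_tighter_bound(X, y, K, T=10, inner_steps=1000, lr=0.1):
    """Alternate between recomputing the importance weights w_i = p(y_i|X_i, theta_t)
    and minimizing the weighted log-loss L_{theta_t} over theta."""
    N, d = X.shape
    W = np.zeros((d, K))
    Y = np.eye(K)[y]                          # one-hot labels
    w = np.ones(N)                            # initial weights: plain log-loss
    for t in range(T):
        for _ in range(inner_steps):          # minimize -1/N sum_i w_i log p(y_i|X_i)
            P = softmax(X @ W)
            W -= lr * (X.T @ (w[:, None] * (P - Y))) / N
        w = softmax(X @ W)[np.arange(N), y]   # recompute the bound at theta_t
    return W
```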
"}, {"section_index": "3", "section_name": "3 EXPERIMENTS", "section_text": "We experimented with the impact of using tighter bounds on the expected misclassification rate on several datasets, which will each be described in their own section. The experimental setup for all datasets was as follows. We first set aside part of the dataset to compose the test set. We then performed k-fold cross-validation, using a generalized linear model, on the remaining datapoints for different values of T, the number of times the importance weights were recomputed, and the ℓ2-regularizer λ. For each value of T, we then selected the set of hyperparameters (λ and the number of iterations) which achieved the lowest validation classification error. We computed the test error for each of the k models (one per fold) with these hyperparameters. This allowed us to get a confidence interval on the test error, where the random variable is the training set but not the test set.

For a fair comparison, each internal optimization was run for Z updates so that ZT was constant. Each update was computed on a randomly chosen minibatch of 50 datapoints using the SAG algorithm (Le Roux et al., 2012). Since we used a generalized linear model, each internal optimization was convex and thus had no optimization hyperparameter.

Fig. 1 presents the training classification errors on all the datasets.

The Covertype binary dataset (Collobert et al., 2002) has 581012 datapoints in dimension 54 and 2 classes. We used the first 90% for the cross-validation and the last 10% for testing. Due to the small dimension of the input, linear models strongly underfit, a regime in which tighter bounds are most beneficial. We see in Fig. 2 that using T > 1 leads to much lower training and validation classification errors. Training and validation curves are presented in Fig. 2 and the test classification error is listed in Table 1."}, {"section_index": "4", "section_name": "3.2 ALPHA DATASET", "section_text": "The Alpha dataset is a binary classification dataset used in the Pascal Large-Scale challenge and contains 500000 samples in dimension 500. We used the first 400000 examples for the cross-validation and the last 100000 for testing. A logistic regression trained on this dataset overfits quickly and, as a result, the results for all values of T are equivalent. Training and validation curves are presented in Fig. 3 and the test classification error is listed in Table 2.

Figure 1: Training classification errors for covertype (top left), alpha (top right), MNist (bottom left) and IJCNN (bottom right). We can immediately see that all values of T > 1 yield significantly lower errors than the standard log-loss (the confidence intervals represent 3 standard deviations).

Figure 2: Training (top) and validation (bottom) negative log-likelihood (left) and classification error (right) for the covertype dataset. We only display the result for the value of λ yielding the lowest validation error. As soon as the importance weights are recomputed, the NLL increases and the classification error decreases (the confidence intervals represent 3 standard deviations).
T      Z      Test error ± 3σ (%)
1000   1e5    32.88 ± 0.07
100    1e6    32.96 ± 0.06
10     1e7    32.85 ± 0.06
1      1e8    36.32 ± 0.06

Table 1: Test error for the models reaching the best validation error for various values of T on the covertype dataset. We can see that any value of T greater than 1 leads to a significant improvement over the standard log-loss (the confidence intervals represent 3 standard deviations).

Figure 3: Training (top) and validation (bottom) negative log-likelihood (left) and classification error (right) for the alpha dataset. We only display the result for the value of λ yielding the lowest validation error. As soon as the importance weights are recomputed, the NLL increases. Overfitting occurs very quickly and the best validation error is the same for all values of T (the confidence intervals represent 3 standard deviations).

Table 2: Test error for the models reaching the best validation error for various values of T on the alpha dataset. We can see that overfitting occurs very quickly and, as a result, all values of T lead to the same result as the standard log-loss."}, {"section_index": "5", "section_name": "3.3 MNIST DATASET", "section_text": "The MNist dataset is a digit recognition dataset with 70000 samples. The first 60000 were used for the cross-validation and the last 10000 for testing. Inputs have dimension 784 but 67 of them are always equal to 0. Despite overfitting occurring quickly, values of T greater than 1 yield significant improvements over the log-loss. Training and validation curves are presented in Fig. 4 and the test classification error is listed in Table 3.

T      Z      Test error ± 3σ (%)
1000   1e5    7.00 ± 0.08
100    1e6    7.01 ± 0.05
10     1e7    6.97 ± 0.08
1      1e8    7.46 ± 0.11

Table 3: Test error for the models reaching the best validation error for various values of T on the MNist dataset. The results for all values of T strictly greater than 1 are comparable and significantly better than for T = 1.

The IJCNN dataset is a dataset with 191681 samples. The first 80% of the dataset were used for training and validation (70% for training, 10% for validation, using random splits), and the last 20% were used for testing. Inputs have dimension 23, which means we are likely to be in the underfitting regime. Indeed, larger values of T lead to significant improvements over the log-loss. Training and validation curves are presented in Fig. 5 and the test classification error is listed in Table 4.
Figure 4: Training (top) and validation (bottom) negative log-likelihood (left) and classification error (right) for the MNist dataset. We only display the result for the value of λ yielding the lowest validation error. As soon as the importance weights are recomputed, the NLL increases. Overfitting occurs quickly but higher values of T still lead to lower validation error. The best training error was 2.52% with T = 10.

Figure 5: Training (top) and validation (bottom) negative log-likelihood (left) and classification error (right) for the IJCNN dataset. We only display the result for the value of λ yielding the lowest validation error. As soon as the importance weights are recomputed, the NLL increases. Since the number of training samples is large compared to the dimension of the input, the standard logistic regression is underfitting and higher values of T lead to better validation errors.

T      Z      Test error ± 3σ (%)
1000   1e5    4.62 ± 0.12
100    1e6    5.26 ± 0.33
10     1e7    5.87 ± 0.13
1      1e8    6.19 ± 0.12

Table 4: Test error for the models reaching the best validation error for various values of T on the IJCNN dataset. Larger values of T lead to significantly lower test errors."}, {"section_index": "6", "section_name": "SUPERVISED LEARNING AS POLICY OPTIMIZATION", "section_text": "We now propose an interpretation of supervised learning which closely matches that of direct policy optimization in reinforcement learning. This allows us to naturally address common issues in the literature, such as optimizing ROC curves or allowing a classifier to withhold taking a decision.

A machine learning algorithm is often only one component of a larger system whose role is to make decisions, whether it is choosing which ad to display or deciding if a patient needs a specific treatment. Some of these systems also involve humans. Such systems are complex to optimize and it is often appealing to split them into smaller components which are optimized independently. However, such splits might lead to poor decisions, even when each component is carefully optimized (Bottou). This issue can be alleviated by making each component optimize the full system with respect to its own parameters. Doing so requires taking into account the reaction of the other components in the system to the changes made, which cannot in general be modeled. However, one may cast it as a reinforcement learning problem where the environment is represented by everything outside of our
Pushing the analogy further, we see that in one-step policy learning, we try to find a policy p(y|X, θ) over actions y given the state X to minimize the expected loss defined as

L(\theta) = -\sum_i \sum_y R(y, X_i)\, p(y|X_i, \theta) .

L(θ) is equivalent to the L(θ) from Eq. 1.3 where all actions have a reward of 0 except for the action choosing the correct class y_i, yielding R(y_i, X_i) = 1. One major difference between policy learning and supervised learning is that, in policy learning, we only observe the reward for the actions we have taken, while in supervised learning, the reward for all the actions is known. (In standard policy learning, we actually consider full rollouts which include not only actions but also state changes due to these actions.)

Casting the classification problem as a specific policy learning problem yields a loss function commensurate with a reward. In particular, it allows us to make explicit the rewards associated with each decision, which was difficult with Eq. 1.1. We will now review several possibilities opened by this formulation."}, {"section_index": "7", "section_name": "OPTIMIZING THE ROC CURVE", "section_text": "In some scenarios, we might be interested in other performance metrics than the average classification error. In search advertising, for instance, we are often interested in maximizing the precision at a given recall. Mozer et al. (2001) address the problem by emphasizing the training points whose output is within a certain interval. Gasso et al. (2011) and Parambath et al. (2014), on the other hand, assign a different cost to type I and type II errors, learning which values lead to the desired false positive rate. Finally, Bach et al. (2006) propose a procedure to find the optimal solution for all costs efficiently in the context of SVMs and showed that the resulting models are not the optimal models in the class.

To test the impact of optimizing the probabilities rather than a surrogate loss, we reproduced the binary problem of Bach et al. (2006). We computed the average training and testing performance over 10 splits. An example of the training set and the results are presented in Fig. 6.

[Figure 6 plot: training data (left) and test ROC curve (right, true positive rate vs. false positive rate) for T = 1 and T = 10.]

Figure 6: Training data (left) and test ROC curve (right) for the binary classification problem from Bach et al. (2006). The black dots are obtained when minimizing the log-loss for various values of the cost asymmetry. The red stars correspond to the ROC curve obtained when directly optimizing the probabilities. While the former is not concave, a problem already mentioned by Bach et al. (2006), the latter is.

Even though working directly with probabilities solved the non-concavity issue, we still had to explore all possible cost asymmetries to draw this curve. In particular, if we had been asked to maximize the true positive rate for a given false positive rate, we would have needed to draw the whole curve then find the appropriate point.

However, expressing the loss directly as a function of the probabilities of choosing each class allows us to cast this requirement as a constraint and solve the following constrained optimization problem:

\theta^* = \arg\min_\theta -\frac{1}{N_1} \sum_{i/y_i=1} p(1|x_i, \theta) \quad \text{such that} \quad \frac{1}{N_0} \sum_{i/y_i=0} p(1|x_i, \theta) \le c_{FP} ,

with N_0 (resp. N_1) the number of examples belonging to class 0 (resp. class 1). Since p(1|x_i, \theta) = 1 - p(0|x_i, \theta), we can solve the following Lagrangian problem:

\min_\theta \max_{\lambda \ge 0} L(\theta, \lambda) = \min_\theta \max_{\lambda \ge 0} \; -\frac{1}{N_1} \sum_{i/y_i=1} p(1|x_i, \theta) + \lambda \left( 1 - \frac{1}{N_0} \sum_{i/y_i=0} p(0|x_i, \theta) - c_{FP} \right) .

This is an approach proposed by Mozer et al. (2001) who then minimize this function directly. We can however replace L(θ, λ) with the following upper bound:

\bar{L}(\theta, \lambda \mid \nu) = -\frac{1}{N_1} \sum_{i/y_i=1} p(1|x_i, \nu) \left( 1 + \log \frac{p(1|x_i, \theta)}{p(1|x_i, \nu)} \right) + \lambda \left( 1 - \frac{1}{N_0} \sum_{i/y_i=0} p(0|x_i, \nu) \left( 1 + \log \frac{p(0|x_i, \theta)}{p(0|x_i, \nu)} \right) - c_{FP} \right) ,

and jointly optimize over θ and λ. Even though the constraint is on the upper bound and thus will not be exactly satisfied during the optimization, the increasing tightness of the bound with the convergence will lead to a satisfied constraint at the end of the optimization.
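As a concrete illustration of the joint optimization over θ and λ, the following is a minimal sketch for a logistic model using primal descent and dual ascent on the plain Lagrangian (rather than the log upper bound); the model, step sizes and synthetic data are illustrative assumptions, not the implementation used in the experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def constrained_step(theta, lam, X, y, c_fp, lr_theta=0.1, lr_lam=0.1):
    """One primal-descent / dual-ascent step on the Lagrangian
    L(theta, lam) = -mean_{y=1} p(1|x) + lam * (mean_{y=0} p(1|x) - c_fp)."""
    p1 = sigmoid(X @ theta)                    # p(1 | x_i, theta)
    dp1 = (p1 * (1.0 - p1))[:, None] * X       # gradient of p1 w.r.t. theta, per row
    pos, neg = (y == 1), (y == 0)
    grad_theta = -dp1[pos].mean(axis=0) + lam * dp1[neg].mean(axis=0)
    theta = theta - lr_theta * grad_theta                   # descend in theta
    lam = max(0.0, lam + lr_lam * (p1[neg].mean() - c_fp))  # ascend in lam, keep lam >= 0
    return theta, lam

# toy run on synthetic 2-D data
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=400)
X = rng.normal(size=(400, 2)) + 1.5 * y[:, None]
theta, lam = np.zeros(2), 1.0
for _ in range(2000):
    theta, lam = constrained_step(theta, lam, X, y, c_fp=0.05)
print("expected false positive rate:", sigmoid(X[y == 0] @ theta).mean())
```

Under the policy view, the printed quantity is the expected false positive rate of the stochastic classifier, which the dual variable drives towards the target c_fp.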
We show in Fig. 7 the obtained false positive rate as a function of the required false positive rate and see that the constraint is close to being perfectly satisfied. One must note, however, that the ROC curve obtained using the constrained optimization problem matches that of T = 1, i.e. it is not concave. We do not have an explanation as to why the behaviour is not the same when solving the constrained optimization problem and when optimizing an asymmetric cost for all values of the asymmetry.

[Figure 7 plot: obtained test false positive rate vs. desired false positive rate, both on [0, 1].]

Figure 7: Test false positive rate as a function of the desired false positive rate c_FP. The dotted line representing the optimal behaviour, we can see that the constraint is close to being satisfied. T = 10 was used."}, {"section_index": "8", "section_name": "ALLOWING UNCERTAINTY IN THE DECISION", "section_text": "Let us consider a cancer detection algorithm which would automatically classify patients in two categories: healthy or ill. In practice, this algorithm will not be completely accurate and, given the high price of a misclassification, we would like to include the possibility for the algorithm to hand over the decision to the practitioner. In other words, it needs to include the possibility of being \"Undecided\".

The standard way of handling this situation is to manually set a threshold on the output of the classifier and, should the maximum score across all classes be below that threshold, deem the example too hard to classify. However, it is generally not obvious how to set the value of that threshold nor how it relates to the quantity we care about, even though some authors provided guidelines (?). The difficulty is heightened when the prior probabilities of each class are very different.

Eq. 4.1 allows us to naturally include an extra \"action\", the \"Undecided\" action, which has its own reward. This reward should be equal to the reward of choosing the correct class (i.e., 1) minus the cost c_h of resorting to external intervention, which is less than 1 since we would otherwise rather have an error than be undecided. (This is assuming that the external intervention always leads to the correct decision. Any other setting can easily be used.) Let us denote by r_h = 1 - c_h the reward obtained when the model chooses the \"Undecided\" class. Then, the reward obtained when the input is X_i is:

R(y_i|X_i) = 1 , \qquad R(\text{\"Undecided\"}|X_i) = r_h ,

and the average reward under the policy is p(y_i|X_i, \theta) + r_h\, p(\text{\"Undecided\"}|X_i, \theta). Learning this model on a training set is equivalent to minimizing the following quantity:

\theta^* = \arg\min_\theta -\frac{1}{N} \sum_i \left( p(y_i|X_i, \theta) + r_h\, p(\text{\"Undecided\"}|X_i, \theta) \right) .

For each training example, we have added another example with importance weight r_h and class \"Undecided\". If we were to solve this problem through a minimization of the log-loss, it is well-known that the optimal solution would be, for each example X_i, to predict y_i with probability 1/(1 + r_h) and \"Undecided\" with probability r_h/(1 + r_h). However, when optimizing the weighted sum of probabilities, the optimal solution is still to predict y_i with probability 1. In other words, adding the \"Undecided\" class does not change the model if it has enough capacity to learn the training set accurately.
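This difference between the two criteria can be checked numerically with a minimal sketch: for a single example, put probability q on the true class y_i and 1 - q on "Undecided", and maximize each objective over q (the value of r_h below is an arbitrary illustration).

```python
import numpy as np

r_h = 0.7                                    # assumed reward for abstaining, r_h = 1 - c_h
q = np.linspace(1e-6, 1 - 1e-6, 10001)       # probability put on the true class y_i

linear = q + r_h * (1 - q)                   # weighted sum of probabilities
log_loss = np.log(q) + r_h * np.log(1 - q)   # weighted log-likelihood

print("linear objective argmax q  =", q[np.argmax(linear)])    # -> 1.0: never abstain
print("log-loss objective argmax q =", q[np.argmax(log_loss)]) # -> 1/(1+r_h) ~= 0.588
```

The linear objective is increasing in q whenever r_h < 1, so its optimum always predicts y_i with probability 1, matching the statement above; the log-loss optimum splits mass as 1/(1 + r_h) versus r_h/(1 + r_h).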
"}, {"section_index": "9", "section_name": "DISCUSSION AND CONCLUSION", "section_text": "Using a general class of upper bounds of the expected classification error, we showed how a sequence of minimizations could lead to reduced classification error rates. However, there are still a lot of questions to be answered. As using T > 1 increases overfitting, one might wonder whether the standard regularizers are still adapted. Also, current state-of-the-art models, especially in image classification, already use strong regularizers such as dropout. The question remains whether using T > 1 with these models would lead to an improvement.

Additionally, it makes less and less sense to think of machine learning models in isolation. They are increasingly often part of large systems and one must think of the proper way of optimizing them in this setting. The modification proposed here led to an explicit formulation for the true impact of a classifier. This facilitates the optimization of such a classifier in the context of a larger production system where additional costs and constraints may be readily incorporated. We believe this is a critical avenue of research to be explored further.

We thank Francis Bach, Léon Bottou, Guillaume Obozinski, and Vianney Perchet for helpful discussions."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Francis R. Bach, David Heckerman, and Eric Horvitz. Considering cost asymmetry in learning classifiers. The Journal of Machine Learning Research, 7:1713-1741, 2006.

Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. Neural Computation, 14(5):1105-1114, 2002.

Seyda Ertekin, Léon Bottou, and C. Lee Giles. Nonconvex online support vector machines. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(2):368-381, 2011.

Alan L. Yuille and Anand Rangarajan. The concave-convex procedure. Neural Computation, 15(4):915-936, 2003."}]
ry_sjFqgx | [{"section_index": "0", "section_name": "PROGRAM SYNTHESIS FOR CHARACTER LEVEL LANGUAGE MODELING", "section_text": "Pavol Bielik, Veselin Raychev & Martin Vechev
Department of Computer Science, ETH Zurich, Switzerland
{pavol.bielik, veselin.raychev, martin.vechev}@inf.ethz.ch"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We propose a statistical model applicable to character level language modeling and show that it is a good fit for both program source code and English text. The model is parameterized by a program from a domain-specific language (DSL) that allows expressing non-trivial data dependencies.
Learning is done in two phases: (i) we synthesize a program from the DSL, essentially learning a good representation for the data, and (ii) we learn parameters from the training data. The process is done via counting, as in simple language models such as n-grams. Our experiments show that the precision of our model is comparable to that of neural networks while sharing a number of advantages with n-gram models such as fast query time and the capability to quickly add and remove training data samples. Further, the model is parameterized by a program that can be manually inspected, understood and updated, addressing a major problem of neural networks."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Recent years have shown increased interest in learning from large datasets in order to make accurate predictions on important tasks. A significant catalyst for this movement has been the ground breaking precision improvements on a number of cognitive tasks achieved via deep neural networks. Deep neural networks have made substantial inroads in areas such as image recognition (Krizhevsky et al., 2012) and natural language processing (Jozefowicz et al., 2016) thanks to large datasets, deeper networks (He et al., 2016) and substantial investments in computational power (Oh & Jung, 2004).

While neural networks remain a practical choice for many applications, they have been less effective when used for more structured tasks such as those concerning predictions about programs (Allamanis et al., 2016; Raychev et al., 2014). Initially targeting the programming languages domain, a new method for synthesizing probabilistic models proposed by Bielik et al. (2016), without a neural network, has been shown to be effective for modeling source code, and has gained traction.

In this work, we investigate the applicability of this new method to tasks which have so far been addressed with recurrent neural networks and n-gram language models. The probabilistic models we propose are defined by a program from a domain-specific language (DSL). A program in this DSL describes a probabilistic model such as an n-gram language model or a variant of it - e.g. trained on subsets of the training data, queried only when certain conditions are met and specialized in making specific classes of predictions. These programs can also be combined to produce one large program that queries different specialized submodels depending on the context of the query.

For example, consider predicting the characters in an English text. Typically, the first character of a word is much more difficult to predict than other characters and thus we would like to predict it differently. Let f be a function that takes a prediction position t in a text x and returns a list of characters to make the prediction on. For example, let f be defined as follows:

f(t, x) = \begin{cases} x_s & \text{if } x_{t-1} \text{ is a whitespace} \\ x_{t-2}\, x_{t-1} & \text{otherwise} \end{cases}

where x_s is the first character of the word preceding the predicted word at position t. Now, consider a model that predicts a character by estimating each character x_t from a distribution P(x_t | f(t, x)). For positions that do not follow a whitespace, this distribution uses the two characters preceding x_t and thus it simply encodes a trigram language model (in this example, without backoff, but we consider backoff separately). However, P(x_t | f(t, x)) combines the trigram model with another interesting model for the samples in the beginning of the words where trigram models typically fail.

More generally, f is a function that describes a probabilistic model. By simply varying f, we can define trivial models such as n-gram models, or much deeper and interesting models and combinations. In this work, we draw f from a domain-specific language (DSL) that resembles a standard programming language: it includes if statements, limited use of variables and one iterator over the text, but overall that language can be further extended to handle specific tasks depending on the nature of the data. The learning process now includes finding f from the DSL such that the model P(x_t | f(t, x)) performs best on a validation set, and we show that we can effectively learn such functions using Markov chain Monte Carlo (MCMC) search techniques combined with decision tree learning.
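To make the example function f above concrete, here is a minimal Python sketch of it; the whitespace test and the word-boundary handling are our assumptions, not part of the DSL itself.

```python
def f(t, x):
    """Context for predicting x[t]: the two preceding characters inside a
    word, or the first character of the previous word when x[t] starts a new
    word. A sketch of the example above; boundary handling is assumed."""
    if t == 0:
        return ""
    if not x[t - 1].isspace():
        return x[max(0, t - 2):t]               # trigram-style context x_{t-2} x_{t-1}
    j = t - 1
    while j > 0 and x[j - 1].isspace():         # skip the whitespace run
        j -= 1
    while j > 0 and not x[j - 1].isspace():     # walk back to the word start
        j -= 1
    return x[j]                                 # x_s: first char of the previous word

print(f(6, "the quick"))   # mid-word: 'qu' (a trigram context)
print(f(4, "the quick"))   # word start: 't' (first character of "the")
```

A count-based model P(x_t | f(t, x)) can then be estimated by tabulating how often each character follows each returned context.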
Advantages An advantage of having a function f drawn from a DSL is that f becomes humanly readable, in contrast to neural networks that generally provide non-human-readable matrices (Li et al., 2016). Further, the training procedure is two-fold: first, we synthesize f from the DSL, and then for a given f, we estimate probabilities for P(x_t | f(t, x)) by counting in the training data. This gives us additional advantages such as the ability to synthesize f and learn the probability distribution P on different datasets: e.g., we can easily add and remove samples from the dataset used for computing the probability estimate P. Finally, because the final model is based on counting, estimating probabilities P(x_t | f(t, x)) is efficient: applying f and looking up in a hashtable determines how frequently in the training data x_t appears in the resulting context of f(t, x).

Before we continue, we note an important point about DSL-based models. In contrast to deep neural networks that can theoretically encode all continuous functions (Hornik, 1991), a DSL by definition targets a particular application domain, and thus comes with restricted expressiveness. Increasing the expressibility of the DSL (e.g., by adding new instructions) can in theory make the synthesis problem intractable or even undecidable. Overall, this means that a DSL should balance between expressibility and efficiency of synthesizing functions in it (Gulwani, 2010).

Our main contributions are:

- We define a DSL which is useful in expressing (character level) language models. This DSL can express n-gram language models with backoff, with caching, and can compose them using if statements in a decision-tree-like fashion. We also provide efficient synthesis procedures for functions in this DSL.
- We experimentally compare our DSL-based probabilistic model with state-of-the-art neural network models on two popular datasets: the Linux Kernel dataset (Karpathy et al., 2015) and the Hutter Prize Wikipedia dataset (Hutter, 2012)."}, {"section_index": "3", "section_name": "A DSL FOR CHARACTER LEVEL LANGUAGE MODELING", "section_text": "We now provide a definition of our domain-specific language (DSL) called TChar for learning character level language models. At a high level, executing a program p ∈ TChar at a position t ∈ N in the input sequence of characters x ∈ X returns an LMProgram that specifies the language model to be used at position t. That is, p ∈ TChar : N × X → LMProgram. The best model to use at position t is selected depending on the current program state (updated after processing each character) and conditioning on the dynamically computed context for position t of the input text x. This allows us to train specialized models suited for various types of prediction such as for comments vs. source code, newlines, indentation, opening/closing brackets, the first character of a word and many more. Despite the fact that the approach uses many specialized models, we still obtain a valid probability distribution as all of these models operate on disjoint data and are valid probability distributions. Subsequently, the selected program f ∈ LMProgram determines the language model that estimates the probability of character x_t by building a probability distribution P(x_t | f(t, x)). That is, the probability distribution is conditioned on the context obtained by executing the program f.

The syntax of TChar is shown in Fig. 1 and is designed such that it contains general purpose instructions and statements that operate over a sequence of characters. One of the advantages of our approach is that this language can be further refined by adding more instructions that are specialized for a given domain at hand (e.g., in future versions, we can easily include a set of instructions specific to modeling C/C++ source code). We now informally describe the general TChar language of this work. We provide a formal definition and semantics of the TChar language in the appendix.

LMProgram ::= SimpleProgram
            | SimpleProgram backoff d; LMProgram
            | (SimpleProgram, SimpleProgram)

Figure 1: Syntax of TChar language for character level language modeling. Program semantics are given in the appendix."}, {"section_index": "4", "section_name": "2.1 SIMPLEPROGRAMS", "section_text": "The SimpleProgram is a basic building block of the TChar language. It describes a loop-free and branch-free program that accumulates context with values from the input by means of navigating within the input (using Move instructions) and writing the observed values (using Write instructions). The result of executing a SimpleProgram is the accumulated context which is used either to condition the prediction, to update the program state or to determine which program to execute next.

Move Instructions We define four basic types of Move instructions - LEFT and RIGHT that move to the previous and next character respectively, PREV_CHAR that moves to the most recent position in the input with the same value as the current character, and PREV_POS which works as PREV_CHAR but only considers positions in the input that are partitioned into the same language model. Additionally, for each character c in the input vocabulary we generate an instruction PREV_CHAR(c) that navigates to the most recent position of character c.
We note that all Move instructions are allowed to navigate only to the left of the character x_t that is to be predicted.

Write Instructions We define three Write instructions - WRITE_CHAR that writes the value of the character at the current position, WRITE_HASH that writes a hash of all the values seen between the current position and the position of the last Write instruction, and WRITE_DIST that writes a distance (i.e., number of characters) between the current position and the position of the last Write instruction. In our implementation we truncate WRITE_HASH and WRITE_DIST to a maximum size of 16.

Example With Write and Move instructions, we can express various programs that extract useful context for a given position t in text x. For example, we can encode the context used in a trigram language model with the program LEFT WRITE_CHAR LEFT WRITE_CHAR. We can also express programs such as LEFT PREV_CHAR RIGHT WRITE_CHAR that finds the previous occurrence of the character on the left of the current position and records the character following it in the context."}, {"section_index": "5", "section_name": "2.2 SWITCHPROGRAMS", "section_text": "A problem of using only one SimpleProgram is that the context it generates may not work well for the entire dataset, although combining several such programs can generate suitable contexts for the different types of predictions. To handle these cases, we introduce SwitchProgram with switch statements that can conditionally select appropriate subprograms to use depending on the context of the prediction. The checked conditions of switch are themselves programs p_guard ∈ SimpleProgram that accumulate values that are used to select the appropriate branch that should be executed next. During learning the goal is then to synthesize the best program p_guard to be used as a guard, the values v_1, ..., v_n used as branch conditions as well as the programs to be used in each branch. We note that we support disjunction of values within branch conditions. As a result, even if the same program is to be used for two different contexts v_1 and v_2, the synthesis procedure can decide whether a single model should be trained for both (by synthesizing a single branch with a disjunction of v_1 and v_2) or separate models should be trained for each (by synthesizing two branches).

Example We now briefly discuss some of the BranchPrograms synthesized for the Linux Kernel dataset in our experiments described in Section 3. By inspecting the synthesized program we identified interesting SimpleProgram building blocks such as PREV_CHAR(' ') RIGHT WRITE_CHAR that conditions on the first character of the current word, PREV_CHAR(\n) WRITE_DIST that conditions on the distance from the beginning of the line, or PREV_CHAR(_) LEFT WRITE_CHAR that checks the preceding character of a previous underscore (useful for predicting variable names). These are examples of more specialized programs that are typically found in the branches of nested switches of a large TChar program. The top level switch of the synthesized program used the character before the predicted position (i.e. switch LEFT WRITE_CHAR) and handles separately cases such as newlines, tabs, special characters (e.g., !//@.*), upper-case characters and the rest.
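To make the execution model of these straight-line programs concrete, here is a minimal Python sketch; the encoding of instructions as tuples and the exact boundary behaviour are our assumptions, and PREV_POS, WRITE_HASH and WRITE_DIST are omitted.

```python
def run_simple(program, x, t):
    """Execute a straight-line SimpleProgram for the prediction at position t
    and return the accumulated context (a sketch covering LEFT, RIGHT,
    PREV_CHAR and WRITE_CHAR only)."""
    i, ctx = t, []
    for ins, *arg in program:
        if ins == "LEFT":
            i = max(0, i - 1)
        elif ins == "RIGHT":
            i = min(t - 1, i + 1)            # may only look left of x[t]
        elif ins == "PREV_CHAR":             # most recent earlier occurrence
            target = arg[0] if arg else x[i]
            j = x.rfind(target, 0, i)
            if j != -1:
                i = j
        elif ins == "WRITE_CHAR":
            ctx.append(x[i])
    return tuple(ctx)

# the trigram context program LEFT WRITE_CHAR LEFT WRITE_CHAR
trigram = [("LEFT",), ("WRITE_CHAR",), ("LEFT",), ("WRITE_CHAR",)]
print(run_simple(trigram, "abcde", 4))       # -> ('d', 'c')
```

The returned tuple is the context on which P(x_t | f(t, x)) is conditioned, and the same mechanism evaluates the guards of the switch statements described above.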
"}, {"section_index": "6", "section_name": "2.3 STATEPROGRAMS", "section_text": "A common difficulty in building statistical language models is capturing long range dependencies in the given dataset. Our TChar language partially addresses this issue by using Move instructions that can jump to various positions in the data using the PREV_CHAR and PREV_POS instructions. However, we can further improve by explicitly introducing a state to our programs using StateProgram. The StateProgram consists of two sequential operations - updating the current state and determining which program to execute next based on the value of the current state. For both we reuse the switch construct defined previously for SwitchProgram. In our work we consider an integer valued state that can be either incremented, decremented or left unmodified after processing each input character. We note that other definitions of the state, such as a stack based state, are possible.

Example As an example of a StateProgram, consider the question of detecting whether the current character is inside a comment or is source code. These denote very different types of data that we might want to model separately if it leads to an improvement in our cost metric. This can be achieved by using a simple state program with condition LEFT WRITE_CHAR LEFT WRITE_CHAR that increments the state on '/*', decrements on '*/' and leaves the state unchanged otherwise."}, {"section_index": "7", "section_name": "2.4 LMPROGRAMS", "section_text": "The LMProgram describes a probabilistic model trained and queried on a subset of the data as defined by the branches taken in the SwitchPrograms and StatePrograms. The LMProgram in TChar is instantiated with a language model described by a SimpleProgram plus backoff. That is, the prediction is conditioned on the sequence of values returned by executing the program, i.e. P(x_t | f(t, x)).

Recall that given a position t in text x, executing a SimpleProgram returns context f(t, x). For example, executing LEFT WRITE_CHAR LEFT WRITE_CHAR returns the two characters x_{t-1} x_{t-2} preceding x_t. In this example P(x_t | f(t, x)) is a trigram model. To be effective in practice, however, such models should support smoothing or backoff to lower order models. We provide backoff in two ways. First, because the context accumulated by the SimpleProgram is a sequence, we simply backoff to a model that uses a shorter sequence by using Witten-Bell backoff (Witten & Bell, 1991). Second, in the LMProgram we explicitly allow backoff to other models specified by a TChar program if the probability of the most likely character from vocabulary V according to the P(x_t | f(t, x)) model is less than a constant d. Additionally, for some of our experiments we also consider backoff to a cache model (Kuhn & De Mori, 1990).
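The explicit backoff rule can be written out as a small sketch; the distributions are plain dictionaries and the names are ours, matching the formal definition given in the appendix.

```python
def predict_with_backoff(dist_f1, dist_f2, d):
    """Distribution of `f1 backoff d; f2`: use f1 unless its most likely
    character has probability below the threshold d (a minimal sketch)."""
    if dist_f1 and max(dist_f1.values()) >= d:
        return dist_f1
    return dist_f2

p_f1 = {"a": 0.4, "b": 0.35, "c": 0.25}   # low-confidence specialized model
p_f2 = {"a": 0.6, "b": 0.3, "c": 0.1}     # fallback model
print(predict_with_backoff(p_f1, p_f2, d=0.5))   # -> falls back to p_f2
```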
Predicting Out-of-Vocabulary Labels Finally, we incorporate a feature of the language models proposed by Raychev et al. (2016a) that enables us not only to predict characters directly, but also to predict that a character is equal to some other character in the text. This is achieved by synthesising a pair (SimpleProgram, SimpleProgram). The first program is called the equality program and it navigates over the text to return characters that may be equal to the character that we are trying to predict. Then, the second program f describes P(x_t | f(t, x)) as described before, except that a possible output is equality to one of the characters returned by the equality program.

The goal of the synthesizer is, given a set of training and validation samples D, to find a program

p_{best} = \arg\min_{p \in TChar} cost(D, p)

where cost(D, p) = -logprob(D, p) + λ · Ω(p). Here logprob(D, p) is the log-probability of the trained models on the dataset D and Ω(p) is a regularization that penalizes complex functions to avoid over-fitting to the data. In our implementation, Ω(p) returns the number of instructions in p.

The language TChar essentially consists of two fragments: branches and straight-line SimplePrograms. To synthesize branches, we essentially need a decision tree learning algorithm that we instantiate with the ID3+ algorithm as described in Raychev et al. (2016a). To synthesize SimplePrograms we use a combination of brute-force search for very short programs (up to 5 instructions), genetic-programming-based search and Markov chain Monte Carlo-based search. These procedures are computationally feasible, because each SimpleProgram consists of only a small number of moves and writes. We provide more details about this procedure in Appendix B.1."}, {"section_index": "8", "section_name": "3 EXPERIMENTS", "section_text": "Datasets For our experiments we use two diverse datasets: a natural language one and a structured text (source code) one. Both were previously used to evaluate character-level language models - the Linux Kernel dataset (Karpathy et al., 2015) and the Hutter Prize Wikipedia dataset (Hutter, 2012). The Linux Kernel dataset contains header and source files in the C language shuffled randomly, and consists of 6,206,996 characters in total with vocabulary size 101. The Hutter Prize Wikipedia dataset contains the contents of Wikipedia articles annotated with meta-data using special mark-up (e.g., XML or hyperlinks) and consists of 100,000,000 characters with vocabulary size 205. For both datasets we use the first 80% for training, the next 10% for validation and the final 10% as a test set.

Evaluation Metrics To evaluate the performance of various probabilistic language models we use two metrics. First, we use the bits-per-character (BPC) metric which corresponds to the negative log likelihood of a given prediction E[-log_2 p(x_t | x_{<t})], where x_t is the character being predicted and x_{<t} denotes the characters preceding x_t. Further, we use the error rate which corresponds to the ratio of mistakes the model makes. This is a practical metric that directly quantifies how useful the model is in a concrete task (e.g., completion). As we will see, having two different evaluation metrics is beneficial as better (lower) BPC does not always correspond to better (lower) error rate.
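The two metrics can be written out as a short sketch; `model(prefix)` returning a dictionary of character probabilities is our assumed interface.

```python
import math

def bpc_and_error(model, text):
    """Bits-per-character -E[log2 p(x_t | x_<t)] and the error rate of the
    most likely prediction, evaluated over a text (a minimal sketch)."""
    nll, errors = 0.0, 0
    for t in range(1, len(text)):
        probs = model(text[:t])
        nll -= math.log2(probs.get(text[t], 1e-12))
        errors += max(probs, key=probs.get) != text[t]
    n = len(text) - 1
    return nll / n, errors / n

uniform = lambda prefix: {c: 1.0 / 3 for c in "abc"}
print(bpc_and_error(uniform, "abcabc"))   # -> (log2(3) ~= 1.585, 0.8)
```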
"}, {"section_index": "9", "section_name": "3.1 LANGUAGE MODELS", "section_text": "We compare the performance of our trained DSL model, instantiated with the TChar language described in Section 2, against two widely used language models - the n-gram model and recurrent neural networks. For all models we consider character level modeling of the dataset at hand. That is, the models are trained by feeding the input data character by character, without any knowledge of higher level word boundaries and dataset structure.

N-gram We use the n-gram model as a baseline model as it has traditionally been the most widely used language model due to its simplicity, efficient training and fast sampling. We note that an n-gram can be trivially expressed in the TChar language as a program containing a sequence of LEFT and WRITE instructions. To deal with data sparseness we have experimented with various smoothing techniques including Witten-Bell interpolation smoothing (Witten & Bell, 1991) and modified Kneser-Ney smoothing (Kneser & Ney, 1995; Chen & Goodman, 1998).

Recurrent Neural Networks To evaluate the effectiveness of the DSL model we compare to a recurrent network language model shown to produce state-of-the-art performance in various natural language processing tasks. In particular, for the Linux Kernel dataset we compare against a variant of recurrent neural networks with Long Short-Term Memory (LSTM) proposed by Hochreiter & Schmidhuber (1997). To train our models we follow the experimental set-up and use the implementation of Karpathy et al. (2015). We initialize all parameters uniformly in the range [-0.08, 0.08], use mini-batch stochastic gradient descent with batch size 50 and RMSProp (Dauphin et al., 2015) per-parameter adaptive updates with base learning rate 2 × 10^-3 and decay 0.95. Further, the network is unrolled for 100 time steps and we do not use dropout. Finally, the network is trained for 50 epochs (with early stopping based on a validation set) and the learning rate is decayed after 10 epochs by multiplying it with a factor of 0.95 each additional epoch. For the Hutter Prize Wikipedia dataset we compared to various other, more sophisticated models as reported by Chung et al. (2017).

DSL model To better understand various features of the TChar language we include experiments that disable some of the language features. Concretely, we evaluate the effect of including cache and backoff. In our experiments we backoff the learned program to 7-gram and 3-gram models and we use a cache size of 800 characters. The backoff thresholds d are selected by evaluating the model performance on the validation set. Finally, for the Linux Kernel dataset we manually include a StateProgram as a root that distinguishes between comments and code (illustrated in Section 2.3). The program learned for the Linux Kernel dataset contains ~700 BranchPrograms and ~2200 SimplePrograms and has over 8600 Move and Write instructions in total. We provide an interactive visualization of the program and its performance on the Linux Kernel dataset online at:

Table 1: Detailed comparison of LSTM, n-gram and DSL models on the Linux Kernel dataset (Karpathy et al., 2015).

Model                        Bits per Character   Error Rate   Training Time   Queries per Second   Model Size
LSTM (Layers x Hidden Size)
  2x128                      2.31                 40.1%        ~28 hours       4 000                5 MB
  2x256                      2.15                 37.9%        ~49 hours       1 100                15 MB
  2x512                      2.05                 38.1%        ~80 hours       300                  53 MB
n-gram
  4-gram                     2.49                 47.4%        1 sec           46 000               2 MB
  7-gram                     2.23                 37.7%        4 sec           41 000               24 MB
  10-gram                    2.32                 36.2%        11 sec          32 000               89 MB
  15-gram                    2.42                 35.9%        23 sec          21 500               283 MB
DSL model (This Work)
  TChar w/o cache & backoff  1.92                 33.3%        ~8 hours        62 000               17 MB
  TChar w/o backoff          1.84                 31.4%        ~8 hours        28 000               19 MB
  TChar w/o cache            1.75                 28.0%        ~8.2 hours      24 000               43 MB
  TChar                      1.53                 23.5%        ~8.2 hours      3 000                45 MB

Table 2: Bits-per-character metric for various neural language models (as reported by Chung et al., 2017) achieved on the Hutter Prize Wikipedia dataset, where the TChar model achieves competitive results. +State-of-the-art network combining character and word level models that are learned from the data.
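As background for the n-gram rows in Table 1, the following is a simplified sketch of a count-based character n-gram with Witten-Bell-style interpolation to shorter contexts; the uniform base distribution and the exact interpolation details are our assumptions, not the tuned implementation evaluated above.

```python
from collections import Counter, defaultdict

class WittenBellNGram:
    """Count-based character n-gram with Witten-Bell interpolation
    (a simplified sketch of the baseline)."""
    def __init__(self, order, text):
        self.order = order
        self.counts = defaultdict(Counter)        # context string -> next-char counts
        for t in range(len(text)):
            for k in range(order):                # contexts of length 0 .. order-1
                if t - k >= 0:
                    self.counts[text[t - k:t]][text[t]] += 1

    def prob(self, context, char):
        context = context[-(self.order - 1):] if self.order > 1 else ""
        p = 1.0 / 256                             # uniform base case (assumed)
        for k in range(len(context) + 1):         # shortest to longest context
            c = self.counts.get(context[len(context) - k:])
            if c is None:
                continue
            total, distinct = sum(c.values()), len(c)
            p = (c[char] + distinct * p) / (total + distinct)
        return p

lm = WittenBellNGram(3, "the theory of the thing")
print(lm.prob("th", "e"))
```

Each context lookup is a single hash access, which is why training and querying such models is fast, as the timing columns of Table 1 show.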
"}, {"section_index": "10", "section_name": "3.2 MODEL PERFORMANCE", "section_text": "Precision In terms of model precision, we can see that, as expected, the n-gram model performs worst in both BPC and error rate metrics. However, even though the best BPC is achieved for a 7-gram model, the error rate decreases up to 15-gram. This suggests that none of the smoothing techniques we tried can properly adjust to the data sparsity inherent in the higher order n-gram models. It is however possible that more advanced smoothing techniques such as one based on Pitman-Yor processes (Teh, 2006) might address this issue. As the DSL model uses the same smoothing technique as the n-grams, any improvement to smoothing is directly applicable to it.

As reported by Karpathy et al. (2015), the LSTM model trained on the Linux Kernel dataset improves BPC over the n-gram. However, in our experiments this improvement did not translate to a lower error rate. In contrast, our model is superior to the n-gram and LSTM in all configurations, beating the best other model significantly in both evaluation metrics - decreasing BPC by over 0.5 and improving error rate by more than 12%.

For the Hutter Prize Wikipedia dataset, even though the dataset consists of natural language text and is much less structured than the Linux Kernel, our model is competitive with several neural network models. Similar to the results achieved on the Linux Kernel, we expect the error rate of the DSL model for the Hutter Prize Wikipedia dataset, which is 30.5%, to be comparable to the error rate achieved by other models. However, this experiment shows that our model is less suitable for unstructured text such as the one found on Wikipedia.

Training Time Training time is dominated by the LSTM model that takes several days for the network with the highest number of parameters. On the other hand, training n-gram models is extremely fast since the model is trained simply by counting in a single pass over the training data. The DSL model sits between these two approaches and takes ~8 hours to train. The majority of the DSL training time is spent in the synthesis of SwitchPrograms where one needs to consider a massive search space of possible programs from which the synthesis algorithm aims to find one that is approximately the best (e.g., for the Linux Kernel dataset the number of basic instructions is 108, which means that naive enumeration of programs up to size 3 already leads to 108^3 different candidate programs). All of our experiments were performed on a machine with an Intel(R) Xeon(R) CPU E5-2690 with 14 cores. All training times are reported for parallel training on CPU. Using GPUs for training of the neural networks might provide additional improvement in training time.
Query (Sampling) Time Sampling the n-gram is extremely fast, answering ~46,000 queries per second, as each query corresponds to only several hash look-ups. The query time for the DSL model is similarly fast, and in fact can be even faster, answering ~62,000 queries per second. This is because it consists of two steps: (i) executing the TChar program that selects a suitable language model (which is very fast once the program has been learned), and (ii) querying the language model. The reason why the model can be faster is that there are fewer hash lookups, which are also faster due to the fact that the specialized language models are much smaller compared to the n-gram (e.g., ~22% of the models simply compute unconditioned probabilities). Adding backoff and cache to the DSL model decreases sampling speed, which is partially because our implementation is currently not optimized for querying and fully evaluates all of the models even though that is not necessary.

Model Size Finally, we include the size of all the trained models measured by the size in MB of the model parameters. The models have roughly the same size except for the n-gram models with high order, for which the size increases significantly. We note that the reason why both the n-gram and DSL models are relatively small is that we use a hash-based implementation for storing the prediction context. That is, in a 7-gram model the previous 6 characters are hashed into a single number. This decreases the model size significantly at the expense of some hash collisions."}, {"section_index": "11", "section_name": "4 RELATED WORK", "section_text": "Program Synthesis Program synthesis is a well studied research field in which the goal is to automatically discover a program that satisfies a given specification that can be expressed in various forms including input/output examples, logical formulas, sets of constraints or even natural language descriptions. In addition to the techniques that typically scale only to smaller programs and attempt to satisfy the specification completely (Alur et al., 2013; Solar-Lezama et al., 2006; Solar-Lezama, 2013; Jha et al., 2010), a recent line of work considers a different setting consisting of large (e.g., millions of examples) and noisy (i.e., no program can satisfy all examples perfectly) datasets. This is the setting that needs to be considered when synthesizing programs in a language such as TChar. Here, the work of Raychev et al. (2016b) showed how to efficiently synthesize straight-line programs and how to handle noise, then Raychev et al. (2016a) showed how to synthesize branches, and the work of Heule et al. (2015) proposed a way to synthesize loops.

In our work we take advantage of these existing synthesis algorithms and use them to efficiently synthesize a program in TChar. We propose a simple extension that uses MCMC to sample from a large amount of instructions included in TChar (many more than prior work). Apart from this, no other modifications were required in order to use existing techniques for the setting of character level language modeling we consider here.

Recurrent Neural Networks Recent years have seen an emerging interest in building neural language models over words (Bengio et al., 2003), characters (Karpathy et al., 2015; Sutskever et al., 2011; Wu et al., 2016) as well as combinations of both (Chung et al., 2017; Kim et al., 2015; Mikolov et al., 2012). Such models have been shown to achieve state-of-the-art performance in several domains and there is a significant research effort aimed at improving such models further.

In contrast, we take a very different approach, one that aims to explain the data by means of synthesizing a program from a domain-specific language. Synthesising such programs efficiently while achieving competitive performance to the carefully tuned neural networks creates a valuable resource that can be used as a standalone model, combined with existing neural language models or even used for their training.
For example, the context on which the predictions are conditioned is similar to the attention mechanism (Sukhbaatar et al., 2015; Bahdanau et al., 2014) and might be incorporated into that training in the future."}, {"section_index": "12", "section_name": "5 CONCLUSION", "section_text": "In this paper we proposed and evaluated a new approach for building character level statistical language models based on a program that parameterizes the model. We design a language TChar for character level language modeling and synthesize a program in this language. We show that our model works especially well for structured data and is significantly more precise than prior work. We also demonstrate competitive results in the less structured task of modeling English text.

Expressing the language model as a program results in several advantages including easier interpretability, debugging and extensibility with deep semantic domain knowledge by simply incorporating a new instruction in the DSL. We believe that this development is an interesting result in bridging synthesis and machine learning with much potential for future research."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137-1155, 2003.

Stanley F. Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical report, Harvard University, 1998.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. ICLR, 2017.

Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.

Sumit Gulwani. Dimensions in program synthesis. In Proceedings of the 12th International ACM SIGPLAN Symposium on Principles and Practice of Declarative Programming, PPDP '10, pp. 13-24. ACM, 2010.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, June 2016. doi: 10.1109/CVPR.2016.90.

Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, 1991. ISSN 0893-6080.

Susmit Jha, Sumit Gulwani, Sanjit A. Seshia, and Ashish Tiwari. Oracle-guided component-based program synthesis. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1, ICSE '10, pp. 215-224, New York, NY, USA, 2010. ACM.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016.

Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. Visualizing and understanding neural models in NLP. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pp. 681-691, 2016. URL http://aclweb.org/anthology/N/N16/N16-1082.pdf.
Kyoung-Su Oh and Keechul Jung. GPU implementation of neural networks. Pattern Recognition, 37(6):1311-1314, 2004.

Veselin Raychev, Pavol Bielik, and Martin Vechev. Probabilistic model for code with decision trees. In Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2016, pp. 731-747. ACM, 2016a.

Armando Solar-Lezama. Program sketching. STTT, 15(5-6):475-495, 2013.

Armando Solar-Lezama, Liviu Tancau, Rastislav Bodik, Sanjit A. Seshia, and Vijay A. Saraswat. Combinatorial sketching for finite programs. In ASPLOS, pp. 404-415, 2006.

Yee Whye Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pp. 985-992, 2006."}, {"section_index": "14", "section_name": "APPENDIX", "section_text": "We provide three appendices. In Appendix A, we describe how to obtain a probabilistic model from an LMProgram. In Appendix B, we provide details on how TChar programs are synthesized from data. Finally, in Appendix C, we provide detailed formal semantics of TChar programs."}, {"section_index": "15", "section_name": "OBTAINING A PROBABILISTIC MODEL FROM TChar", "section_text": "We start by describing the program formulation used in the learning and then describe how a probability distribution is obtained from a program in TChar.

Dataset Given an input sentence s = x_1 : x_2 : ... : x_n, the training dataset is defined as D := {(x_t, x_{<t})}_{t=1}^n, where x_t is the character to be predicted at position t and x_{<t} are all the preceding characters seen in the input (x_{<1} is ε, the empty sequence).

Cost function Let r : ℘(charpairs) × TChar → R+ be a cost function. Here, charpairs = (char, char*) is the set containing all possible pairs of a character and a sequence of characters (char can be any single character). Conceptually, every element from charpairs is a character and all the characters that precede it in a text. Then, given a dataset D ⊆ charpairs and a learned program p ∈ TChar, the function r returns the cost of p on D as a non-negative real number - in our case the average bits-per-character (BPC) loss:

r(D, p) = -\frac{1}{n} \sum_{t=1}^{n} \log_2 P(x_t \mid x_{<t}, p)     (1)"}, {"section_index": "16", "section_name": "Problem Statement", "section_text": "p_{best} = \arg\min_{p \in TChar} r(D, p) + \lambda \cdot \Omega(p) ,

where Ω : TChar → R+ is a regularization term used to avoid over-fitting to the data by penalizing complex programs and λ ∈ R is a regularization constant. We instantiate Ω(p) to return the number of instructions in p. The objective of the learning is therefore to find a program that minimizes the cost achieved on the training dataset D such that the program is not overly complex (as controlled by the regularization).

Obtaining the Probability Distribution P(x_t | x_{<t}, p) We use P(x_t | x_{<t}, p) above to denote the probability of label x_t in a probability distribution specified by program p when executed on context x_{<t} at position t. As described in Section 2, this probability is obtained in two steps:

- Execute program p(t, x_{<t}) to obtain a program f ∈ LMProgram.
- Calculate the probability P_f(x_t | f(t, x_{<t})) using f.
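The two-step computation can be sketched directly; `p_select` and `estimators` below are illustrative stand-ins for an executed TChar program and its fitted sub-models, echoing the example of Fig. 2 further below.

```python
def prob_next(p_select, estimators, x, t):
    """P(x_t | x_<t, p) in two steps: execute the TChar program to select an
    LMProgram f, then query that model's distribution (a minimal sketch)."""
    f = p_select(t, x[:t])              # step 1: which specialized model?
    return estimators[f](t, x[:t])      # step 2: P_f(. | f(t, x_<t))

# toy p: a letter before the prediction point selects the digit model f1
p_select = lambda t, prefix: "f1" if prefix[-1:].isalpha() else "f2"
estimators = {
    "f1": lambda t, prefix: {"1": 0.5, "2": 0.5},   # placeholder distributions
    "f2": lambda t, prefix: {"a": 0.5, "b": 0.5},
}
print(prob_next(p_select, estimators, "a1a2b1b2", 3))   # after 'a' -> digit model
```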
Obtaining the Probability Distribution P_f(x_t | f(t, x_{<t})) As discussed in Section 2, a function f ∈ LMProgram is used to define a probability distribution. Let us show how we compute this distribution for both cases: first, when f ∈ SimpleProgram, and then, when f consists of several SimplePrograms connected via backoff.

Assume we are given a function f ∈ SimpleProgram and a training sample (x_t, x_{<t}), where x_t is the character to be predicted at position t and x_{<t} are all the preceding characters seen in the input (x_{<1} is ε). The probability of label x_t is denoted as P_f(x_t | f(t, x_{<t})), that is, we condition the prediction on the context obtained by executing program f from position t in input x_{<t}. The formal semantics of how programs p and f are executed are described in detail in Appendix C.1. In the remainder of this section we describe how the probability distribution P_f is obtained. We note that P(x_t | x_{<t}, p) is a valid probability distribution since p(t, x_{<t}) always computes a unique program f and the distributions P_f(x_t | f(t, x_{<t})) are valid probability distributions as defined below:

P_f(x_t \mid f(t, x_{<t})) = \frac{Count(f(t, x_{<t}) : x_t)}{Count(f(t, x_{<t}))}

where Count(f(t, x_{<t}) : x_t) denotes the number of times label x_t is seen in the training dataset after the context produced by f(t, x_{<t}). Similarly, Count(f(t, x_{<t})) denotes the number of times the context was seen regardless of the subsequent label. To deal with data sparseness we additionally adjust the obtained probability by using Witten-Bell interpolation smoothing (Witten & Bell, 1991).

Now let us consider the case when f consists of multiple SimplePrograms with backoff, i.e. f = f1 backoff d; f2, where f1 ∈ SimpleProgram and f2 ∈ LMProgram. Recall from Section 2 that we backoff to the next model if the probability of the most likely character according to the current model is less than the constant d.
More formally:

P_f(x_t \mid f(t, x_{<t}))_{f = f1\ backoff\ d;\ f2} = \begin{cases} P_{f1}(x_t \mid f1(t, x_{<t})) & \text{if } d \le \max_{y \in V} P_{f1}(y \mid f1(t, x_{<t})) \\ P_{f2}(x_t \mid f2(t, x_{<t})) & \text{otherwise} \end{cases}

This means that if f1 produces a probability distribution that has confidence greater than d about its best prediction, then f1 is used; otherwise a probability distribution based on f2 is used. When backoff is used, both probabilities are estimated from the same set of training samples (the dataset that would be used if there was no backoff).

From Programs to Probabilistic Models: an Illustrative Example We illustrate how programs in TChar are executed and a probability distribution is built using a simple example shown in Fig. 2. Consider a program p ∈ TChar containing two LMPrograms f1 and f2 that define two probability distributions P_{f1} and P_{f2} respectively.

The program p determines which one to use by inspecting the previous character in the input. Let the input sequence of characters be 'a1a2b1b2' where each character corresponds to one training sample. Given the training samples, we now execute p to determine which LMProgram to use, as shown in Fig. 2 (left). In our case this splits the dataset into two parts - one for predicting letters and one for numbers.

For numbers, the model based on f1 = LEFT LEFT WRITE_CHAR conditions the prediction on the second character to the left in the input. For letters, the model based on f2 = empty is simply an unconditional probability distribution since executing empty always produces the empty context. As can be seen, the context obtained from f1 already recognizes that the sequence of numbers is repeating. The context from f2 can however be further improved, especially upon seeing more training samples.

t   x_{<t}      x_t   p(t, x_{<t})
1   ε           a     f2
2   a           1     f1
3   a1          a     f2
4   a1a         2     f1
5   a1a2        b     f2
6   a1a2b       1     f1
7   a1a2b1      b     f2
8   a1a2b1b     2     f1

Figure 2: Illustration of a probabilistic model built on an input sequence 'a1a2b1b2' using program p. (Left) The dataset of samples specified by the input and the result of executing program p on each sample. (Top Right) The result of executing program f1 on the subset of samples selected by program p and the corresponding probability distribution P_{f1}. (Bottom Right) The result of executing the empty program f2 and its corresponding unconditioned probability distribution P_{f2}. We use the notation ε to describe an empty sequence.

In this section we describe how programs in our TChar language are synthesized."}, {"section_index": "17", "section_name": "B.1 LEARNING SIMPLEPROGRAMS", "section_text": "We next describe the approach to synthesize good programs f ∈ SimpleProgram such that they minimize the objective function:

f_{best} = \arg\min_{f \in SimpleProgram} -\sum_{t=1}^{n} \log P(x_t \mid f(t, x_{<t})) + \lambda\, \Omega(f)     (2)

We use a combination of three techniques to solve this optimization problem and find f_best - an exact enumeration, an approximate genetic programming search, and an MCMC search.

Enumerative Search We start with the simplest approach that enumerates all possible programs up to some instruction length. As the number of programs is exponential in the number of instructions, we enumerate only short programs with up to 5 instructions (not considering PREV_CHAR(c)) that contain a single Write instruction. The resulting programs serve as a starting population for a follow-up genetic programming search. The reason we omit the PREV_CHAR(c) instruction is that this instruction is parametrized by a character c that has a large range (of all possible characters in the training data). Considering all variations of this instruction would lead to a combinatorial explosion.

Genetic Programming Search The genetic programming search proceeds in several epochs where each epoch generates a new set of candidate programs by randomly mutating the programs in the current generation. Each candidate program is generated using the following procedure:

1. select a random program f from the current generation,
2. select a random position i in f, and
3. apply a mutation that either removes, inserts or replaces the instruction at position i with a randomly selected new instruction (not considering PREV_CHAR(c)).

These candidate programs are then scored with the objective function (2) and after each epoch a subset of them is retained for the next generation while the rest is discarded. The policy we use to discard programs is to randomly select two programs and discard the one with the worse score. We keep discarding programs until the generation has fewer than 25 candidate programs. Finally, we do not apply a cross-over operation in the genetic search procedure.

Markov Chain Monte Carlo Search (MCMC) Once a set of candidate programs is generated using a combination of enumerative search and genetic programming, we apply MCMC search to further improve the programs. This procedure is the only one that considers the PREV_CHAR(c) instruction (which has 108 and 212 variations for the Linux Kernel and Hutter Prize Wikipedia datasets respectively).

The synthesized program is a combination of several basic building blocks consisting of a few instructions. To discover a set of good building blocks, at the beginning of the synthesis we first build a probability distribution Γ that determines how likely a building block is to be the one with the best cost metric, as follows:

- consider all building blocks that consist of up to three Move and one Write instruction, B = {empty, Move}^3 × Write,
- score each building block b ∈ B on the full dataset D by calculating the bits-per-character (BPC) b_bpc as defined in (1) and the error rate b_error-rate (as usually defined) on dataset D,
- accept the building block with probability min(1.0, empty_bpc / b_bpc), where empty_bpc is the score for the unconditioned empty program. Note that for the BPC metric, lower is better. That is, if the program has better (lower) BPC than the empty program it is always accepted, and is otherwise accepted with probability empty_bpc / b_bpc,
- for an accepted building block b, set the score as Γ'(b) = 1.0 - b_error-rate, that is, the score is proportional to the number of samples in the dataset D that are classified correctly using building block b,
- set the probability with which a building block b ∈ B will be sampled by normalizing the distribution Γ', that is, Γ(b) = Γ'(b) / Σ_{b' ∈ B} Γ'(b').

Given the probability distribution Γ, we now perform random modifications of a candidate program p by appending or removing such blocks according to the distribution Γ in an MCMC procedure that does a random walk over the set of possible candidate programs. That is, at each step we are given a candidate program, and we sample a random piece from the distribution Γ to either randomly add to the candidate program or remove from it (if present). Then, we keep the updated program either if the score (2) of the modified candidate program improves, or sometimes we randomly keep it even if the score did not improve (in order to avoid local optima).
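One step of this random walk can be sketched as follows; the Metropolis-style acceptance test and the temperature are our assumptions, since the text only states that worse programs are sometimes kept.

```python
import math
import random

def mcmc_step(program, blocks, gamma, score, temperature=1.0):
    """One random-walk proposal over candidate programs: sample a building
    block from Gamma, toggle its presence in the program, and keep the
    proposal based on the score from Eq. (2), where lower is better
    (a minimal sketch)."""
    block = random.choices(blocks, weights=[gamma[b] for b in blocks])[0]
    if block in program:
        proposal = [b for b in program if b != block]    # remove the block
    else:
        proposal = program + [block]                     # append the block
    old, new = score(program), score(proposal)
    if new <= old or random.random() < math.exp((old - new) / temperature):
        return proposal
    return program
```

Here `blocks` is the list of accepted building blocks, `gamma` maps each block to its normalized probability Γ(b), and `score` evaluates the regularized loss of a candidate on the training data.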
Backoff Programs The synthesis procedure described above can be adapted to synthesize SimplePrograms with backoff using a simple greedy technique that synthesizes the backoff programs one by one. In our implementation we however synthesize only a single SimpleProgram and then backoff to n-grams as described in Section 3. We optimize the backoff threshold d ∈ (0, 1) using a grid search with a step size of 0.02 on a validation dataset."}, {"section_index": "18", "section_name": "B.2 LEARNING SWITCHPROGRAMS", "section_text": "Recall from Fig. 1 that the SwitchPrograms have the following syntax:

SwitchProgram ::= switch p_guard case v_1 : p_1 ... case v_n : p_n default : p_default

To synthesize a SwitchProgram we need to learn the following components: (i) the predicate in the form of a SimpleProgram, (ii) the branch targets v_1, v_2, ..., v_n that match the values obtained by executing the predicate, and (iii) a program for each target branch as well as a program that matches the default branch. This is a challenging task as the search space of possible predicates, branch targets and branch programs is huge (even infinite).

To efficiently learn a SwitchProgram we use a decision tree learning algorithm that we instantiate with the ID3+ algorithm from Raychev et al. (2016a). The main idea of the algorithm is to synthesize the program in smaller pieces, thus making the synthesis tractable in practice by using the same idea as in the ID3 decision tree learning algorithm. This synthesis procedure runs in two steps - a top-down pass that synthesizes branch predicates and branch targets, followed by a bottom-up pass that synthesizes branch programs and prunes unnecessary branches.

The top-down pass considers all branch programs to be an unconditioned probability distribution (which can be expressed as an empty SimpleProgram) and searches for the best predicates and branch targets that minimize our objective function (i.e., the regularized loss of the program on the training dataset). For a given predicate we consider the 32 most common values obtained by executing the predicate on the training data as possible branch targets. We further restrict the generated program to avoid over-fitting to the training data by requiring that each synthesized branch contains either more than 250 training data samples or 10% of the samples in the training dataset. Using this approach we search for a good predicate using the same techniques as described in Appendix B.1 and then apply the same procedure recursively for each program in the branch. The synthesis stops when no predicate is found that improves the loss over a program without branches. For more details and analysis of this procedure, we refer the reader to the work of Raychev et al. (2016a).
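The guard-selection step of the top-down pass can be sketched as follows; the interface of `loss` and the omission of the default branch and the regularizer are our simplifications.

```python
from collections import Counter

def best_guard(data, candidate_guards, loss, max_targets=32):
    """Pick the guard whose induced split of the data most reduces the loss,
    keeping the most common guard values as branch targets (a simplified
    sketch of one step of the top-down pass)."""
    best = None
    for guard in candidate_guards:
        values = Counter(guard(t, x) for (x, t) in data)
        targets = [v for v, _ in values.most_common(max_targets)]
        parts = {v: [(x, t) for (x, t) in data if guard(t, x) == v]
                 for v in targets}
        split_loss = sum(loss(part) for part in parts.values() if part)
        if best is None or split_loss < best[0]:
            best = (split_loss, guard, targets)
    return best   # recurse into the branches only if best[0] beats loss(data)
```

Here `data` is a list of (text, position) samples, each guard is a SimpleProgram evaluated as a function, and `loss` scores an unconditioned model on a subset of the data.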
The top-down pass considers all branch programs to be an unconditioned probability distribution (which can be expressed as an empty SimpleProgram) and searches for the best predicates and branch targets that minimize our objective function (i.e., the regularized loss of the program on the training dataset). For a given predicate we consider the 32 most common values obtained by executing the predicate on the training data as possible branch targets. We further restrict the generated program to avoid over-fitting to the training data by requiring that each synthesized branch contains either more than 250 training data samples or 10% of the samples in the training dataset. Using this approach we search for a good predicate using the same techniques as described in Appendix B.1, and then apply the same procedure recursively for each program in the branch. The synthesis stops when no predicate is found that improves the loss over a program without branches. For more details and analysis of this procedure, we refer the reader to the work of Raychev et al. (2016a).

The bottom-up pass then synthesizes approximately best programs for the individual branches by invoking the synthesis as described in Appendix B.1. Additionally, we prune branches with a higher loss compared to a single SimpleProgram trained on the same dataset.

This section provides the formal semantics of TChar.

We formally define TChar programs to operate on a state σ = (x, i, ctx, counts) ∈ States, where the domain States is defined as States = char* × ℕ × Context × Counts, where x ∈ char* is an input sequence of characters, i ∈ ℕ is a position in the input sequence, Context = (char ∪ ℕ)* is a list of values accumulated by executing Write instructions, and Counts : StateSwitch → ℕ is a mapping that contains a value denoting the current count for each StateSwitch program. Initially, the execution of a program p ∈ lang starts with the empty context ε ∈ Context and counts initialized to value 0 for every StateSwitch program, i.e., counts₀ = ∀sp ∈ StateSwitch . 0. For a program p ∈ TChar, an input string x, and a position i in x, we say that p computes a program m ∈ LMProgram, denoted as p(i, x) = m, iff there exists a sequence of transitions from configuration ⟨p, x, i, ε, counts_{i−1}⟩ to configuration ⟨m, x, i, ε, counts_i⟩. As usual, a configuration is simply a program coupled with a state. The small-step semantics of executing a TChar program are shown in Fig. 3. These rules describe how to move from one configuration to another configuration by executing instructions from the program p. We next discuss these semantics in detail."}, {"section_index": "19", "section_name": "C.1.1 SEMANTICS OF Write INSTRUCTIONS", "section_text": "The semantics of the Write instructions are described by the [WRITE] rule in Fig. 3. Each write accumulates a value c to the context ctx according to the function wr:

- wr(WRITE_CHAR, x, i) returns the character at position i in the input string x. If i is not within the bounds of x (i.e., i < 1 or i > len(x)), then a special unk character is returned.
- wr(WRITE_HASH, x, i) returns the hash of all characters seen between the current position i and the position of the latest Write instruction that was executed. More formally, let i_prev be the position of the previous write. Then wr(WRITE_HASH, x, i) = H(x, i, min(i + 16, i_prev)), where H : char* × ℕ × ℕ → ℕ is a hashing function that hashes the characters in the string from the given range of positions. The hashing function used in our implementation is defined as follows:

$$H(x, i, j) = \begin{cases} x_i & \text{if } i = j \\ H(x, i, j - 1) \cdot 137 + x_j & \text{if } i < j \\ \bot & \text{otherwise} \end{cases}$$

where we use ⊥ to denote that the hash function returns no value. This happens in case i > j (i.e., i > i_prev).
- wr(WRITE_DIST, x, i) returns the distance (i.e., the number of characters) between the current position and the position of the latest Write instruction. In our implementation we limit the return value to be at most 16, i.e., wr(WRITE_DIST, x, i) = min(16, |i − i_prev|).
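An iterative Python transcription of H (ours), using 1-based positions as in the formalization and `None` standing in for ⊥:

```python
def tchar_hash(x, i, j):
    """Rolling hash H(x, i, j) from Appendix C.1.1, written iteratively.

    Positions are 1-based; returns None (the "no value" case) when i > j.
    """
    if i > j:
        return None
    h = ord(x[i - 1])                # base case: H(x, i, i) = x_i
    for k in range(i + 1, j + 1):
        h = h * 137 + ord(x[k - 1])  # H(x, i, k) = H(x, i, k-1) * 137 + x_k
    return h
```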
x ∈ char*, i, n ∈ ℕ, ctx ∈ Context, s ∈ TChar, op ∈ {SwitchProgram, StateUpdate, StateSwitch}, op.guard ∈ SimpleProgram, op.cases : Context → TChar (a partial function), op.default ∈ TChar, counts : StateSwitch → ℕ, w ∈ Write, m ∈ Move.

[WRITE]            w ∈ Write, v = wr(w, x, i)  ⟹  ⟨w :: s, x, i, ctx, counts⟩ → ⟨s, x, i, ctx : v, counts⟩
[MOVE]             m ∈ Move, i′ = mv(m, x, i)  ⟹  ⟨m :: s, x, i, ctx, counts⟩ → ⟨s, x, i′, ctx, counts⟩
[SWITCH]           op ∈ SwitchProgram, ⟨op.guard, x, i, []⟩ →* ⟨ε, x, i′, ctx_guard⟩, ctx_guard ∈ dom(op.cases)  ⟹  ⟨op, x, i, ctx, counts⟩ → ⟨op.cases(ctx_guard), x, i, [], counts⟩
[SWITCH-DEF]       op ∈ SwitchProgram, ⟨op.guard, x, i, []⟩ →* ⟨ε, x, i′, ctx_guard⟩, ctx_guard ∉ dom(op.cases)  ⟹  ⟨op, x, i, ctx, counts⟩ → ⟨op.default, x, i, [], counts⟩
[STATEUPDATE]      op ∈ StateUpdate, ⟨op.guard, x, i, []⟩ →* ⟨ε, x, i′, ctx_guard⟩, ctx_guard ∈ dom(op.cases)  ⟹  ⟨op :: s, x, i, ctx, counts⟩ → ⟨op.cases(ctx_guard) :: s, x, i, [], counts⟩
[STATEUPDATE-DEF]  op ∈ StateUpdate, ⟨op.guard, x, i, []⟩ →* ⟨ε, x, i′, ctx_guard⟩, ctx_guard ∉ dom(op.cases)  ⟹  ⟨op :: s, x, i, ctx, counts⟩ → ⟨s, x, i, [], counts⟩
[UPDATE]           op1 ∈ {Inc, Dec}, n = update(op1, counts(op2)), counts′ = counts[op2 → n]  ⟹  ⟨op1 :: op2, x, i, ctx, counts⟩ → ⟨op2, x, i, [], counts′⟩
[STATESWITCH]      op ∈ StateSwitch, counts[op] ∈ dom(op.cases)  ⟹  ⟨op, x, i, ctx, counts⟩ → ⟨op.cases(counts(op)), x, i, [], counts⟩
[STATESWITCH-DEF]  op ∈ StateSwitch, counts[op] ∉ dom(op.cases)  ⟹  ⟨op, x, i, ctx, counts⟩ → ⟨op.default, x, i, [], counts⟩

Figure 3: TChar language small-step semantics. Each rule is of the type: TChar × States → TChar × States."}, {"section_index": "20", "section_name": "C.1.2 SEMANTICS OF Move INSTRUCTIONS", "section_text": "The semantics of Move instructions are described by the [MOVE] rule in Fig. 3. Each Move instruction changes the current position in the input by using the following function mv (a code sketch of mv follows the list):

- mv(LEFT, x, i) = max(1, i − 1), that is, we move to the position of the previous character in the input.
- mv(RIGHT, x, i) = min(len(x) − 1, i + 1), that is, we move to the position of the next character in the input. We use len(x) to denote the length of x.
- mv(PREV_CHAR, x, i) = i′, where i′ is the position of the most recent character with the same value as the character at position i, i.e., the maximal i′ such that x_{i′} = x_i and i′ < i. If no such character is found in x, a value −1 is returned.
- mv(PREV_CHAR(c), x, i) = i′, where i′ is the position of the most recent character with the same value as character c, i.e., the maximal i′ such that x_{i′} = c and i′ < i. If no such character c is found in x, a value −1 is returned.
- mv(PREV_POS, x, i) = i′, where i′ is the position of the most recent character for which executing program p results in the same f ∈ LMProgram, i.e., the maximal i′ such that p(i′, x_{<i′}) = p(i, x_{<i}) and i′ < i. This means that the two positions fall into the same branch of the program p. If no such i′ is found in x, a value −1 is returned. We note that two LMPrograms are considered equal if they have the same identity, and not only when they contain the same sequence of instructions (e.g., programs f1 and f2 in the example from Section A are considered different even if they contain the same sequence of instructions).
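A sketch of mv in Python (ours), with 1-based positions; the `same_branch` callback is a hypothetical stand-in for the PREV_POS branch-equality test p(i′, x_{<i′}) = p(i, x_{<i}):

```python
def mv(instr, x, i, arg=None, same_branch=None):
    """Position-update function mv from Appendix C.1.2 (1-based positions).

    `arg` carries the character parameter of PREV_CHAR(c); `same_branch`
    is an assumed callback j -> bool for the PREV_POS case.
    """
    if instr == 'LEFT':
        return max(1, i - 1)
    if instr == 'RIGHT':
        return min(len(x) - 1, i + 1)
    if instr in ('PREV_CHAR', 'PREV_CHAR_C', 'PREV_POS'):
        for j in range(i - 1, 0, -1):  # scan right-to-left for the maximal i' < i
            if instr == 'PREV_CHAR' and x[j - 1] == x[i - 1]:
                return j
            if instr == 'PREV_CHAR_C' and x[j - 1] == arg:
                return j
            if instr == 'PREV_POS' and same_branch(j):
                return j
        return -1  # no matching earlier position found
    raise ValueError(instr)
```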
"}, {"section_index": "21", "section_name": "C.1.3 SEMANTICS OF SwitchProgram", "section_text": "The semantics of SwitchProgram are described by the [SWITCH] and [SWITCH-DEF] rules. In both cases the guard of the SwitchProgram, denoted as op.guard ∈ SimpleProgram, is executed to obtain the context ctx_guard. The context is then matched against all the branch targets in op.cases, which is a partial function from contexts to TChar programs. If an exact match is found, that is, the function is defined for the given context ctx_guard (denoted as ctx_guard ∈ dom(op.cases)), then the corresponding program is selected for execution (rule [SWITCH]). If no match is found, the default program, denoted as op.default, is selected for execution (rule [SWITCH-DEF]).

In both cases, the execution continues with the empty context and with the original position i with which the SwitchProgram was called."}, {"section_index": "22", "section_name": "C.1.4 SEMANTICS OF StateProgram", "section_text": "The semantics of StateProgram are described by the rules [STATEUPDATE], [UPDATE] and [STATESWITCH]. In our work the state is represented as a set of counters associated with each SwitchProgram. First, the rule [STATEUPDATE] is used to execute a StateUpdate program, which determines how the counters should be updated. The execution of StateUpdate is similar to SwitchProgram and results in selecting one of the update operations INC, DEC or SKIP to be executed next.

The goal of the update instructions is to enable the program to count (e.g., opening and closing brackets, among others). Their semantics are described by the [UPDATE] rule, and each instruction computes the value of the updated counter as described by the following function update:

- update(INC, n) = n + 1 increments the value.
- update(DEC, n) = max(0, n − 1) decrements the value, bounded from below by the value 0.
- update(SKIP, n) = n keeps the value unchanged.

After the value of the counter for a given SwitchProgram is updated, then depending on its value the next program to execute is selected by executing the SwitchProgram as described by the rules [STATESWITCH] and [STATESWITCH-DEF]."}]
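A direct transcription of update in Python (ours), for reference:

```python
def update(op, n):
    """Counter-update semantics from Appendix C.1.4."""
    if op == 'INC':
        return n + 1
    if op == 'DEC':
        return max(0, n - 1)  # bounded from below by 0
    if op == 'SKIP':
        return n
    raise ValueError(op)
```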
rJ0JwFcex | [{"section_index": "0", "section_name": "NEURO-SYMBOLIC PROGRAM SYNTHESIS", "section_text": "Emilio Parisotto1,2, Abdel-rahman Mohamed1, Rishabh Singh1, Lihong Li1, Dengyong Zhou1, Pushmeet Kohli1

1Microsoft Research, USA  2Carnegie Mellon University, USA
eparisot@andrew.cmu.edu, {asamir,risin,lihongli,denzho,pkohli}@microsoft.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The act of programming, i.e., developing a procedure to accomplish a task, is a remarkable demonstration of the reasoning abilities of the human mind. Expectedly, Program Induction is considered one of the fundamental problems in Machine Learning and Artificial Intelligence. Recent progress on deep learning has led to the proposal of a number of promising neural architectures for this problem. Many of these models are inspired by computation modules (CPU, RAM, GPU) (Graves et al., 2014; Kurach et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015) or common data structures used in many algorithms, such as the stack (Joulin & Mikolov, 2015). A common thread in this line of work is to specify the atomic operations of the network in some differentiable form, allowing efficient end-to-end training of a neural controller, or to use reinforcement learning to make hard choices about which operation to perform. While these results are impressive, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). While some recently proposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016) do learn interpretable programs, they still need to learn a separate neural network model for each individual task.

Motivated by the need for model interpretability and scalability to multiple tasks, we address the problem of Program Synthesis. Program Synthesis, the problem of automatically constructing programs that are consistent with a given specification, has long been a subject of research in Computer Science (Biermann, 1978; Summers, 1977). This interest has been reinvigorated in recent years."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-
Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.

Most of the recently proposed methods for program synthesis operate by searching the space of programs in a Domain-Specific Language (DSL) instead of arbitrary Turing-complete languages. This hypothesis space of possible programs is huge (potentially infinite), and searching over it is a challenging problem. Several search techniques, including enumerative (Udupa et al., 2013), stochastic (Schkufza et al., 2013), constraint-based (Solar-Lezama, 2008), and version-space algebra based algorithms (Gulwani et al., 2012), have been developed to search over the space of programs in the DSL, which support different kinds of specifications (examples, partial programs, natural language, etc.) and domains. These techniques not only require significant engineering and research effort to develop carefully-designed heuristics for efficient search, but also have limited applicability and can only synthesize programs of limited sizes and types.

In this paper, we present a novel technique called Neuro-Symbolic Program Synthesis (NSPS) that learns to generate a program incrementally without the need for an explicit search. Once trained, NSPS can automatically construct computer programs that are consistent with any set of input-output examples provided at test time. Our method is based on two novel neural module architectures. The first module, called the cross correlation I/O network, produces a continuous representation of any given set of input-output examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the input-output examples, synthesizes a program by incrementally expanding partial programs. R3NN employs a tree-based neural architecture that sequentially constructs a parse tree by selecting which non-terminal symbol to expand, using rules from a context-free grammar (i.e., the DSL).

We demonstrate the efficacy of our method by applying it to the rich and complex domain of regular-expression-based syntactic string transformations, using a DSL based on the one used by FlashFill (Gulwani, 2011; Gulwani et al., 2012), a Programming-By-Example (PBE) system in Microsoft Excel 2013. Given a few input-output examples of strings, the task is to synthesize a program built on regular expressions to perform the desired string transformation. An example task that can be expressed in this DSL is shown in Figure 1, which also shows the DSL.

Our evaluation shows that NSPS is not only able to construct programs for known tasks from new input-output examples, but it is also able to construct completely new programs that it had not observed during training. Specifically, the proposed system is able to synthesize string transformation programs for 63% of tasks that it had not observed at training time, and for 94% of tasks when 100 program samples are taken from the model. Moreover, our system is able to learn 38% of 238 real-world FlashFill benchmarks.

To summarize, the key contributions of our work are:

- A novel Neuro-Symbolic program synthesis technique to encode neural search over the space of programs defined using a Domain-Specific Language (DSL).
- The R3NN model that encodes and expands partial programs in the DSL, where each node has a global representation of the program tree.
- A novel cross-correlation based neural architecture for learning continuous representations of sets of input-output examples.
- Evaluation of the NSPS approach on the complex domain of regular expression based string transformations.

In this section, we formally define the DSL-based program synthesis problem that we consider in this paper. Given a DSL L, we want to automatically construct a synthesis algorithm A such that given
a set of input-output examples, {(i1, o1), ..., (in, on)}, A returns a program P ∈ L that conforms to the input-output examples, i.e.,

$$\forall j : 1 \leq j \leq n, \; P(i_j) = o_j.$$

(a)
    Input v                 Output
 1  William Henry Charles   Charles, W.
 2  Michael Johnson         Johnson, M.
 3  Barack Rogers           Rogers, B.
 4  Martha D. Saunders      Saunders, M.
 5  Peter T Gates           Gates, P.

(b)
 String e       :=  Concat(f1, ..., fn)
 Substring f    :=  ConstStr(s) | SubStr(v, pl, pr)
 Position p     :=  (r, k, Dir) | ConstPos(k)
 Direction Dir  :=  Start | End
 Regex r        :=  s | T1 | ... | Tn

Figure 1: (a) An example FlashFill task for transforming names to last name with initials of the first name, and (b) the DSL for regular expression based string transformations.

The syntax and semantics of the DSL for string transformations are shown in Figure 1(b) and Figure 8, respectively. The DSL corresponds to a large subset of the FlashFill DSL (except conditionals), and allows for a richer class of substring operations than FlashFill. A DSL program takes as input a string v and returns an output string o. The top-level string expression e is a concatenation of a finite list of substring expressions f1, ..., fn. A substring expression f can either be a constant string s or a substring expression, which is defined using two position logics pl (left) and pr (right). A position logic corresponds to a symbolic expression that evaluates to an index in the string. A position logic p can either be a constant position k or a token match expression (r, k, Dir), which denotes the Start or the End of the kth match of token r in the input string v. A regex token can either be a constant string s or one of 8 regular expression tokens: p (ProperCase), C (CAPS), l (lowercase), d (Digits), α (Alphabets), αn (Alphanumeric), ^ (StartOfString), and $ (EndOfString). The semantics of the DSL programs is described in the appendix.

A DSL program for the name transformation task shown in Figure 1(a) that is consistent with the examples is: Concat(f1, ConstStr(", "), f2, ConstStr(".")), where f1 = SubStr(v, (" ", −1, End), ConstPos(−1)) and f2 = SubStr(v, ConstPos(0), ConstPos(1)). The program concatenates the following 4 strings: i) the substring between the end of the last whitespace and the end of the string, ii) the constant string ", ", iii) the first character of the input string, and iv) the constant string ".".
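To make the example concrete, the following sketch (ours, not the paper's implementation) evaluates this program on the Figure 1(a) inputs, with the position logic (" ", −1, End) resolved by hand using Python's `rfind`:

```python
def last_name_initials(v):
    """Evaluate Concat(f1, ConstStr(", "), f2, ConstStr(".")) from the example.

    f1 = SubStr(v, (" ", -1, End), ConstPos(-1)): substring from the end of the
         last whitespace match to the end of the string.
    f2 = SubStr(v, ConstPos(0), ConstPos(1)): the first character of the input.
    """
    f1 = v[v.rfind(" ") + 1:]   # end of the last " " match .. end of string
    f2 = v[0:1]
    return f1 + ", " + f2 + "."

assert last_name_initials("William Henry Charles") == "Charles, W."
assert last_name_initials("Peter T Gates") == "Gates, P."
```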
We now present an overview of our approach. Given a DSL L, we learn a generative model of programs in the DSL L that is conditioned on input-output examples to efficiently search for consistent programs. The workflow of our system is shown in Figure 2; it is trained end-to-end using a large training set of programs in the DSL together with their corresponding input-output examples. To generate a large training set, we uniformly sample programs from the DSL and then use a rule-based strategy to compute well-formed input strings. Given a program P (sampled from the DSL), the rule-based strategy generates input strings for the program P ensuring that the preconditions of P are met (i.e., P does not throw an exception on the input strings). It collects the preconditions of all Substring expressions present in the sampled program P and then generates inputs conforming to them. For example, let us assume the sampled program is SubStr(v, (CAPS, 2, Start), (" ", 3, Start)), which extracts the substring between the start of the 2nd capital letter and the start of the 3rd whitespace. The rule-based strategy would ensure that all the generated input strings consist of at least 2 capital letters and 3 whitespaces, in addition to other randomly generated characters. The corresponding output strings are obtained by running the programs on the input strings.

A DSL can be considered as a context-free grammar with a start symbol S and a set of non-terminals with corresponding expansion rules. The (partial) grammar derivations or trees correspond to (partial) programs. A naive way to perform a search over the programs in a DSL is to start from the start symbol S and then randomly choose non-terminals to expand with randomly chosen expansion rules until reaching a derivation with only terminals. We, instead, learn a generative model over partial derivations in the DSL that assigns probabilities to different non-terminals in a partial derivation and corresponding expansions, to guide the search for complete derivations.

[Figure 2 is a workflow diagram: in the training phase, a program sampler and input-generation rules produce (program, I/O example) pairs that train the I/O encoder and the R3NN; in the test phase, the trained I/O encoder and R3NN synthesize a learnt program from new I/O examples. The diagram itself is not reproduced here.]

Figure 2: An overview of the training and test workflow of our synthesis approach.

Our generative model uses a Recursive-Reverse-Recursive Neural Network (R3NN) to encode partial trees (derivations) in L, where each node in the partial tree encodes global information about every other node in the tree. The model assigns a vector representation to every symbol and every expansion rule in the grammar. Given a partial tree, the model first assigns a vector representation to each leaf node, and then performs a recursive pass going up the tree to assign a global tree representation to the root. It then performs a reverse-recursive pass starting from the root to assign a global tree representation to each node in the tree.

The generative process is conditioned on a set of input-output examples to learn a program that is consistent with this set of examples.
We experiment with multiple input-output encoders, including an LSTM encoder that concatenates the hidden vectors of two deep bidirectional LSTM networks for the input and output strings in the examples, and a Cross Correlation encoder that computes the cross correlation between the LSTM tensor representations of the input and output strings in the examples. This vector is then used as an additional input in the R3NN model to condition the generative model."}, {"section_index": "3", "section_name": "TREE-STRUCTURED GENERATION MODEL", "section_text": "We define a program t steps into construction as a partial program tree (PPT) (see Figure 3 for a visual depiction). A PPT has two types of nodes: leaf (symbol) nodes and inner non-leaf (rule) nodes. A leaf node represents a symbol, whether non-terminal or terminal. An inner non-leaf node represents a particular production rule of the DSL, where the number of children of the non-leaf node is equivalent to the arity of the RHS of the rule it represents. A PPT is called a program tree (PT) whenever all the leaves of the tree are terminal symbols. Such a tree represents a completed program under the DSL and can be executed. We define an expansion as the valid application of a specific production rule (e -> e op2 e) to a specific non-terminal leaf node within a PPT (a leaf with symbol e). We refer to the specific production rule that an expansion is derived from as the expansion type. It can be seen that if there exist two leaf nodes (l1 and l2) with the same symbol, then for every expansion specific to l1 there exists an expansion specific to l2 with the same type."}, {"section_index": "4", "section_name": "4.1 RECURSIVE-REVERSE-RECURSIVE NEURAL NETWORK", "section_text": "In order to define a generation model over PPTs, we need an efficient way of assigning probabilities to every valid expansion in the current PPT. A valid expansion has two components: first, the production rule used, and second, the position of the expanded leaf node relative to every other node in the tree. To account for the first component, a separate distributed representation for each production rule is maintained. The second component is handled using an architecture where the forward propagation resembles belief propagation on trees, allowing a notion of global tree state at every node within the tree. A given expansion probability is then calculated as being proportional to the inner product between the production rule representation and the global-tree representation of the leaf-level non-terminal node. We now describe the design of this architecture in more detail.

The R3NN has the following parameters for the grammar described by a DSL (see Figure 3):

1. For every symbol s ∈ S, an M-dimensional representation φ(s) ∈ R^M.
2. For every production rule r ∈ R, an M-dimensional representation ω(r) ∈ R^M.
3. For every production rule r ∈ R, a deep neural network f_r which takes as input a vector x ∈ R^{Q·M} (with Q the number of symbols on the RHS of r) and outputs a vector y ∈ R^M. Therefore, the production-rule network f_r takes as input a concatenation of the distributed representations of each of its RHS symbols and produces a distributed representation for the LHS symbol.
4. For every production rule r ∈ R, an additional deep neural network g_r which takes as input a vector x′ ∈ R^M and outputs a vector y′ ∈ R^{Q·M}. We can think of g_r as a reverse production-rule network that takes as input a vector representation of the LHS and produces a concatenation of the distributed representations of each of the rule's RHS symbols.

[Figure 3 depicts the two passes on an example tree for the rule e -> e op2 e: in the recursive pass (a), each non-leaf representation is computed from its children, e.g. φ(n2) = σ(W(e -> e op2 e)[φ(n1), φ(op2), φ(e)]); in the reverse-recursive pass (b), the children representations φ′(·) are produced from the parent via G(e -> e op2 e). The diagram itself is not reproduced here.]

Figure 3: (a) The initial recursive pass of the R3NN. (b) The reverse-recursive pass of the R3NN, where the input is the output of the previous recursive pass.

Let E be the set of all valid expansions in a PPT T, let L be the current leaf nodes of T and N be the current non-leaf (rule) nodes of T.
Let S(l) be the symbol of leaf l ∈ L and R(n) represent the production rule of non-leaf node n ∈ N.

To compute the probability distribution over the set E, the R3NN first computes a distributed representation for each leaf node that contains global tree information. To accomplish this, for every leaf node l ∈ L in the tree we retrieve its distributed representation φ(S(l)). We now do a standard recursive bottom-to-top, RHS -> LHS pass on the network, by going up the tree and applying f_{R(n)} for every non-leaf node n ∈ N on its RHS node representations (see Figure 3(a)). These networks f_{R(n)} produce a node representation which is input into the parent's rule network, and so on, until we reach the root node.

Once at the root node, we effectively have a fixed-dimensionality global tree representation φ(root) for the start symbol. The problem is that this representation has lost any notion of tree position. To solve this problem, the R3NN now does what is effectively a reverse-recursive pass, which starts at the root node with φ(root) as input and moves towards the leaf nodes (see Figure 3(b)).

More concretely, we start with the root node representation φ(root) and use that as input into the rule network g_{R(root)}, where R(root) is the production rule that is applied to the start symbol in T. This produces a representation φ′(c) for each RHS node c of R(root). If c is a non-leaf node, we iteratively apply this procedure to c, i.e., process φ′(c) using g_{R(c)} to get representations φ′(cc) for every RHS node cc of R(c), etc. If c is a leaf node, we now have a leaf representation φ′(c) which has an information path to φ(root) and thus to every other leaf node in the tree. Once the reverse-recursive process is complete, we have a distributed representation φ′(l) for every leaf node l which contains global tree information. While φ(l1) and φ(l2) could be equal for leaf nodes which have the same symbol type, φ′(l1) and φ′(l2) will not be equal even if they have the same symbol type, because they are at different positions in the tree.
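To make the two passes concrete, here is a self-contained toy sketch (ours, not the authors' code) with a single production rule of arity 3; single linear layers with tanh stand in for the deep networks f_r and g_r, and the final lines score leaf expansions against a rule representation ω as described in the next subsection.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8  # dimensionality of symbol and rule representations

# One toy rule `e -> e op2 e` (arity Q = 3): phi holds the symbol embeddings,
# W and G stand in for the deep networks f_r (RHS -> LHS) and g_r (LHS -> RHS).
phi = {'e': rng.normal(size=M), 'op2': rng.normal(size=M)}
W = rng.normal(size=(M, 3 * M))
G = rng.normal(size=(3 * M, M))
omega = rng.normal(size=M)  # representation of the expansion's production rule

class Node:
    def __init__(self, symbol, children=()):
        self.symbol, self.children = symbol, list(children)
        self.up = self.down = None  # phi(.) and phi'(.) from the two passes

def recursive(node):
    """Bottom-up pass: leaves use their symbol embedding, inner nodes apply f_r."""
    if not node.children:
        node.up = phi[node.symbol]
    else:
        node.up = np.tanh(W @ np.concatenate([recursive(c) for c in node.children]))
    return node.up

def reverse_recursive(node, rep):
    """Top-down pass: distribute the parent's representation to the RHS via g_r."""
    node.down = rep
    if node.children:
        parts = np.split(np.tanh(G @ rep), len(node.children))
        for child, part in zip(node.children, parts):
            reverse_recursive(child, part)

root = Node('e', [Node('e'), Node('op2'), Node('e')])
reverse_recursive(root, recursive(root))

# Expansion scores and probabilities for the non-terminal leaves (softmax over z_e).
z = np.array([leaf.down @ omega for leaf in root.children if leaf.symbol == 'e'])
probs = np.exp(z) / np.exp(z).sum()
```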
"}, {"section_index": "5", "section_name": "4.1.2 EXPANSION PROBABILITIES", "section_text": "Given the global leaf representations φ′(l), we can now straightforwardly acquire scores for each e ∈ E. For expansion e, let e.r be the expansion type (the production rule r ∈ R that e applies) and let e.l be the leaf node l that e.r is applied to. The score of an expansion is calculated as z_e = φ′(e.l) · ω(e.r). The probability of expansion e is simply the exponentiated, normalized score:

$$\pi(e) = \frac{e^{z_e}}{\sum_{e' \in E} e^{z_{e'}}}.$$

An additional improvement that was found to help was to add a bidirectional LSTM (BLSTM) to process the global leaf representations right before calculating the scores. To do this, we first order the global leaf representations sequentially from the left-most leaf node to the right-most leaf node. We then treat each leaf node as a time step for a BLSTM to process. This provides a sort of skip connection between leaf nodes, which potentially reduces the path length that information needs to travel between leaf nodes in the tree. The BLSTM hidden states are then used in the score calculation rather than the leaves themselves.

The R3NN can be seen as an extension and combination of several previous tree-based models, which were mainly developed in the context of natural language processing (Le & Zuidema, 2014; Paulus et al., 2014; Irsoy & Cardie, 2013)."}, {"section_index": "6", "section_name": "CONDITIONING WITH INPUT/OUTPUT EXAMPLES", "section_text": "Now that we have defined a generation process over tree-structured programs, we need a way of conditioning this generation process on a set of input/output examples. The set of input/output examples provides a nearly complete specification for the desired output program, and so a good encoding of the examples is crucial to the success of our program generator. For the most part, this example encoding needs to be domain-specific, since different DSLs have different inputs (some may operate over integers, some over strings, etc.). Therefore, in our case, we use an encoding adapted to the input-output strings that our DSL operates over. We also investigate different ways of conditioning program search on the learnt example input-output encodings."}, {"section_index": "7", "section_name": "5.1 ENCODING INPUT/OUTPUT EXAMPLES", "section_text": "There are two types of information that string manipulation programs need to extract from input-output examples: 1) constant strings, such as "@domain.com", which appear in all output examples; 2) substring indices in the input, where the index might be further defined by a regular expression. These indices determine which parts of the input are also present in the output. To simplify the DSL, we assume that there is a fixed finite universe of possible constant strings that could appear in programs. Therefore we focus on extracting the second type of information, the substring indices.

In earlier hand-engineered systems such as FlashFill, this information was extracted from the input-output strings by running the Longest Common Substring algorithm, a dynamic programming algorithm that efficiently finds matching substrings in string pairs. To extract substrings, FlashFill runs LCS on every input-output string pair in the I/O set to get a set of substring candidates.
It then takes the entire set of substring candidates and simply tries every possible regex and constant index that can be used at substring boundaries, exhaustively searching for the one which is the most "general", where generality is specified by hand-engineered heuristics.

In contrast to these previous methods, instead of hand-designing a complicated algorithm to extract regex-based substrings, we develop neural network based architectures that are capable of learning to extract and produce continuous representations of the likely regular expressions given I/O examples."}, {"section_index": "8", "section_name": "5.1.1 BASELINE LSTM ENCODER", "section_text": "Our first I/O encoding network involves running two separate deep bidirectional LSTM networks for processing the input and the output string in each example pair. For each pair, it then concatenates the topmost hidden representation at every time step to produce a 4HT-dimensional feature vector per I/O pair, where T is the maximum string length for any input or output string, and H is the topmost LSTM hidden dimension.

We then concatenate the encoding vectors across all I/O pairs to get a vector representation of the entire I/O set. This encoding is conceptually straightforward and has very little prior knowledge about what operations are being performed over the strings, i.e., substring, constant, etc., which might make it difficult to discover substring indices, especially the ones based on regular expressions."}, {"section_index": "9", "section_name": "5.1.2 CROSS CORRELATION ENCODER", "section_text": "To help the model discover input substrings that are copied to the output, we designed a novel I/O example encoder that computes the cross correlation between each input and output example representation. We used the two output tensors of the LSTM encoder (discussed above) as inputs to this encoder. For each example pair, we first slide the output feature block over the input feature block and compute the dot product between the respective position representations. Then, we sum over all overlapping time steps. The features of all pairs are then concatenated to form a 2*(T−1)-dimensional vector encoding for all example pairs. There are 2*(T−1) possible alignments in total between the input and output feature blocks. An illustration of the cross-correlation encoder is shown in Figure 9. We also designed the following variants of this encoder.

Diffused Cross Correlation Encoder: This encoder is identical to the Cross Correlation encoder, except that instead of summing over overlapping time steps after the element-wise dot product, we simply concatenate the vectors corresponding to all time steps, resulting in a final representation that contains 2*(T−1)*T features for each example pair.

LSTM-Sum Cross Correlation Encoder: In this variant of the Cross Correlation encoder, instead of doing an element-wise dot product, we run a bidirectional LSTM over the concatenated feature blocks of each alignment. We represent each alignment by the LSTM hidden representation of the final time step, leading to a total of 2*H*2*(T−1) features for each example pair.

Augmented Diffused Cross Correlation Encoder: For this encoder, the output of each character position of the Diffused Cross Correlation encoder is combined with the character embedding at this position; then a basic LSTM encoder is run over the combined features to extract a 4*H-dimensional vector for both the input and the output streams. The LSTM encoder output is then concatenated with the output of the Diffused Cross Correlation encoder, forming a (4*H + T*(T−1))-dimensional feature vector for each example pair.
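The following toy NumPy sketch (ours) illustrates the summed cross-correlation pooling for one example pair; the (T, H) feature shapes and the exact set of alignments are assumptions, since the paper does not fully pin them down.

```python
import numpy as np

def cross_correlation_encoding(inp_feats, out_feats):
    """Toy summed cross-correlation pooling for one I/O pair.

    inp_feats, out_feats: (T, H) arrays of per-time-step LSTM features for the
    input and the output string. For each alignment obtained by sliding the
    output block over the input block, the dot products of the overlapping
    time steps are summed into one scalar, yielding a 2*(T-1)-dimensional
    encoding (we take the non-zero shifts -(T-1), ..., -1, 1, ..., T-1).
    """
    T = inp_feats.shape[0]
    feats = []
    for shift in list(range(-(T - 1), 0)) + list(range(1, T)):
        lo, hi = max(0, shift), min(T, T + shift)
        # input positions [lo, hi) overlap output positions [lo-shift, hi-shift)
        overlap = np.sum(inp_feats[lo:hi] * out_feats[lo - shift:hi - shift])
        feats.append(overlap)
    return np.array(feats)  # one scalar feature per alignment
```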
Once the I/O example encodings have been computed, we can use them to perform conditional generation of the program tree using the R3NN model. There are a number of ways in which the PPT generation model can be conditioned using the I/O example encodings, depending on where the I/O example information is inserted in the R3NN model. We investigated three locations to inject example encodings:

1) Pre-conditioning: where example encodings are concatenated to the encoding of each tree leaf, and then passed to a conditioning network before the bottom-up recursive pass over the program tree. The conditioning network can be either a multi-layer feedforward network, or a bidirectional LSTM network running over the tree leaves. Running an LSTM over the tree leaves allows the model to learn more about the relative position of each leaf node in the tree.

2) Post-conditioning: After the reverse-recursive pass, example encodings are concatenated to the updated representation of each tree leaf and then fed to a conditioning network before computing the expansion scores.

3) Root-conditioning: After the recursive pass over the tree, the root encoding is concatenated to the example encodings and passed to a conditioning network. The updated root representation is then used to drive the reverse-recursive pass.

Empirically, pre-conditioning worked better than either root- or post-conditioning. In addition, conditioning at all 3 places simultaneously did not cause a significant improvement over just pre-conditioning. Therefore, for the experimental section, we report models which only use pre-conditioning.

In order to evaluate and compare variants of the previously described models, we generate a dataset randomly from the DSL. To do so, we first enumerate all possible programs under the DSL up to a specific number of instructions, which are then partitioned into training, validation and test sets. In order to have a tractable number of programs, we limited the maximum number of instructions for programs to be 13. Length 13 programs are important for this specific DSL because all larger programs can be written as compositions of sub-programs of length at most 13. The semantics of length 13 programs therefore constitute the "atoms" of this particular DSL.

In testing our model, there are two different categories of generalization. The first is input/output generalization, where we are given a new set of input/output examples as well as a program with a specific tree that we have seen during training. This represents the model's capacity to be applied to new data. The second category is program generalization, where we are given a previously unseen program tree in addition to unseen input/output examples. Therefore the model needs to have a sufficient understanding of the semantics of the DSL that it can construct novel combinations of operations. For all reported results, training sets correspond to the first type of generalization, since we have seen the program tree but not the input/output pairs. Test sets represent the second type of generalization, as they are trees which have not been seen before, on input/output pairs that have also not been seen before.

In this section, we compare several different variants of our model.
We first evaluate the effect of each of the previously described input/output encoders. We then evaluate the R3NN model against a simple recurrent model called io2seq, which is basically an LSTM that takes as input the input/output conditioning vector and outputs a sequence of DSL symbols that represents a linearized program tree. Finally, we report the results of the best model on the length 13 training and testing sets, as well as on a set of 238 benchmark functions.

For training the R3NN, two hyperparameters that were crucial for stabilizing training were the use of hyperbolic tangent activation functions in both the R3NN and the cross-correlation I/O encoders (other activations such as ReLU more consistently diverged during our initial experiments) and the use of minibatches of length 8. Additionally, for all results, the program tree generation is conditioned on a set of 10 input/output string pairs. We used ADAM (Kingma & Ba, 2014) to optimize the networks with a learning rate of 0.001. Network weights used the default torch initializations.

Due to the difficulty of batching tree-based neural networks, since each sample in a batch has a potentially different tree structure, we needed to do batching sequentially. Therefore, for each minibatch of size N, we accumulated the gradients for each sample. After all N sample gradients were accumulated, we updated the parameters and reset the accumulated gradients. Due to this sequential processing, in order to train models in a reasonable time, we limited our batch sizes to between 8-12. Despite the computational inefficiency, batching was critical to successfully train an R3NN, as online learning often caused the network to diverge.

For each latent function and set of input/output examples that we test on, we report whether we had a success after sampling 100 functions from the model and testing all 100 to see if one of these functions is equivalent to the latent function. Here we consider two functions to be equivalent with respect to a specific input/output example set if the functions output the same strings when run on the inputs. Under this definition, two functions can have a different set of operations but still be equivalent with respect to a specific input-output set.

We restricted the maximum size of training programs to be 13 because of two computational considerations. As described earlier, one difficulty was in batching tree-based neural networks of different structure, and the computational cost of batching increases with the size of the program trees. The second issue is that valid I/O strings for programs often grow with the program length, in the sense that for programs of length 40 a minimal valid I/O string will typically be much longer than a minimal valid I/O string for length 20 programs. For example, for a program such as (Concat (ConstStr "longstring") (Concat (ConstStr "longstring") (Concat (ConstStr "longstring") ...))), the valid output string would be "longstringlongstringlongstring...", which could be many hundreds of characters long. Because of limited GPU memory, the I/O encoder models can quickly run out of memory.

Table 1: The effect of different input/output encoders on accuracy. Each result used 100 samples. There is almost no generalization error in the results.

Table 2: Testing the I/O-vector-to-sequence model. Each result used 100 samples.
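As a sketch of the sequential gradient accumulation described above (ours, not the original Torch code), assuming a PyTorch-style `model` that returns a scalar loss tensor per sample:

```python
def train_minibatch(model, optimizer, batch):
    """Sequential tree batching: accumulate gradients one sample at a time
    (since each program tree has a different shape), then apply one update."""
    optimizer.zero_grad()
    total = 0.0
    for sample in batch:          # each sample is a (program tree, I/O set) pair
        loss = model(sample)      # forward pass on one tree
        loss.backward()           # gradients accumulate across the whole batch
        total += float(loss)
    optimizer.step()              # single parameter update for the batch
    return total / len(batch)
```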
In this section, we evaluate the effect of several different input/output example encoders. To control for the effect of the tree model, all results here used an R3NN with fixed hyperparameters to generate the program tree. Table 1 shows the performance of several of these input/output example encoders. We can see that the summed cross-correlation encoder did not perform well, which can be due to the fact that the sum destroys positional information that might be useful for determining specific substring indices. The LSTM-sum and the augmented diffused cross-correlation models did the best. Surprisingly, the LSTM encoder was capable of finding nearly 88% of all programs without having any prior knowledge explicitly built into the architecture. We use 100 samples for evaluating the Train and Test sets. The training performance is sometimes slightly lower because there are close to 5 million training programs but we only look at less than 2 million of these programs during training. We sample a subset of only 1000 training programs from the 5 million program set to report the training results in the tables. The test sets also consist of 1000 programs."}, {"section_index": "10", "section_name": "6.3 IO2SEQ", "section_text": "In this section, we motivate the use of the R3NN by testing whether a simpler model can also be used to generate programs. The io2seq model is an LSTM whose initial hidden and cell states are a function of the input/output encoding vector. The io2seq model then generates a linearized tree of a program symbol-by-symbol. An example of what a linearized program tree looks like is (s(e(f(ConstStr "@")ConstStr)f)e)s, which represents the program tree that returns the constant string "@". Predicting a linearized tree using an LSTM was also done in the context of parsing (Vinyals et al., 2015). For the io2seq model, we used the LSTM-sum cross-correlation I/O conditioning model.

The results in Table 2 show that the performance of the io2seq model at 100 samples per latent test function is far worse than the R3NN, at around 42% versus 91%, respectively. The reason for that could be that the io2seq model needs to perform far more decisions than the R3NN, since the io2seq model has to predict the parentheses symbols that determine at which level of the tree a particular symbol is. For example, the io2seq model requires on the order of 100 decisions for length 13 programs, while the R3NN requires no more than 13."}, {"section_index": "11", "section_name": "6.4 EFFECT OF SAMPLING MULTIPLE PROGRAMS", "section_text": "For the best R3NN model that we trained, we also evaluated the effect that a different number of samples per latent function had on performance. The results are shown in Table 3. The increase of the model's performance as the sample size increases hints that the model has a notion of what type of program satisfies a given I/O pair, but it might not be that certain about the details, such as which regex to use, etc. By 300 samples, the model is nearing perfect accuracy on the test sets.

  Sampling     Train   Test
  1-best       60%     63%
  1-sample     56%     57%
  10-sample    81%     79%
  50-sample    91%     89%
  100-sample   94%     94%
  300-sample   97%     97%

Table 3: The effect of sampling multiple programs on accuracy.
1-best is deterministically choosing the expansion with the highest probability at each step.

[Figure 4 is a line plot of train and test accuracy (roughly 30-60%) against the number of I/O examples (1-10) used to train the encoder; the plot itself is not reproduced here.]

We evaluate the effect of varying the number of input-output examples used to train the input-output encoders. The 1-best accuracy on train and test data for models trained for 74 epochs is shown in Figure 4. As expected, the accuracy increases with the number of input-output examples, since more examples add more information to the encoder and constrain the space of consistent programs in the DSL.

Figure 4: The train and test accuracies for models trained with different numbers of input-output examples."}, {"section_index": "12", "section_name": "6.6 FLASHFILL BENCHMARKS", "section_text": "We also evaluate our learnt models on 238 real-world FlashFill benchmarks obtained from the Microsoft Excel team and online help-forums. These benchmarks involve string manipulation tasks described using input-output examples. We evaluate two models -- one with a cross correlation encoder trained on 5 input-output examples and another trained on 10 input-output examples. Both models were trained on randomly sampled programs from the DSL up to size 13 with randomly generated input-output examples.

The distribution of the size of the smallest DSL programs needed to solve the benchmark tasks is shown in Figure 5(a); it varies from 4 to 63. The figure also shows the number of benchmarks for which our model was able to learn the program using 5 input-output examples, using samples of the top-2000 learnt programs. In total, the model is able to learn programs for 91 tasks (38.2%). Since the model was trained for programs up to size 13, it is not surprising that it is not able to solve tasks that need a larger program size. There are 110 FlashFill benchmarks that require programs up to size 13, out of which the model is able to solve 82.7%.

The effect of sampling multiple learnt programs instead of only the top program is shown in Figure 5(b). With only 10 samples, the model can already learn about 13% of the benchmarks. We observe a steady increase in performance up to about 2000 samples, after which we do not observe any significant improvement. Since there are more than 2 million programs of length 11 alone in the DSL, enumerative techniques with uniform search do not scale well (Alur et al., 2015).

We also evaluate a model that is learnt with 10 input-output examples per benchmark. This model can only learn programs for about 29% of the FlashFill benchmarks. Since the FlashFill benchmarks contained only 5 input-output examples for each task, to run the model that took 10 examples as input, we duplicated the I/O examples. Our models are trained on the synthetic training dataset
that is generated uniformly from the DSL. Because of the discrepancy between the training data distribution (uniform) and the benchmark task data distribution, the model with 10 input/output examples might not perform the best on the FlashFill benchmark distribution, even though it performs better on the synthetic data distribution (on which it is trained), as shown in Figure 4.

[Figure 5(a) is a histogram of the number of FlashFill benchmarks against the size of the smallest program needed to solve them (sizes 4 to 63), with bars for the total and for the solved benchmarks; the plot itself is not reproduced here. Figure 5(b) tabulates the effect of sampling:]

  Sampling   Solved Benchmarks
  10         13%
  50         21%
  100        23%
  200        29%
  500        33%
  1000       34%
  2000       38%
  5000       38%

Figure 5: (a) The distribution of the sizes of the programs needed to solve FlashFill tasks and the performance of our model, (b) the effect of sampling when trying the top-k learnt programs.

  Input v        Output         |  Input v     Output  |  Input v         Output
  [CPT-00350     [CPT-00350]    |  732606129   0x73    |  John Doyle      John D.
  [CPT-00340]    [CPT-00340]    |  430257526   0x43    |  Matt Walters    Matt W.
  [CPT-114563    [CPT-114563]   |  444004480   0x44    |  Jody Foster     Jody F.
  [CPT-1AB02     [CPT-1AB02]    |  371255254   0x37    |  Angela Lindsay  Angela L.
  [CPT-00360     [CPT-00360]    |  635272676   0x63    |  Maria Schulte   Maria S.
       (a)                      |       (b)            |        (c)

Figure 6: Some example solved benchmarks: (a) cleaning up medical codes with closing brackets, (b) generating hex numbers with the first two digits, (c) transforming names to first name and last initial.

Our model is able to solve the majority of FlashFill benchmarks that require learning programs with up to 3 Concat operations. We now describe a few of these benchmarks, also shown in Figure 6. An Excel user wanted to clean a set of medical billing records by adding a missing "]" to medical codes, as shown in Figure 6(a). Our system learns the following program given these 5 input-output examples: Concat(SubStr(v, ConstPos(0), (d, -1, End)), ConstStr("]")). The program concatenates the substring between the start of the input string and the position of the last match of the digit regular expression with the constant string "]". Another task, which required the user to transform some numbers into a hex format, is shown in Figure 6(b). Our system learns the following program: Concat(ConstStr("0x"), SubStr(v, ConstPos(0), ConstPos(2))). For some benchmarks with long input strings, it is still able to learn regular expressions to extract the desired substring, e.g., it learns a program to extract "NancyF" from the string "123456789,freehafer,drew,nancy,19700101,11/1/2007,NancyF@north.com,1230102,123 1st Avenue,Seattle,wa,09999".

Our system is currently not able to learn programs for benchmarks that require 4 or more Concat operations. Two such benchmarks are shown in Figure 7. The task of combining names in Figure 7(a) requires 6 Concat arguments, whereas the phone number transformation task in Figure 7(b) requires 5 Concat arguments. This is mainly because of scalability issues in training with programs of larger size. There are also a few interesting benchmarks where the R3NN model gets very close to learning the desired program. For example, for the task "Bill Gates" -> "Mr. Bill Gates", it learns a program that generates "Mr.Bill Gates" (without the whitespace), and for the task "617-444-5454" -> "(617) 444-5454", it learns a program that generates the string "(617 444-5454".

     Input v            Output                   |     Input v          Output
  1  John James Paul    John, James, and Paul.   |  1  (425) 221 6767   425-221-6767
  2  Tom Mike Bill      Tom, Mike, and Bill.     |  2  206.225.1298     206-225-1298
  3  Marie Nina John    Marie, Nina, and John.   |  3  617-224-9874     617-224-9874
  4  Reggie Anna Adam   Reggie, Anna, and Adam.  |  4  425.118.9281     425-118-9281
            (a)                                  |            (b)

Figure 7: Some unsolved benchmarks: (a) combining names with different delimiters, (b) transforming phone numbers to a consistent format."}, {"section_index": "13", "section_name": "7 RELATED WORK", "section_text": "We have seen a renewed interest in recent years in the area of Program Induction and Synthesis.

In the machine learning community, a number of promising neural architectures have been proposed to perform program induction. These methods have employed architectures inspired by computation modules (Turing Machines, RAM) (Graves et al., 2014; Kurach et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015) or common data structures such as stacks used in many algorithms (Joulin & Mikolov, 2015). These approaches represent the atomic operations of the network in a differentiable form, which allows for efficient end-to-end training of a neural controller. However, unlike our approach that learns comprehensible complete programs, many of these approaches learn only the program behavior (i.e., they produce desired outputs on new input data). Some recently proposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016) do learn interpretable programs, but these techniques require learning a separate neural network model for each individual task, which is undesirable in many synthesis settings where we would like to learn programs in real-time for a large number of tasks. Liang et al. (2010) restrict the problem space with a probabilistic context-free grammar and introduce a new representation of programs based on combinatory logic, which allows for sharing sub-programs across multiple tasks. They then take a hierarchical Bayesian approach to learn frequently occurring substructures of programs. Our approach, instead, uses neural architectures to condition the search space of programs, and does not require the additional step of representing the program space using combinatory logic to allow sharing.

In this paper, we consider input-output example based specification over the hypothesis space defined by a DSL of string transformations, similar to that of FlashFill (without conditionals) (Gulwani, 2011).
The key difference of our approach over previous techniques is that our system is trained completely in an end-to-end fashion, while previous techniques require significant manual effort to design heuristics for efficient search. There is some work on guiding the program search using learnt clues that suggest likely DSL expansions, but the clues are learnt over hand-coded textual features of examples (Menon et al., 2013). Moreover, their DSL consists of compositions of about 100 high-level text transformation functions such as count and dedup, whereas our DSL consists of tree-structured programs over richer regular expression based substring constructs.

The DSL-based program synthesis approach has also seen a renewed interest recently (Alur et al., 2015).
It has been used for many applications including synthesizing low-level bitvector implementations (Solar-Lezama et al., 2005), Excel macros for data manipulation (Gulwani, 2011; Gulwani et al., 2012), superoptimization by finding smaller equivalent loop bodies (Schkufza et al., 2013), protocol synthesis from scenarios (Udupa et al., 2013), synthesis of loop-free programs (Gulwani et al., 2011), and automated feedback generation for programming assignments (Singh et al., 2013). The synthesis techniques proposed in the literature generally employ various search techniques including enumeration with pruning, symbolic constraint solving, and stochastic search, while supporting different forms of specifications including input-output examples, partial programs, program invariants, and reference implementations.

There is also a recent line of work on learning probabilistic models of code from large numbers of code repositories (big code) (Raychev et al., 2015; Bielik et al., 2016; Hindle et al., 2016), which are then used for applications such as auto-completion of partial programs, inference of variable and method names, program repair, etc. These language models typically capture only the syntactic properties of code, unlike our approach that also tries to capture the semantics to learn the desired program. The work by Maddison & Tarlow (2014) addresses the problem of learning structured generative models of source code, but both their model and application domain are different from ours. Piech et al. (2015) use an NPM-RNN model to embed program ASTs, where a subtree of the AST rooted at a node n is represented by a matrix obtained by combining representations of the children of node n and the embedding matrix of the node n itself (which corresponds to its functional behavior). The forward pass in our R3NN architecture from leaf nodes to the root node is, at a high level, similar, but we use a distributed representation for each grammar symbol that leads to a different root representation. Moreover, R3NN also performs a reverse-recursive pass to ensure all nodes in the tree encode global information about other nodes in the tree. Finally, the R3NN network is then used to incrementally build a tree to synthesize a program.

The R3NN model employed in our work is related to several tree and graph structured neural networks present in the NLP literature (Le & Zuidema, 2014; Paulus et al., 2014; Irsoy & Cardie, 2013). The Inside-Outside Recursive Neural Network (Le & Zuidema, 2014) in particular is most similar to the R3NN, where they generate a parse tree incrementally by using global leaf-level representations to determine which expansions in the parse tree to take next.

We have proposed a novel technique called Neuro-Symbolic Program Synthesis that is able to construct a program incrementally based on given input-output examples. To do so, a new neural architecture called the Recursive-Reverse-Recursive Neural Network is used to encode and expand a partial program tree into a full program tree. Its effectiveness at example-based program synthesis is demonstrated, even when the program has not been seen during training.

These promising results open up a number of interesting directions for future research. For example,
we took a supervised-learning approach here, assuming availability of target programs during training. In some scenarios, we may only have access to an oracle that returns the desired output given an input. In this case, reinforcement learning is a promising framework for program synthesis.

Hindle, Abram, Barr, Earl T., Gabel, Mark, Su, Zhendong, and Devanbu, Premkumar T. On the naturalness of software. Commun. ACM, 59(5):122-131, 2016.

Irsoy, Ozan and Cardie, Claire. Bidirectional recursive neural networks for token-level labeling with structure. In NIPS Deep Learning Workshop, 2013.

Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, pp. 190-198, 2015.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. In ICLR, 2014.

Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.

Le, Phong and Zuidema, Willem. The inside-outside recursive neural network model for dependency parsing. In EMNLP, pp. 729-739, 2014.

Liang, Percy, Jordan, Michael I., and Klein, Dan. Learning programs: A hierarchical Bayesian approach. In ICML, pp. 639-646, 2010.

Menon, Aditya Krishna, Tamuz, Omer, Gulwani, Sumit, Lampson, Butler W., and Kalai, Adam. A machine learning framework for programming by example. In ICML, pp. 187-195, 2013.

Neelakantan, Arvind, Le, Quoc V., and Sutskever, Ilya. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.

Paulus, Romain, Socher, Richard, and Manning, Christopher D. Global belief recursive neural networks. In NIPS, pp. 2888-2896, 2014.

Piech, Chris, Huang, Jonathan, Nguyen, Andy, Phulsuksombati, Mike, Sahami, Mehran, and Guibas, Leonidas J. Learning program embeddings to propagate feedback on student code. In ICML, pp. 1093-1102, 2015.

Schkufza, Eric, Sharma, Rahul, and Aiken, Alex. Stochastic superoptimization. In ASPLOS, pp. 305-316, 2013.

Singh, Rishabh and Solar-Lezama, Armando. Synthesizing data structure manipulations from storyboards. In SIGSOFT FSE, pp. 289-299, 2011.

Singh, Rishabh, Gulwani, Sumit, and Solar-Lezama, Armando. Automated feedback generation for introductory programming assignments. In PLDI, pp. 15-26, 2013.

Solar-Lezama, Armando, Rabbah, Rodric, Bodik, Rastislav, and Ebcioglu, Kemal. Programming by sketching for bit-streaming programs. In PLDI, 2005.

Udupa, Abhishek, Raghavan, Arun, Deshmukh, Jyotirmoy V., Mador-Haim, Sela, Martin, Milo M. K., and Alur, Rajeev. TRANSIT: specifying protocols with concolic snippets. In PLDI, pp. 287-296, 2013.

Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey. Grammar as a foreign language. In ICLR, 2015.

[[Concat(f1, ..., fn)]]v  =  Concat([[f1]]v, ..., [[fn]]v)
[[ConstStr(s)]]v          =  s
[[SubStr(v, pl, pr)]]v    =  v[[[pl]]v .. [[pr]]v]
[[ConstPos(k)]]v          =  k if k > 0, otherwise len(v) + k
[[(r, k, Start)]]v        =  start of the kth match of r in v from the beginning (from the end if k < 0)
[[(r, k, End)]]v          =  end of the kth match of r in v from the beginning (from the end if k < 0)

Figure 8: The semantics of the DSL for string transformations.

Figure 9: The cross-correlation encoder to encode a single input-output example. [Diagram: the input and output strings are featurized, aligned and padded, and combined through element-wise dot-products and sums at successive offsets.]

DOMAIN-SPECIFIC LANGUAGE FOR STRING TRANSFORMATIONS

The semantics of the DSL programs is shown in Figure 8. The semantics of a Concat expression is to concatenate the results of recursively evaluating the constituent substring expressions fi. The semantics of ConstStr(s) is to simply return the constant string s. The semantics of a substring expression is to first evaluate the two position logics pl and pr to p1 and p2 respectively, and then return the substring corresponding to v[p1..p2]. We use s[i..j] to denote the substring of string s starting at index i (inclusive) and ending at index j (exclusive), and len(s) to denote its length. The semantics of a ConstPos(k) expression is to return k if k > 0, and len(v) + k otherwise (k < 0). The semantics of the position logic (r, k, Start) is to return the start of the kth match of r in v from the beginning (if k > 0) or from the end (if k < 0); (r, k, End) analogously returns the end of the kth match.
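To make the evaluation rules in Figure 8 concrete, below is a minimal sketch of an interpreter for this DSL. The nested-tuple AST encoding, the "RegPos" tag, and the example program are illustrative assumptions for this sketch, not the paper's implementation.

```python
import re

def eval_pos(p, v):
    """Evaluate a position logic against the input string v."""
    if p[0] == "ConstPos":                       # k > 0 counts from the left, k < 0 from the right
        k = p[1]
        return k if k > 0 else len(v) + k
    _, r, k, side = p                            # ("RegPos", regex, k, "Start"/"End")
    matches = list(re.finditer(r, v))
    m = matches[k - 1] if k > 0 else matches[k]  # k-th match from start, or from the end if k < 0
    return m.start() if side == "Start" else m.end()

def eval_expr(e, v):
    """Recursively evaluate a DSL expression on input string v."""
    if e[0] == "Concat":                         # concatenate recursively evaluated substrings
        return "".join(eval_expr(f, v) for f in e[1])
    if e[0] == "ConstStr":                       # a constant string
        return e[1]
    if e[0] == "SubStr":                         # v[p_l .. p_r], end-exclusive
        return v[eval_pos(e[1], v):eval_pos(e[2], v)]
    raise ValueError(e[0])

# Example: extract the first run of digits from the input.
prog = ("Concat", [("SubStr",
                    ("RegPos", r"\d+", 1, "Start"),
                    ("RegPos", r"\d+", 1, "End"))])
assert eval_expr(prog, "(425) 221 6767") == "425"
```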
S1c2cvqee | [{"section_index": "0", "section_name": "DESIGNING NEURAL NETWORK ARCHITECTURES USING REINFORCEMENT LEARNING", "section_text": "Bowen Baker, Otkrist Gupta, Nikhil Naik & Ramesh Raskar\nAt present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcrafted by carefu experimentation or modified from a handful of existing networks. We intro duce MetaQNN, a meta-modeling algorithm based on reinforcement learning tc automatically generate high-performing CNN architectures for a given learning task. The learning agent is trained to sequentially choose CNN layers using Q learning with an e-greedy exploration strategy and experience replay. The ageni explores a large but finite space of possible architectures and iteratively discovers designs with improved performance on the learning task. On image classificatior benchmarks, the agent-designed networks (consisting of only standard convolu tion, pooling, and fully-connected layers) beat existing networks designed with the same layer types and are competitive against the state-of-the-art methods tha use more complex layer types. We also outperform existing meta-modeling ap proaches for network design on image classification tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep convolutional neural networks (CNNs) have seen great success in the past few years on a variety of machine learning problems (LeCun et al., 2015). A typical CNN architecture consists of several convolution, pooling, and fully connected layers. While constructing a CNN, a network designer has to make numerous design choices: the number of layers of each type, the ordering of layers, and the hyperparameters for each type of layer, e.g., the receptive field size, stride, anc number of receptive fields for a convolution layer. The number of possible choices makes the desig! space of CNN architectures extremely large and hence, infeasible for an exhaustive manual search While there has been some work (Pinto et al., 2009; Bergstra et al., 2013; Domhan et al., 2015) o1 automated or computer-aided neural network design, new CNN architectures or network design ele ments are still primarily developed by researchers using new theoretical insights or intuition gainec from experimentation.\nIn this paper, we seek to automate the process of CNN architecture selection through a meta modeling procedure based on reinforcement learning. We construct a novel Q-learning agent whose. goal is to discover CNN architectures that perform well on a given machine learning task with nc human intervention. The learning agent is given the task of sequentially picking layers of a CNN. model. By discretizing and limiting the layer parameters to choose from, the agent is left witl. a finite but large space of model architectures to search from. The agent learns through randon exploration and slowly begins to exploit its findings to select higher performing models using the e greedy strategy (Mnih et al., 2015). The agent receives the validation accuracy on the given machine. learning task as the reward for selecting an architecture. We expedite the learning process througl. repeated memory sampling using experience replay (Lin, 1993). We refer to this Q-learning basec. meta-modeling method as MetaQNN, which is summarized in Figure 1.1.\nWe conduct experiments with a space of model architectures consisting of only standard convolution. 
pooling, and fully connected layers using three standard image classification datasets: CIFAR-10,.\n' For more information, model files, and code, please visit https://bowenbaker.github.io/metaqnn"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Agent Learns Agent Samples Train Network Conv From Memory Network Topology Topology: C64,5,1 Conv C(128,3,1) : P(2,2) SM(10) Pool Performance: Store in. 93.3% Sample Update Replay Memory Memory Q-Values Softmax\nFigure 1: Designing CNN Architectures with Q-learning: The agent begins by sampling a Con volutional Neural Network (CNN) topology conditioned on a predefined behavior distribution and the agent's prior experience (left block). That CNN topology is then trained on a specific task; the topology description and performance, e.g. validation accuracy, are then stored in the agent's mem ory (middle block). Finally, the agent uses its memories to learn about the space of CNN topologies through Q-learning (right block)..\nSVHN, and MNIST. The learning agent discovers CNN architectures that beat all existing networks. designed only with the same layer types (e.g., Springenberg et al. (2014); Srivastava et al. (2015)) In addition, their performance is competitive against network designs that include complex layer types and training procedures (e.g., Clevert et al. (2015); Lee et al. (2016)). Finally, the MetaQNN. selected models comfortably outperform previous automated network design methods (Stanley & Miikkulainen, 2002; Bergstra et al., 2013). The top network designs discovered by the agent on. one dataset are also competitive when trained on other datasets, indicating that they are suited for transfer learning tasks. Moreover, we can generate not just one, but several varied, well-performing. network designs, which can be ensembled to further boost the prediction performance..\nDesigning neural network architectures: Research on automating neural network design goe. oack to the 1980s when genetic algorithm-based approaches were proposed to find both architec. ures and weights (Schaffer et al., 1992). However, to the best of our knowledge, networks designe. vith genetic algorithms, such as those generated with the NEAT algorithm (Stanley & Miikkulainer. 2002), have been unable to match the performance of hand-crafted networks on standard bench. narks (Verbancsics & Harguess, 2013). Other biologically inspired ideas have also been explorec. notivated by screening methods in genetics, Pinto et al. (2009) proposed a high-throughput networ election approach where they randomly sample thousands of architectures and choose promisin ones for further training. In recent work, Saxena & Verbeek (2016) propose to sidestep the archi. ecture selection process through densely connected networks of layers, which come closer to th. oerformance of hand-crafted networks..\nBayesian optimization has also been used (Shahriari et al., 2016) for automatic selection of network architectures (Bergstra et al., 2013; Domhan et al., 2015) and hyperparameters (Snoek et al., 2012; Swersky et al., 2013). Notably, Bergstra et al. (2013) proposed a meta-modeling approach based on Tree of Parzen Estimators (TPE) (Bergstra et al., 2011) to choose both the type of layers and hyperparameters of feed-forward networks; however, they fail to match the performance of hand- crafted networks.\nReinforcement Learning: Recently there has been much work at the intersection of reinforcement. learning and deep learning. 
For instance, methods using CNNs to approximate the Q-learning utility function (Watkins, 1989) have been successful in game-playing agents (Mnih et al., 2015; Silver et al., 2016) and robotic control (Lillicrap et al., 2015; Levine et al., 2016). These methods rely on phases of exploration, where the agent tries to learn about its environment through sampling, and exploitation, where the agent uses what it learned about the environment to find better paths. In traditional reinforcement learning settings, over-exploration can lead to slow convergence times, yet over-exploitation can lead to convergence to local minima (Kaelbling et al., 1996). However, in the case of large or continuous state spaces, the ε-greedy strategy of learning has been empirically shown to converge (Vermorel & Mohri, 2005). Finally, when the state space is large or exploration is costly, the experience replay technique (Lin, 1993) has proved useful in experimental settings (Adam et al., 2012; Mnih et al., 2015). We incorporate these techniques (Q-learning, the ε-greedy strategy, and experience replay) in our algorithm design.

Our method relies on Q-learning, a type of reinforcement learning. We now summarize the theoretical formulation of Q-learning, as adapted to our problem. Consider the task of teaching an agent to find optimal paths as a Markov Decision Process (MDP) in a finite-horizon environment. Constraining the environment to be finite-horizon ensures that the agent will deterministically terminate in a finite number of time steps. In addition, we restrict the environment to have a discrete and finite state space S as well as action space U. For any state s_i ∈ S, there is a finite set of actions U(s_i) ⊆ U that the agent can choose from. In an environment with stochastic transitions, an agent in state s_i taking some action u ∈ U(s_i) will transition to state s_j with probability p_{s'|s,u}(s_j | s_i, u), which may be unknown to the agent. At each time step t, the agent is given a reward r_t, dependent on the transition from state s to s' and action u; r_t may also be stochastic according to a distribution p_{r|s',s,u}. The agent's goal is to maximize the total expected reward over all possible trajectories, i.e., max_{T_i ∈ T} R_{T_i}, where the total expected reward for a trajectory T_i is

R_{T_i} = \sum_{(s,u,s') \in T_i} \mathbb{E}_{r|s,u,s'}[r \mid s, u, s'].    (1)

Though we limit the agent to a finite state and action space, there is still a combinatorially large number of trajectories, which motivates the use of reinforcement learning. We define the maximization problem recursively in terms of subproblems as follows. For any state s_i ∈ S and subsequent action u ∈ U(s_i), we define the maximum total expected reward to be Q*(s_i, u). Q*(·) is known as the action-value function and individual Q*(s_i, u) are known as Q-values. The recursive maximization equation, which is known as Bellman's Equation, can be written as

Q*(s_i, u) = \mathbb{E}_{s_j|s_i,u}\big[ \mathbb{E}_{r|s_i,u,s_j}[r \mid s_i, u, s_j] + \gamma \max_{u' \in U(s_j)} Q*(s_j, u') \big].    (2)

In many cases, it is impossible to analytically solve Bellman's Equation (Bertsekas, 2015), but it can be formulated as an iterative update

Q_{t+1}(s_i, u) = (1 - \alpha) Q_t(s_i, u) + \alpha \big[ r_t + \gamma \max_{u' \in U(s_j)} Q_t(s_j, u') \big].    (3)

Equation 3 is the simplest form of Q-learning proposed by Watkins (1989). For well formulated problems, lim_{t→∞} Q_t(s, u) = Q*(s, u), as long as each transition is sampled infinitely many times (Bertsekas, 2015). The update equation has two parameters: (i) α is a Q-learning rate which determines the weight given to new information over old information, and (ii) γ is the discount factor which determines the weight given to short-term rewards over future rewards. The Q-learning algorithm is model-free, in that the learning agent can solve the task without ever explicitly constructing an estimate of environmental dynamics. In addition, Q-learning is off-policy, meaning it can learn about optimal policies while exploring via a non-optimal behavioral distribution, i.e., the distribution by which the agent explores its environment.

We choose the behavior distribution using an ε-greedy strategy (Mnih et al., 2015). With this strategy, a random action is taken with probability ε and the greedy action, max_{u ∈ U(s_i)} Q_t(s_i, u), is chosen with probability 1 − ε. We anneal ε from 1 → 0 such that the agent begins in an exploration phase and slowly starts moving towards the exploitation phase. In addition, when the exploration cost is large (which is true for our problem setting), it is beneficial to use the experience replay technique for faster convergence (Lin, 1992). In experience replay, the learning agent is provided with a memory of its past explored paths and rewards. At a given interval, the agent samples from the memory and updates its Q-values via Equation 3.
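As a concrete illustration of this background, the following is a minimal tabular sketch of the update in Equation 3 combined with ε-greedy action selection and experience replay. The state/action encodings, the reward convention, and the stored-tuple format are illustrative assumptions, not the implementation used in the paper.

```python
import random
from collections import defaultdict

ALPHA, GAMMA = 0.01, 1.0           # Q-learning rate and discount factor
Q = defaultdict(lambda: 0.5)       # Q(s, u), initialised at 0.5 as in the paper

def choose_action(state, actions, epsilon):
    """Epsilon-greedy: random action with probability epsilon, otherwise greedy."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda u: Q[(state, u)])

def q_update(s, u, r, s_next, next_actions):
    """One iterative update of Equation 3."""
    best_next = max((Q[(s_next, u2)] for u2 in next_actions), default=0.0)
    Q[(s, u)] = (1 - ALPHA) * Q[(s, u)] + ALPHA * (r + GAMMA * best_next)

def replay(memory, batch_size=100):
    """Re-apply updates to stored (s, u, r, s_next, next_actions) experiences."""
    for s, u, r, s_next, next_actions in random.sample(memory, min(batch_size, len(memory))):
        q_update(s, u, r, s_next, next_actions)
```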
DESIGNING NEURAL NETWORK ARCHITECTURES WITH Q-LEARNING

We consider the task of training a learning agent to sequentially choose neural network layers. Figure 2 shows feasible state and action spaces (a) and a potential trajectory the agent may take, along with the CNN architecture defined by this trajectory (b). We model the layer selection process as a Markov Decision Process with the assumption that a well-performing layer in one network should also perform well in another network. We make this assumption based on the hierarchical nature of the feature representations learned by neural networks with many hidden layers (LeCun et al., 2015). The agent sequentially selects layers via the ε-greedy strategy until it reaches a termination state. The CNN architecture defined by the agent's path is trained on the chosen learning problem, and the agent is given a reward equal to the validation accuracy. The validation accuracy and architecture description are stored in a replay memory, and experiences are sampled periodically from the replay memory to update Q-values via Equation 3. The agent follows an ε schedule which determines its shift from exploration to exploitation.

Our method requires three main design choices: (i) reducing CNN layer definitions to simple state tuples, (ii) defining a set of actions the agent may take, i.e., the set of layers the agent may pick next given its current state, and (iii) balancing the size of the state-action space (and correspondingly, the model capacity) with the amount of exploration needed by the agent to converge. We now describe the design choices and the learning process in detail.

Each state is defined as a tuple of all relevant layer parameters. We allow five different types of layers: convolution (C), pooling (P), fully connected (FC), global average pooling (GAP), and softmax (SM), though the general method is not limited to this set. Table 1 shows the relevant parameters for each layer type and also the discretization we chose for each parameter. Each layer has a parameter, layer depth (shown as Layer 1, 2, ... in Figure 2).
Adding layer depth to the state space allows us to constrict the action space such that the state-action graph is directed and acyclic (DAG), and also allows us to specify a maximum number of layers the agent may select before terminating.

Each layer type also has a parameter called representation size (R-size). Convolutional nets progressively compress the representation of the original signal through pooling and convolution. The presence of these layers in our state space may lead the agent on a trajectory where the intermediate signal representation gets reduced to a size that is too small for further processing. For example, five 2 x 2 pooling layers each with stride 2 will reduce an image of initial size 32 x 32 to size 1 x 1. At this stage, further pooling, or convolution with receptive field size greater than 1, would be meaningless and degenerate. To avoid such scenarios, we add the R-size parameter to the state tuple s, which allows us to restrict actions from states with R-size n to those that have a receptive field size less than or equal to n. To further constrict the state space, we chose to bin the representation sizes into three discrete buckets. However, binning adds uncertainty to the state transitions: depending on the true underlying representation size, a pooling layer may or may not change the R-size bin. As a result, the action of pooling can lead to two different states, which we model as stochasticity in state transitions. Please see Figure A1 in the appendix for an illustrated example.

Figure 2: Markov Decision Process for CNN Architecture Generation: Figure 2(a) shows the full state and action space. In this illustration, actions are shown to be deterministic for clarity, but they are stochastic in experiments. C(n, f, l) denotes a convolutional layer with n filters, receptive field size f, and stride l. P(f, l) denotes a pooling layer with receptive field size f and stride l. G denotes a termination state (Softmax/Global Average Pooling). Figure 2(b) shows a path the agent may choose, highlighted in green, and the corresponding CNN topology.

Layer type            Parameter                                  Values
Convolution (C)       i ~ Layer depth                            < 12
                      f ~ Receptive field size                   Square. ∈ {1, 3, 5}
                      l ~ Stride                                 Square. Always equal to 1.
                      d ~ # receptive fields                     ∈ {64, 128, 256, 512}
                      n ~ Representation size                    ∈ {(∞, 8], (8, 4], (4, 1]}
Pooling (P)           i ~ Layer depth                            < 12
                      (f, l) ~ (Receptive field size, Strides)   Square. ∈ {(5, 3), (3, 2), (2, 2)}
                      n ~ Representation size                    ∈ {(∞, 8], (8, 4], (4, 1]}
Fully Connected (FC)  i ~ Layer depth                            < 12
                      n ~ # consecutive FC layers                < 3
                      d ~ # neurons                              ∈ {512, 256, 128}
Termination State     s ~ Previous State                         Global Avg. Pooling/Softmax

Table 1: Experimental State Space. For each layer type, we list the relevant parameters and the values each parameter is allowed to take.

4.2 THE ACTION SPACE

We restrict the agent from taking certain actions to both limit the state-action space and make learning tractable.
First, we allow the agent to terminate a path at any point, i.e., it may choose a termination state from any non-termination state. In addition, we only allow transitions from a state with layer depth i to a state with layer depth i + 1, which ensures that there are no loops in the graph. This constraint ensures that the state-action graph is always a DAG. Any state at the maximum layer depth, as prescribed in Table 1, may only transition to a termination layer.

An agent at a state of type convolution (C) may transition to a state with any other layer type. An agent at a state with layer type pooling (P) may transition to a state with any other layer type other than another P state, because consecutive pooling layers are equivalent to a single, larger pooling layer which could lie outside of our chosen state space. Furthermore, only states with representation size in bins (8, 4] and (4, 1] may transition to an FC layer, which ensures that the number of weights does not become unreasonably huge. Note that a majority of these constraints are in place to enable faster convergence on our limited hardware (see Section 5) and are not a limitation of the method in itself.

Next, we limit the number of fully connected (FC) layers to be at maximum two, because a large number of FC layers can lead to too many learnable parameters. The agent at a state with type FC may transition to another state with type FC if and only if the number of consecutive FC states is less than the maximum allowed. Furthermore, a state s of type FC with number of neurons d may only transition to either a termination state or a state s' of type FC with number of neurons d' < d.

4.3 Q-LEARNING TRAINING PROCEDURE

For the iterative Q-learning updates (Equation 3), we set the Q-learning rate (α) to 0.01. In addition, we set the discount factor (γ) to 1 to not over-prioritize short-term rewards. We decrease ε from 1.0 to 0.1 in steps, where the step-size is defined by the number of unique models trained (Table 2). At ε = 1.0, the agent samples CNN architectures with a random walk along a uniformly weighted Markov chain. Every topology sampled by the agent is trained using the procedure described in Section 5, and the prediction performance of this network topology on the validation set is recorded. We train a larger number of models at ε = 1.0 as compared to other values of ε to ensure that the agent has adequate time to explore before it begins to exploit. We stop the agent at ε = 0.1 (and not at ε = 0) to obtain a stochastic final policy, which generates perturbations of the global minimum.² Ideally, we want to identify several well-performing model topologies, which can then be ensembled to improve prediction performance.

² ε = 0 indicates a completely deterministic policy. Because we would like to generate several good models for ensembling and analysis, we stop at ε = 0.1, which represents a stochastic final policy.

During the entire training process (starting at ε = 1.0), we maintain a replay dictionary which stores (i) the network topology and (ii) prediction performance on a validation set, for all of the sampled
After each model is sampled and. trained, the agent randomly samples 100 models from the replay dictionary and applies the Q-value. update defined in Equation 3 for all transitions in each sampled sequence. The Q-value update is applied to the transitions in temporally reversed order, which has been shown to speed up Q-values. convergence (Lin, 1993)."}, {"section_index": "6", "section_name": "5 EXPERIMENT DETAILS", "section_text": "During the model exploration phase, we trained each network topology with a quick and aggressive. training scheme. For each experiment, we created a validation set by randomly taking 5,oo0 samples. from the training set such that the resulting class distributions were unchanged. For every network. a dropout layer was added after every two layers. The ith dropout layer, out of a total n dropou. Adam optimizer (Kingma & Ba, 2014) with 1 = 0.9, 2 = 0.999, E = 10-8. The batch size was set to 128, and the initial learning rate was set to O.001. If the model failed to perform better than a. random predictor after the first epoch, we reduced the learning rate by a factor of O.4 and restarted. training, for a maximum of 5 restarts. For models that started learning (i.e., performed better than a. random predictor), we reduced the learning rate by a factor of 0.2 every 5 epochs. All weights were initialized with Xavier initialization (Glorot & Bengio, 2010). Our experiments using Caffe (Jia. et al., 2014) took 8-10 days to complete for each dataset with a hardware setup consisting of 10. NVIDIA GPUs.\nAfter the agent completed the e schedule (Table 2), we selected the top ten models that were founc over the course of exploration. These models were then finetuned using a much longer training schedule, and only the top five were used for ensembling. We now provide details of the datasets and the finetuning process.\nThe Street View House Numbers (SVHN) dataset has 10 classes with a total of 73,257 samples. in the original training set, 26,032 samples in the test set, and 531,131 additional samples in the. extended training set. During the exploration phase, we only trained with the original training set. using 5,oo0 random samples as validation. We finetuned the top ten models with the original plus. extended training set, by creating preprocessed training and validation sets as described by Lee et al.. (2016). Our final learning rate schedule after tuning on validation set was 0.025 for 5 epochs, 0.0125. for 5 epochs, 0.0001 for 20 epochs, and 0.00001 for 10 epochs.\nCIFAR-10, the 10 class tiny image dataset, has 50,000 training samples and 10,000 testing samples During the exploration phase, we took 5,oo0 random samples from the training set for validation. The maximum layer depth was increased to 18. After the experiment completed, we used the same validation set to tune hyperparameters, resulting in a final training scheme which we ran on the entire training set. In the final training scheme, we set a learning rate of O.025 for 40 epochs, 0.0125 for 40 epochs, 0.0001 for 160 epochs, and 0.00001 for 60 epochs, with all other parameters unchanged. During this phase, we preprocess using global contrast normalization and use moderate data augmentation, which consists of random mirroring and random translation by up to 5 pixels.\nModel Selection Analysis: From Q-learning principles, we expect the learning agent to improve in its ability to pick network topologies as e reduces and the agent enters the exploitation phase. 
In Figure 3, we plot the rolling mean of prediction accuracy over 100 models and the mean accuracy of models sampled at different ε values, for the CIFAR-10 and SVHN experiments. The plots show that, while the prediction accuracy remains flat during the exploration phase (ε = 1) as expected, the agent consistently improves in its ability to pick better-performing models as ε reduces from 1 to 0.1. For example, the mean accuracy of models in the SVHN experiment increases from 52.25% at ε = 1 to 88.02% at ε = 0.1. Furthermore, we demonstrate the stability of the Q-learning procedure with 10 independent runs on a subset of the SVHN dataset in Section D.1 of the Appendix. Additional analysis of Q-learning results can be found in Section D.2.

[Plots: SVHN and CIFAR-10 Q-learning performance; rolling mean model accuracy and average accuracy per ε versus iteration, for ε annealed from 1.0 to 0.1.]

Figure 3: Q-Learning Performance. In the plots, the blue line shows a rolling mean of model accuracy versus iteration, where in each iteration of the algorithm the agent is sampling a model. Each bar (in light blue) marks the average accuracy over all models that were sampled during the exploration phase with the labeled ε. As ε decreases, the average accuracy goes up, demonstrating that the agent learns to select better-performing CNN architectures.

Table 3: Error Rate Comparison with CNNs that only use convolution, pooling, and fully connected layers. We report results for CIFAR-10 and CIFAR-100 with moderate data augmentation and results for MNIST and SVHN without any data augmentation.

The top models selected by the Q-learning agent vary in the number of parameters but all demonstrate high performance (see Appendix Tables 1-3). For example, the number of parameters for the top five CIFAR-10 models range from 11.26 million to 1.10 million, with only a 2.32% decrease in test error. We find design motifs common to the top hand-crafted network architectures as well. For example, the agent often chooses a layer of type C(N, 1, 1) as the first layer in the network. These layers generate N learnable linear transformations of the input data, which is similar in spirit to preprocessing of input data from RGB to a different color space such as YUV, as found in prior work (Sermanet et al., 2012; 2013).

Prediction Performance: We compare the prediction performance of the MetaQNN networks discovered by the Q-learning agent with state-of-the-art methods on three datasets. We report the accuracy of our best model, along with an ensemble of top five models. First, we compare MetaQNN with six existing architectures that are designed with standard convolution, pooling, and fully-connected layers alone, similar to our designs. As seen in Table 3, our top model alone, as well as the committee ensemble of five models, outperforms all similar models. Next, we compare our results with six top networks overall, which contain complex layer types and design ideas, including generalized pooling functions, residual connections, and recurrent modules. Our results are competitive with these methods as well (Table 4). Finally, our method outperforms existing automated network de-
six top networks overall, which contain complex layer types and design ideas, including generalized. pooling functions, residual connections, and recurrent modules. Our results are competitive witl. these methods as well (Table 4). Finally, our method outperforms existing automated network de.\nTable 4: Error Rate Comparison with state-of-the-art methods with complex layer types. We re port results for CIFAR-10 and CIFAR-100 with moderate data augmentation and results for MNIST and SVHN without any data augmentation..\nTable 5: Prediction Error for the top MetaQNN (CIFAR-1O) model trained for other tasks. Fine tuning refers to initializing training with the weights found for the optimal CIFAR-10 model..\nsign methods. MetaQNN obtains an error of 6.92% as compared to 21.2% reported by Bergstra et al (2011) on CIFAR-10; and it obtains an error of 0.32% as compared to 7.9% reported by Verbancsics. & Harguess (2013) on MNIST.\nThe difference in validation error between the top 10 models for MNIST was very small. so we also created an ensemble with all 10 models. This ensemble achieved a test error of 0.28%-which beats the current state-of-the-art on MNIST without data augmentation.\nThe best CIFAR-10 model performs 1-2% better than the four next best models, which is why the ensemble accuracy is lower than the best model's accuracy. We posit that the CIFAR-10 MetaQNN did not have adequate exploration time given the larger state space compared to that of the SVHN experiment, causing it to not find more models with performance similar to the best model. Fur- thermore, the coarse training scheme could have been not as well suited for CIFAR-10 as it was for SVHN, causing some models to under perform.\nTransfer Learning Ability: Network designs such as VGGnet (Simonyan & Zisserman, 2014) can. be adopted to solve a variety of computer vision problems. To check if the MetaQNN networks. provide similar transfer learning ability, we use the best MetaQNN model on the CIFAR-10 dataset. for training other computer vision tasks. The model performs well (Table 5) both when training from random initializations, and finetuning from existing weights.."}, {"section_index": "7", "section_name": "7 CONCLUDING REMARKS", "section_text": "*Results in this column obtained with the top MetaQNN architecture for CIFAR-10, trained from randon initialization with CIFAR-100 data.\nNeural networks are being used in an increasingly wide variety of domains, which calls for scalable solutions to produce problem-specific model architectures. We take a step towards this goal and. show that a meta-modeling approach using reinforcement learning is able to generate tailored CNN. designs for different image classification tasks. Our MetaQNN networks outperform previous meta-. modeling methods as well as hand-crafted networks which use the same types of layers..\nWhile we report results for image classification problems, our method could be applied to differ ent problem settings, including supervised (e.g., classification, regression) and unsupervised (e.g., autoencoders). The MetaQNN method could also aid constraint-based network design, by optimiz ing parameters such as size, speed, and accuracy. For instance, one could add a threshold in the state-action space barring the agent from creating models larger than the desired limit. In addition,\nThere are several future avenues for research in reinforcement learning-driven network design a. well. 
In our current implementation, we use the same set of hyperparameters to train all networl topologies during the Q-learning phase and further finetune the hyperparameters for top models selected by the MetaQNN agent. However, our approach could be combined with hyperparameter optimization methods to further automate the network design process. Moreover, we constrict the state-action space using coarse, discrete bins to accelerate convergence. It would be possible tc move to larger state-action spaces using methods for Q-function approximation (Bertsekas, 2015 Mnih et al., 2015)."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Peter Downs for creating the project website and contributing to illustrations. We ac. knowledge Center for Bits and Atoms at MIT for their help with computing resources. Finally, we thank members of Camera Culture group at MIT Media Lab for their help and support.."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Dimitri P Bertsekas. Convex optimization algorithms. Athena Scientific Belmont, 2015\nDjork-Arne Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep networ! learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289. 2015..\nTobias Domhan, Jost Tobias Springenberg, and Frank Hutter. Speeding up automatic hyperparame ter optimization of deep neural networks by extrapolation of learning curves. IJCAI, 2015.\nXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neura networks. A1STATS, 9:249-256, 2010\nIan J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. Max out networks. ICML (3), 28:1319-1327, 2013\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630-645. Springer, 2016.\nLeslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprini arXiv:1412.6980, 2014.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. arXiv preprint arXiv:1512.03385, 2015.\nSergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuc motor policies. JMLR, 17(39):1-40, 2016.\nLong-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching Machine Learning, 8(3-4):293-321, 1992.\nLong-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.\nAdriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiy preprint arXiv:1412.6550. 2014\nJ David Schaffer, Darrell Whitley, and Larry J Eshelman. Combinations of genetic algorithms and neural networks: A survey of the state of the art. International Workshop on Combinations oj Genetic Algorithms and Neural Networks, pp. 1-37, 1992\nPierre Sermanet, Soumith Chintala, and Yann LeCun. Convolutional neural networks applied tc house numbers digit classification. ICPR, pp. 3288-3291, 2012.\nPierre Sermanet, Koray Kavukcuoglu, Soumith Chintala, and Yann LeCun. Pedestrian detection with unsupervised multi-stage feature learning. CVPR. pp. 3626-3633. 2013.\nBobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando de Freitas. 
Taking th human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1 148-175, 2016.\nDavid Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature. 529(7587):484-489. 2016\nJasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. NIPS, pp. 2951-2959, 2012.\nVolodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Belle mare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature. 518(7540):529-533. 2015\nJost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving fo simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014\nKevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task bayesian optimization. NIPS, pp 2004-2012, 2013.\nPhillip Verbancsics and Josh Harguess. Generative neuroevolution for deep learning. arXiv preprini arXiv:1312.5355. 2013.\nJoannes Vermorel and Mehryar Mohri. Multi-armed bandit algorithms and empirical evaluation European Conference on Machine Learning, pp. 437-448, 2005."}, {"section_index": "10", "section_name": "A ALGORITHM", "section_text": "We first describe the main components of the MetaQNN algorithm. Algorithm 1 shows the main loop, where the parameter M would determine how many models to run for a given e and the parameter K would determine how many times to sample the replay database to update Q-values on each iteration. The function TRAIN refers to training the specified network and returns a validation accuracy. Algorithm 2 details the method for sampling a new network using the e-greedy strategy where we assume we have a function TRANSITION that returns the next state given a state and action. 
Finally, Algorithm 3 implements the Q-value update detailed in Equation 3, with discounting factor set to 1, for an entire state sequence in temporally reversed order.

Algorithm 1 Q-learning For CNN Topologies

Initialize:
    replay_memory ← []
    Q ← {(s, u) ∀s ∈ S, u ∈ U(s) : 0.5}
for episode = 1 to M do
    S, U ← SAMPLE_NEW_NETWORK(ε, Q)
    accuracy ← TRAIN(S)
    replay_memory.append((S, U, accuracy))
    for memory = 1 to K do
        S_SAMPLE, U_SAMPLE, accuracy_SAMPLE ~ Uniform{replay_memory}
        Q ← UPDATE_Q_VALUES(Q, S_SAMPLE, U_SAMPLE, accuracy_SAMPLE)
    end for
end for

Algorithm 2 SAMPLE_NEW_NETWORK(ε, Q)

Initialize:
    state sequence S = [s_START]
    action sequence U = []
while U[-1] ≠ terminate do
    α ~ Uniform[0, 1)
    if α > ε then
        u = argmax_{u ∈ U(S[-1])} Q[(S[-1], u)]
    else
        u ~ Uniform{U(S[-1])}
    end if
    s' = TRANSITION(S[-1], u)
    U.append(u)
    if u ≠ terminate then
        S.append(s')
    end if
end while
return S, U

Algorithm 3 UPDATE_Q_VALUES(Q, S, U, accuracy)

Q[S[-1], U[-1]] ← (1 − α) · Q[S[-1], U[-1]] + α · accuracy
for i = length(S) − 2 to 0 do
    Q[S[i], U[i]] ← (1 − α) · Q[S[i], U[i]] + α · max_{u' ∈ U(S[i+1])} Q[S[i+1], u']
end for
return Q

As mentioned in Section 4.1 of the main text, we introduce a parameter called representation size to prohibit the agent from taking actions that can reduce the intermediate signal representation to a size that is too small for further processing. However, this process leads to uncertainties in state transitions, as illustrated in Figure A1, which is handled by the standard Q-learning formulation.

[Diagram: three example state transitions (a)-(c) under a P(2,2) action, showing true R-size values 18 → 9 and 14 → 7 and the corresponding R-size bins before and after the transition.]

Figure A1: Representation size binning: In this figure, we show three example state transitions. The true representation size (R-size) parameter is included in the figure to show the true underlying state. Assuming there are two R-size bins, R-size Bin1: [8, ∞) and R-size Bin2: (0, 7], Figure A1a shows the case where the initial state is in R-size Bin1 and the true representation size is 18. After the agent chooses to pool with a 2 x 2 filter with stride 2, the true representation size reduces to 9 but the R-size bin does not change. In Figure A1b, the same 2 x 2 pooling layer with stride 2 reduces the actual representation size of 14 to 7, but the bin changes to R-size Bin2. Therefore, in Figures A1a and A1b, the agent ends up in different final states, despite originating in the same initial state and choosing the same action. Figure A1c shows that in our state-action space, when the agent takes an action that reduces the representation size, it will have uncertainty in which state it will transition to.

We noticed that the final MNIST models were prone to overfitting, so we increased dropout and did a small grid search for the weight regularization parameter. For both tuning and final training we warmed the model with the learned weights from after the first epoch of initial training. The final models and solvers can be found on our project website https://bowenbaker.github.io/metaqnn/. Figure A2 shows the Q-Learning performance for the MNIST experiment.

D FURTHER ANALYSIS OF Q-LEARNING

Figure 3 of the main text and Figure A2 show that as the agent begins to exploit, it improves in architecture selection.
It is also informative to look at the distribution of models chosen at each ε. Figure A4 gives further insight into the performance achieved at each ε for both experiments.

D.1 Q-LEARNING STABILITY

Because the Q-learning agent explores via a random or semi-random distribution, it is natural to ask whether the agent can consistently improve architecture performance. While the success of the three independent experiments described in the main text alludes to stability, here we present further evidence. We conduct 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset (which corresponds to ~7,000 training examples). We use a smaller dataset to reduce the computation time of each independent run to 10 GPU-days, as opposed to the 100 GPU-days it would take on the full dataset. As can be seen in Figure A3, the Q-learning procedure with the exploration schedule detailed in Table 2 is fairly stable. The standard deviation at ε = 1 is notably smaller than at other stages, which we attribute to the large difference in number of samples at each stage.

[Plot: MNIST Q-learning performance; rolling mean model accuracy and average accuracy per ε versus iteration, for ε annealed from 1.0 to 0.1.]

Figure A2: MNIST Q-Learning Performance. The blue line shows a rolling mean of model accuracy versus iteration, where in each iteration of the algorithm the agent is sampling a model. Each bar (in light blue) marks the average accuracy over all models that were sampled during the exploration phase with the labeled ε. As ε decreases, the average accuracy goes up, demonstrating that the agent learns to select better-performing CNN architectures.

[Plots: (a) mean model accuracy with standard deviation at each ε across 10 runs; (b) mean model accuracy at each ε for each individual run.]

Figure A3: Figure A3a shows the mean model accuracy and standard deviation at each ε over 10 independent runs of the Q-learning procedure on 10% of the SVHN dataset. Figure A3b shows the mean model accuracy at each ε for each independent experiment. Despite some variance due to a randomized exploration strategy, each independent run successfully improves architecture performance.

Furthermore, the best model found during each run had remarkably similar performance with a mean accuracy of 88.25% and standard deviation of 0.58%, which shows that each run successfully found at least one very high performing model. Note that we did not use an extended training schedule to improve performance in this experiment.

D.2 Q-VALUE ANALYSIS

We now analyze the actual Q-values generated by the agent during the training process. The learning agent iteratively updates the Q-values of each path during the ε-greedy exploration. Each Q-value is initialized at 0.5. After the ε-schedule is complete, we can analyze the final Q-value associated with each path to gain insights into the layer selection process.
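One way to carry out this analysis is to average the final Q-values by layer type and depth. The following is a minimal sketch of that aggregation; the assumption that each state tuple exposes layer_type and layer_depth fields is illustrative, not the paper's data format.

```python
from collections import defaultdict

def average_q_by_type_and_depth(Q):
    """Q maps (state, action) -> value; returns {(layer_type, layer_depth): mean Q}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (state, action), value in Q.items():
        key = (state.layer_type, state.layer_depth)   # assumed state fields
        sums[key] += value
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}
```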
In the left column of Figure A5, we plot the average Q-value for each layer type at different layer depths (for both SVHN and CIFAR 10) datasets. Roughly speaking, a higher Q-value associated with a layer type indicates a highe probability that the agent will pick that layer type. In Figure A5, we observe that, while the average Q-value is higher for convolution and pooling layers at lower layer depths, the Q-values for fully connected and termination layers (softmax and global average pooling) increase as we go deepe into the network. This observation matches with traditional network designs.\nWe can also plot the average Q-values associated with different layer parameters for further analysis. In the right column of Figure A5, we plot the average Q-values for convolution layers with receptive\nFigure A2: MNIST Q-Learning Performance. The blue line shows a rolling mean of model accuracy versus iteration, where in each iteration of the algorithm the agent is sampling a model. Each bar (in light blue) marks the average accuracy over all models that were sampled during the exploration phase with the labeled e. As e decreases, the average accuracy goes up, demonstrating that the agent learns to select better-performing CNN architectures.\nfield sizes 1, 3, and 5 at different layer depths. The plots show that layers with receptive field size of 5 have a higher Q-value as compared to sizes 1 and 3 as we go deeper into the networks. This indicates that it might be beneficial to use larger receptive field sizes in deeper networks.\nIn summary, the Q-learning method enables us to perform analysis on the relative benefits of differ-. ent design parameters of our state space, and possibly gain insights for new CNN designs\nModel Architecture Test Error (%) # Params (106) [C(512,5,1), C(256,3,1), C(256,5,1), C(256,3,1), P(5,3), C(512,3,1), 6.92 11.18 C(512,5,1), P(2,2), SM(10)] [C(128,1,1), C(512,3,1), C(64,1,1), C(128,3,1), P(2,2), C(256,3,1), 8.78 2.17 P(2,2), C(512,3,1), P(3,2), SM(10)] [C(128,3,1), C(128,1,1), C(512,5,1), P(2,2), C(128,3,1), P(2,2), 8.88 2.42 C(64,3,1), C(64,5,1), SM(10)] [C(256,3,1), C(256,3,1), P(5,3), C(256,1,1), C(128,3,1), P(2,2), 9.24 1.10 C(128,3,1), SM(10)] [C(128,5,1), C(512,3,1), P(2,2), C(128,1,1), C(128,5,1), P(3,2), 11.63 1.66 C(512,3,1), SM(10)]\nTable A1: Top 5 model architectures: CIFAR-10\nModel Architecture Test Error (%) # Params (106) [C(128,3,1), P(2,2), C(64,5,1), C(512,5,1), C(256,3,1), C(512,3,1). 2.24 9.81 P(2,2), C(512,3,1), C(256,5,1), C(256,3,1), C(128,5,1), C(64,3,1), SM(10)] [C(128,1,1), C(256,5,1), C(128,5,1), P(2,2), C(256,5,1), C(256,1,1), 2.28 10.38 C(256,3,1),C(256,3,1), C(256,5,1),C(512,5,1), C(256,3,1), C(128,3,1), SM(10)] C(128,5,1), C(128,3,1), C(64,5,1), P(5,3), C(128,3,1), C(512,5,1), 2.32 6.83 C(256,5,1), C(128,5,1), C(128,5,1), C(128,3,1), SM(10)] [C(128,1,1), C(256,5,1), C(128,5,1), C(256,3,1), C(256,5,1), P(2,2), 2.35 6.99 C(128,1,1), C(512,3,1), C(256,5,1), P(2,2), C(64,5,1), C(64,1,1) SM(10)] [C(128,1,1), C(256,5,1), C(128,5,1), C(256,5,1), C(256,5,1), 2.36 10.05 C(256,1,1), P(3,2), C(128,1,1), C(256,5,1), C(512,5,1), C(256,3,1), C(128,3,1), SM(10)]\nTable A2: Top 5 model architectures: SVHN. Note that we do not report the best accuracy on test set from the above models in Tables 3 and 4 from the main text. 
This is because the model that achieved 2.28% on the test set performed the best on the validation set.

Model Architecture                                                                Test Error (%)  # Params (10^6)
[C(64,1,1), C(256,3,1), P(2,2), C(512,3,1), C(256,1,1), P(5,3), C(256,3,1),       0.35            5.59
 C(512,3,1), FC(512), SM(10)]
[C(128,3,1), C(64,1,1), C(64,3,1), C(64,5,1), P(2,2), C(128,3,1), P(3,2),         0.38            7.43
 C(512,3,1), FC(512), FC(128), SM(10)]
[C(512,1,1), C(128,3,1), C(128,5,1), C(64,1,1), C(256,5,1), C(64,1,1), P(5,3),    0.40            8.28
 C(512,1,1), C(512,3,1), C(256,3,1), C(256,5,1), C(256,5,1), SM(10)]
[C(64,3,1), C(128,3,1), C(512,1,1), C(256,1,1), C(256,5,1), C(128,3,1), P(5,3),   0.41            6.27
 C(512,1,1), C(512,3,1), C(128,5,1), SM(10)]
[C(64,3,1), C(128,1,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1), C(512,5,1),    0.43            8.10
 C(128,5,1), C(64,1,1), C(512,5,1), C(256,5,1), C(64,5,1), SM(10)]
[C(64,1,1), C(256,5,1), C(256,5,1), C(512,1,1), C(64,3,1), P(5,3), C(256,5,1),    0.44            9.67
 C(256,5,1), C(512,5,1), C(64,1,1), C(128,5,1), C(512,5,1), SM(10)]
[C(128,3,1), C(512,3,1), P(2,2), C(256,3,1), C(128,5,1), C(64,1,1), C(64,5,1),    0.44            3.52
 C(512,5,1), GAP(10), SM(10)]
[C(256,3,1), C(256,5,1), C(512,3,1), C(256,5,1), C(512,1,1), P(5,3), C(256,3,1),  0.46            12.42
 C(64,3,1), C(256,5,1), C(512,3,1), C(128,5,1), C(512,5,1), SM(10)]
[C(512,5,1), C(128,5,1), C(128,5,1), C(128,3,1), C(256,3,1), C(512,5,1),          0.55            7.25
 C(256,3,1), C(128,3,1), SM(10)]
[C(64,5,1), C(512,5,1), P(3,2), C(256,5,1), C(256,3,1), C(256,3,1), C(128,1,1),   0.56            7.55
 C(256,3,1), C(256,5,1), C(64,1,1), C(256,3,1), C(64,3,1), SM(10)]

Table A3: Top 10 model architectures: MNIST. We report the top 10 models for MNIST because we included all 10 in our final ensemble. Note that we do not report the best accuracy on the test set from the above models in Tables 3 and 4 from the main text. This is because the model that achieved 0.44% on the test set performed the best on the validation set.

[Plots (a)-(f): model accuracy distributions per ε for the SVHN, CIFAR-10 and MNIST experiments; the right column compares the distributions at ε = 1 and ε = 0.1.]

Figure A4: Accuracy Distribution versus ε: Figures A4a, A4c, and A4e show the accuracy distribution for each ε for the SVHN, CIFAR-10, and MNIST experiments, respectively. Figures A4b, A4d, and A4f show the accuracy distributions for the initial ε = 1 and the final ε = 0.1. One can see that the accuracy distribution becomes much more peaked in the high accuracy ranges at small ε for each experiment.
[Plots: left column, average Q-value versus layer depth for each layer type (Convolution, Fully Connected, Pooling, Global Average Pooling, Softmax); right column, average Q-value versus layer depth for convolution layers with receptive field sizes 1, 3 and 5; rows show the SVHN, CIFAR-10 and MNIST experiments.]

Figure A5: Average Q-Value versus Layer Depth for different layer types are shown in the left column. Average Q-Value versus Layer Depth for different receptive field sizes of the convolution layer are shown in the right column.
S1RP6GLle

AMORTISED MAP INFERENCE FOR IMAGE SUPER-RESOLUTION

Casper Kaae Sønderby¹*, Jose Caballero†

ABSTRACT

Image super-resolution (SR) is an underdetermined inverse problem, where a large number of plausible high resolution images can explain the same downsampled image. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss. However, the outputs from such methods tend to be blurry, over-smoothed and generally appear implausible. A more desirable approach would employ Maximum a Posteriori (MAP) inference, preferring solutions that always have a high probability under the image prior, and thus appear more plausible. Direct MAP estimation for SR is non-trivial, as it requires us to build a model for the image prior from samples. Here we introduce new methods for amortised MAP inference whereby we calculate the MAP estimate directly using a convolutional neural network. We first introduce a novel neural network architecture that performs a projection to the affine subspace of valid SR solutions, ensuring that the high resolution output of the network is always consistent with the low resolution input. Using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models. We propose three methods to solve this optimisation problem: (1) Generative Adversarial Networks (GAN), (2) denoiser-guided SR, which backpropagates gradient-estimates from denoising to train the network, and (3) a baseline method using a maximum-likelihood trained image prior. Our experiments show that the GAN based approach performs best on real image data. Lastly, we establish a connection between GANs and amortised variational inference as in e.g. variational autoencoders.

1 INTRODUCTION

Image super-resolution (SR) is the underdetermined inverse problem of estimating a high resolution (HR) image given the corresponding low resolution (LR) input. This problem has recently attracted significant research interest due to the potential of enhancing the visual experience in many applications while limiting the amount of raw pixel data that needs to be stored or transmitted. While SR has many applications in for example medical diagnostics or forensics (Nasrollahi & Moeslund, 2014, and references therein), here we are primarily motivated to improve the perceptual quality when applied to natural images. Most current single image SR methods use empirical risk minimisation, often with a pixel-wise mean squared error (MSE) loss (Dong et al., 2016; Shi et al., 2016). However, MSE, and convex loss functions in general, are known to have limitations when presented with uncertainty in multimodal and nontrivial distributions such as distributions over natural images. In SR, a large number of plausible images can explain the LR input, and the Bayes-optimal behaviour for any MSE trained model is to output the mean of the plausible solutions weighted according to their posterior probability. For natural images this averaging behaviour leads to blurry and over-smoothed outputs that generally appear implausible, i.e. the produced estimates have low probability under the natural image prior.
An idealised method for our applications would use a full-reference perceptual loss function that describes the sensitivity of the human visual perception system to different distortions. However, the most widely used loss functions, MSE and the related peak-signal-to-noise-ratio (PSNR) metric, have been shown to correlate poorly with human perception of image quality (Laparra et al., 2016; Wang et al., 2004). Improved perceptual quality metrics have been proposed, the most popular being structural similarity (SSIM) (Wang et al., 2004) and its multi-scale variants (Wang et al., 2003). Although the correlation of these metrics with human perception has improved, they still do not provide a fully satisfactory alternative to MSE for the training of neural networks (NNs) for SR.

In lieu of a satisfactory perceptual loss function, we leave the empirical risk minimisation framework and present methods based only on natural image statistics. In this paper we argue that a desirable approach is to employ amortised Maximum a Posteriori (MAP) inference, preferring solutions that have a high posterior probability, and thus high probability under the image prior, while keeping the computational benefits of amortised inference. To motivate why MAP inference is desirable, consider the toy problem in Figure 1a, where the HR data is two-dimensional, y = [y1, y2], and distributed according to the Swiss-roll density. The LR observation is defined as the average of the two pixels, x = (y1 + y2)/2. Consider observing a LR data point x = 0.5: the set of possible HR solutions is the line y1 = 2x − y2, more generally an affine subspace, which is shown by the dashed line in Figure 1a. The posterior distribution p(y|x) is thus degenerate, and corresponds to a slice of the prior along this line, as shown by the red shading. If one minimises MSE or Mean Absolute Error (MAE), the Bayes-optimal solution will lie at the mean or the median along the line, respectively. This example illustrates that MSE and MAE can produce output with very low probability under the data prior, whereas MAP inference would always find the mode, which by definition is in a high-probability region. See Section 5.6 for a discussion of possible limitations of the MAP inference approach.

Our first contribution is a convolutional neural network (CNN) architecture designed to exploit the structure of the SR problem. Image downsampling is a linear transformation, and can be modelled as a strided convolution. As Figure 1a illustrates, the set of HR images y that are compatible with any LR image x spans an affine subspace. We show that by using specifically chosen linear convolution and deconvolution layers we can implement a projection to this affine subspace. This ensures that our CNNs always output estimates that are consistent with the inputs. The affine projection layer can be added to any CNN, or indeed, any other trainable SR algorithm. Using this architecture we show that training the model for MAP inference reduces to minimising the cross-entropy H[q_G, p_Y] between the HR data distribution p_Y and the implied distribution q_G of the model's output when evaluated at random LR images. As a result, we do not need corresponding HR and LR image pairs any more, and training becomes more akin to training generative models. However, direct minimisation of the cross-entropy is not possible, and instead we develop three approaches, all depending on projecting the model output to the affine subspace of valid solutions, to approximate it directly from data:

1. We present a variant of the Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) which approximately minimises the Kullback-Leibler divergence (KL) and cross-entropy between q_G and p_Y. Our analysis provides theoretical grounding for using GANs in image SR (Ledig et al., 2016). We also introduce a trick that we call instance noise that can be generally applied to address the instability of training GANs.
2. We employ denoising as a way to capture natural image statistics. Bayes-optimal denoisers approximately learn to take a gradient step along the log-probability of the data distribution (Alain & Bengio, 2014). These gradient estimates from denoising can be directly backpropagated through the network to minimise the cross-entropy between q_G and p_Y via gradient descent.
3. We present an approach where the probability density of data is directly modelled via a generative model trained by maximum likelihood. We use a differentiable generative model based on PixelCNNs (Oord et al., 2016) and Mixtures of Conditional Gaussian Scale Mixtures (MCGSM, Theis et al., 2012) whose performance we believe is very close to the state-of-the-art in this category.

In Section 5 we empirically demonstrate the behaviour of the proposed methods on both the two-dimensional toy dataset and on real image datasets. Lastly, in Appendix F we show that a stochastic version of AffGAN performs amortised variational inference, which for the first time establishes a connection between GANs and variational inference as in e.g. variational autoencoders (Kingma & Welling, 2014).

[Figure 1 (plots): a) the Swiss-roll prior with the MSE-mean, MAE-median and MAP estimates for x = 0.5; b) outputs of the MAP, MSE, MAE, AffGAN and AffDG models for x ∈ [−8, 8].]

Figure 1: Illustration of the SR problem via a toy example. Two-dimensional HR data y = [y1, y2] is drawn from a Swiss-roll distribution (in gray). Downsampling is modelled as x = (y1 + y2)/2. a) Given the observation x = 0.5, valid SR solutions lie along the line y2 = 1 − y1 (dashed). The red shading illustrates the magnitude of the posterior p_{Y|x=0.5}. Bayes-optimal estimates under MSE and MAE as well as the MAP estimate given x = 0.5 are marked with labels. The MAP estimates for different values of x ∈ [−8, 8] are also shown. b) Trained model outputs for x ∈ [−8, 8] and estimated gradients from a denoising function trained on p_Y. Note that the AffGAN and AffDG models fit the posterior mode well, whereas the MSE and MAE model outputs generally fall in low-probability regions.

Table 1: Cross-entropy H[q_θ, p_Y] and consistency ℓ_MSE(x, Aŷ) values. The AffGAN and AffDG achieve cross-entropy values close to the MAP solution, confirming that they minimise the desired quantity. The MSE and MAE models perform worse since they do not minimise the cross-entropy. Further, the models using affine projections (Aff) perform better than the soft-constrained models.

    Model      H[q_θ, p_Y]    ℓ_MSE(x, Aŷ)
    MAP        3.15           --
    MSE        9.10           1.25e-2
    MAE        6.30           4.04e-2
    AffGAN     4.10           0.0
    SoftGAN    4.25           8.87e-2
    AffDG      3.81           0.0
    SoftDG     4.19           1.01e-1

The GAN framework was introduced by Goodfellow et al. (2014), which also showed that these models minimise the Jensen-Shannon divergence between q_G and p_Y under certain conditions. In Section 3.2, we present an update rule that corresponds to minimising KL[q_G||p_Y]. Recently, Nowozin et al. (2016) presented a more general treatment that connects GANs to f-divergence minimisation. In parallel to our contributions, theoretical work by Mohamed & Lakshminarayanan (2016) presented a unifying view on learning in GAN-style algorithms, of which our variant can be regarded a special case. The focus of several recent papers on GANs was algorithmic tricks to improve their stability (Radford et al., 2015; Salimans et al., 2016). In Section 3.2.1 we introduce another such trick, which we call instance noise. We discuss theoretical motivations for this and compare it to one-sided label smoothing proposed by Salimans et al. (2016). We also refer to parallel work by Arjovsky & Bottou (2017) proposing a similar method. Recently, several attempts have been made to improve the perceptual quality of SR using deep representations of natural images. Bruna et al. (2016) and Li & Wand (2016) measure the Euclidean distance in the nonlinear feature space of a deep NN pre-trained to perform object classification. Dosovitskiy & Brox (2016) and Ledig et al. (2016) use a similar approach and also add an adversarial loss term. Unpublished work by Garcia (2016) explored combining GANs with an L1 penalty between the LR input and the downsampled output. We note that the soft L2 or L1 penalties used in these methods can be interpreted as assuming Gaussian and Laplace observation noise. In contrast, our approach assumes no observation noise and satisfies the consistency of inputs and outputs exactly by using an affine projection as explained in Section 3.1. In other work, Larsen et al. (2015) proposed to replace the pixel-wise MSE used for training of variational autoencoders with a learned metric from the GAN discriminator. Our denoiser-based method exploits a fundamental connection between probabilistic modelling and learning to denoise (see e.g. Vincent et al., 2008; Alain & Bengio, 2014; Sarela & Valpola, 2005; Rasmus et al., 2015; Greff et al., 2016): a Bayes-optimal denoiser can be used to estimate the gradient of the log probability of data. To our knowledge this work is the first time that the output of a denoiser is explicitly back-propagated to train another network. Lastly, we note that denoising has been used to solve inverse problems in compressed sensing, as in approximate message passing (Metzler et al., 2015).

Consider a function f_θ(x), parametrised by θ, which maps a LR observation x to a HR estimate ŷ. Most current SR methods optimise model parameters via empirical risk minimisation:

    argmin_θ E_{y,x}[ℓ(y, f_θ(x))]    (1)

where y is the true target and ℓ is some loss function. The loss function is typically a simple convex function such as MSE. Instead, we seek to perform MAP inference. For a single LR observation the MAP estimate is

    ŷ(x) = argmax_y log p_{Y|X}(y|x)    (2)

Instead of calculating ŷ for each x separately we perform amortised inference, i.e. we would like to train the SR function f_θ(x) to calculate the MAP estimate. A natural loss function for learning the parameters θ is the average log-posterior:

    argmax_θ E_x[log p_{Y|X}(f_θ(x)|x)]    (3)

where the expectation is taken over the distribution of LR observations x. This loss depends on the unknown posterior distribution p_{Y|X}. We proceed by decomposing the log-posterior using Bayes' rule as follows:

    argmax_θ { E_x[log p_{X|Y}(x|f_θ(x))] + E_x[log p_Y(f_θ(x))] − E_x[log p_X(x)] }    (4)
                    (likelihood)               (prior)               (marginal likelihood)

Notice that the last term of Eqn. (4), the marginal likelihood, does not depend on θ, so we only have to deal with the likelihood and the image prior. The observation model in SR can be described as follows:

    x = Ay    (5)

where A is a linear transformation used for image downsampling. In general, A can be modelled as a strided two-dimensional convolution. Therefore, the likelihood term in Eqn. (4) is degenerate, p(x|f_θ(x)) = δ(x − A f_θ(x)), and Eqn. (4) can be rewritten as constrained optimisation:

    argmax_θ E_x[log p_Y(f_θ(x))]  subject to  A f_θ(x) = x for all x    (6)

To handle the constraint we use functions of the form

    g_θ(x) = Π_A f_θ(x) = (I − A⁺A) f_θ(x) + A⁺x    (7)

where f_θ is an arbitrary mapping from LR to HR space, Π_A a projection to the affine subspace {y : Ay = x}, and A⁺ is the Moore-Penrose pseudoinverse of A, which satisfies AA⁺A = A and A⁺AA⁺ = A⁺. Conveniently, if A is a strided two-dimensional convolution, then A⁺ becomes a deconvolution or up-convolution, which is a standard operation used in deep learning (e.g. Shi et al., 2016). It is important to stress that the optimal deconvolution A⁺ is not simply the transpose of A; Figure 2 illustrates the upsampling kernel (A⁺) that corresponds to a Gaussian downsampling kernel (A). For any A the deconvolution A⁺ can easily be found; here we used numerical methods as detailed in Appendix B. Intuitively, A⁺x can be thought of as a baseline SR solution, while (I − A⁺A)f_θ(x) is the residual.
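A minimal numpy sketch of the affine projection of Eqn. (7) (our illustration, using a toy explicit 1-D averaging matrix rather than the paper's strided convolution) shows that the projected output is consistent with the LR input no matter what the network produces:

```python
import numpy as np

n_hr, factor = 8, 2
# Hypothetical downsampling matrix A: averages non-overlapping pairs.
A = np.zeros((n_hr // factor, n_hr))
for i in range(n_hr // factor):
    A[i, factor * i: factor * (i + 1)] = 1.0 / factor

A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse A+

def project(f_out, x):
    """g(x) = (I - A+ A) f(x) + A+ x: project f_out onto {y : Ay = x}."""
    return f_out - A_pinv @ (A @ f_out) + A_pinv @ x

x = np.array([0.5, -1.0, 2.0, 0.0])                  # LR observation
f_out = np.random.default_rng(1).normal(size=n_hr)   # arbitrary network output
y = project(f_out, x)
assert np.allclose(A @ y, x)  # consistency holds by construction
```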
The operation (I − A⁺A) is a projection to the null-space of A; therefore when we downsample the residual (I − A⁺A)f_θ(x) we are guaranteed to get 0 no matter what f_θ is. By using functions of this form we can turn Eqn. (6) into an unconstrained optimisation problem:

    argmax_θ E_x[log p_Y(Π_A f_θ(x))]    (8)

Interestingly, the objective above can be expressed in terms of the probability distribution of the model output q_θ(y) := ∫ δ(y − Π_A f_θ(x)) p_X(x) dx as follows:

    argmax_θ E_x[log p_Y(Π_A f_θ(x))] = argmax_θ E_{y∼q_θ}[log p_Y(y)] = argmin_θ H[q_θ, p_Y]    (9)

where H[q, p] denotes the cross-entropy between q and p and we used H[q_θ, p_Y] = E_{y∼q_θ}[− log p_Y(y)]. To minimise this objective, we do not need matched input-output pairs as in empirical risk minimisation. Instead we need to match the marginal distribution of reconstructed images q_θ to the distribution of HR images. In this respect, the problem becomes more akin to unsupervised learning or generative modelling. In the following sections we present three approaches to finding the optimal θ, utilising the properties of the affine projection.

"}, {"section_index": "3", "section_name": "3.2 AFFINE PROJECTED GENERATIVE ADVERSARIAL NETWORKS", "section_text": "Generative Adversarial Networks (Goodfellow et al., 2014) consist of a generator G that turns noise sampled from some distribution z ∼ p_Z into images G(z) via a parametric mapping, and a discriminator D that learns to distinguish between real and synthetic images. The generator and discriminator are updated in tandem, resulting in the generative distribution q_G moving closer to the distribution of real data p_Y. The behaviour of GANs depends on the specifics of how the generator and the discriminator are trained. We use the following objective functions for D and G:

    L(D; G) = −E_{y∼p_Y}[log D(y)] − E_{z∼p_Z}[log(1 − D(G(z)))]
    L(G; D) = −E_{z∼p_Z}[log (D(G(z)) / (1 − D(G(z))))]    (10)

The algorithm iterates two steps: first, it updates D by lowering L(D; G) keeping G fixed, then it updates G by lowering L(G; D) keeping D fixed. It can be shown that this amounts to minimising KL[q_G||p_Y], where q_G is the distribution of samples generated by G. See Appendix A for a proof. In the context of SR, the affine projected SR function Π_A f_θ takes the role of the generator. Instead of noise, the generator is now fed low-resolution images x ∼ p_X.
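The following numpy sketch states the two losses of Eqn. (10) as plain functions of discriminator outputs (our reading of the objectives; the batch values are hypothetical, and in practice an autodiff framework would differentiate these through D and the affine-projected generator):

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-8):
    # L(D; G) = -E[log D(y)] - E[log(1 - D(G(z)))]
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())

def g_loss(d_fake, eps=1e-8):
    # L(G; D) = -E[log(D(G(z)) / (1 - D(G(z))))], the KL-minimising update
    return -(np.log(d_fake + eps) - np.log(1.0 - d_fake + eps)).mean()

d_real = np.array([0.9, 0.8, 0.95])  # D on real HR images (hypothetical values)
d_fake = np.array([0.2, 0.4, 0.1])   # D on projected SR outputs
print(d_loss(d_real, d_fake), g_loss(d_fake))
```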
Leaving everything else unchanged, we can deploy the GAN algorithm to minimise KL[q_θ||p_Y]. We call this algorithm affine projected GAN, or AffGAN for short. Similarly, we introduce the notation SoftGAN to denote the GAN algorithm without the affine projection, which instead uses an additional soft constraint ℓ_LR = MAE(x, Aŷ) as in (Garcia, 2016). Note that the difference between the cross-entropy and the KL divergence is the entropy of q_θ: H[q_θ, p_Y] − KL[q_θ||p_Y] = H[q_θ]. Hence, we can expect AffGAN to favour approximate MAP solutions that lead to higher entropy and thus more diverse solutions overall.

"}, {"section_index": "4", "section_name": "3.2.1 INSTANCE NOISE", "section_text": "The theory suggests that GANs should be a convergent algorithm. If a unique optimal discriminator exists and it is reached by optimising D to perfection at each step, technically the whole algorithm corresponds to gradient descent on an estimate of KL[q_θ||p_Y] with respect to θ. In practice, however, GANs tend to be highly unstable. So where does the theory go wrong? We think the main reason for the instability of GANs stems from q_θ and p_Y being concentrated distributions whose support does not overlap. The distribution of natural images p_Y is often assumed to concentrate on or around a low-dimensional manifold. In most cases, q_θ is degenerate and manifold-like by construction, such as in AffGAN. Therefore, odds are that, especially before convergence is reached, q_θ and p_Y can be perfectly separated by several Ds, violating a condition for the convergence proof. We try to remedy this problem by adding instance noise to both SR and true image samples. This amounts to minimising the divergence d_σ(q_θ, p_Y) = KL[p_σ ∗ q_θ || p_σ ∗ p_Y], where p_σ ∗ q_θ denotes convolution of q_θ with the noise distribution p_σ. The noise level σ can be annealed during training, and the noise allows us to safely optimise D until convergence in each iteration. The trick is related to one-sided label smoothing introduced by Salimans et al. (2016), however without introducing a bias in the optimal discriminator, and we believe it is a promising technique for stabilising GAN training in general. For more details please see Appendix C.

To optimise the criterion of Eqn. (9) via gradient descent we need its gradient with respect to θ:

    (d/dθ) E_x[log p_Y(Π_A f_θ(x))] = E_x[ (∂ log p_Y(y)/∂y)|_{y = Π_A f_θ(x)} · ∂ Π_A f_θ(x)/∂θ ]    (11)

The gradient of the log-prior is generally unknown, but it can be estimated via denoising: a Bayes-optimal denoiser f*_σ for Gaussian noise approximately computes a gradient step along the log-probability of the data (Alain & Bengio, 2014):

    f*_σ = argmin_f E_{y∼p_Y}[ℓ_MSE(f(y + σε), y)],   (f*_σ(y) − y)/σ² ≈ ∂ log p_Y(y)/∂y    (12)

where ε ∼ N(0, I) is Gaussian white noise and f*_σ is the Bayes-optimal denoising function for noise level σ. Using these results we can maximise Eqn. (9) by first training a neural network to denoise samples from p_Y and then backpropagating the gradient estimates from Eqn. (12) via the chain rule in Eqn. (11) to update θ. We call this method AffDG, as it uses the affine subspace projection and is guided by the gradient from the DAE. Similarly, we call the algorithm soft-enforcing Eqn. (5) SoftDG.
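A small numpy sketch of the denoiser-guided gradient estimate of Eqn. (12) (our paraphrase; the `denoiser` below is a hypothetical stand-in for a trained DAE):

```python
import numpy as np

sigma = 0.1

def denoiser(y):
    # Placeholder: a real DAE would be a trained network f*_sigma.
    return 0.9 * y

def grad_log_prior(y):
    # (f*(y) - y) / sigma^2 approximates d log p(y) / dy (Alain & Bengio, 2014)
    return (denoiser(y) - y) / sigma ** 2

y = np.array([0.2, -0.5, 1.0])  # projected model output, y = Pi_A f_theta(x)
g = grad_log_prior(y)
# In training, g is chained with dy/dtheta (the network Jacobian) to ascend
# E[log p_Y(Pi_A f_theta(x))], per Eqn. (11).
print(g)
```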
"}, {"section_index": "5", "section_name": "3.4 DENSITY GUIDED SUPER-RESOLUTION", "section_text": "As a more direct baseline model for amortised MAP inference we fit a tractable, yet powerful density model to p_Y using maximum likelihood, and then use the cross-entropy with respect to the generative model to approximate Eqn. (9). We use a deep generative model similar to the PixelCNN (Oord et al., 2016) but with a continuous (and differentiable) MCGSM (Theis et al., 2012) likelihood. These types of models are state-of-the-art in density estimation, are relatively fast to evaluate and produce visually interesting samples (Oord et al., 2016). We call this method AffLL, as it uses the affine projection and is guided by the log-likelihood of a density model.

We designed our experiments to address the following questions:

- Are the methods proposed in Section 3 successful at minimising cross-entropy? → Section 5.1
- Does the affine projection layer hurt the performance of CNNs for image SR? → Section 5.2
- Do the proposed methods produce perceptually superior SR results? → Sections 5.3-5.5

We initially illustrate the behaviour of the proposed algorithms on data where exact MAP inference is computationally tractable. Here the HR data y = [y1, y2] is drawn from a two-dimensional noisy Swiss-roll distribution and the one-dimensional LR data x is simply the average of the two HR pixels. Next we tested the proposed algorithms in a series of experiments on natural images using 4× downsampling. For the first dataset, we took random crops from HR images containing grass texture. SR of random textures is known to be very hard using MSE or MAE loss functions. Finally, we tested the proposed models on real image data of faces (CelebA) and natural images (ImageNet). All models were convolutional neural networks implemented using Theano (Team et al., 2016) and Lasagne (Dieleman et al., 2015). We refer to Appendix D for full experimental details.

"}, {"section_index": "6", "section_name": "5.1 2D MAP INFERENCE: SWISS-ROLL", "section_text": "In this experiment we wanted to demonstrate that AffGAN and AffDG are indeed minimising the MAP objective in Eqn. (9). For this we used the two-dimensional toy problem, where p_Y can be evaluated using brute-force Monte Carlo. Figure 1b shows the outputs for x ∈ [−8, 8] for models trained with different criteria. The AffGAN and AffDG solutions largely fit the dominant mode, similar to MAP inference. For the MSE and MAE models the output generally falls in regions with low prior density. Table 1 shows the cross-entropy H[q_θ, p_Y] achieved by the different methods, averaged over 10 independent trials with random initialisation. The cross-entropy values for the GAN- and DAE-based models are relatively close to the optimal MAP solution, which in this case we can find in a brute-force way. As expected, the MSE and MAE models perform worse, as these models do not minimise H[q_θ, p_Y]. We also calculated the average MSE between the network input and the downsampled network output. For the affine projected models, this error is exactly 0. The soft-constrained models only approximately satisfy this constraint, even after extensive training (Table 1, second column). Further, we observe that the affine projected models generally found a lower cross-entropy H[q_θ, p_Y] when compared to their soft-constrained versions.

[Figure 2 (plots): panels a-c track MSE(ŷ, y), SSIM(ŷ, y) and MSE(x, Aŷ) over training samples for the model variants; panel d shows the A (top) and A⁺ (bottom) kernels.]

Figure 2: CelebA performance for MSE models during training. The distance between the HR model output ŷ and the true HR image y using MSE in a) and SSIM in b). MSE in LR space between the input x and the downsampled model output Aŷ in c). The tuple in the legend indicates ((F)ixed / (T)rainable affine projection, (T)rained / (R)andom initialised affine projection). The models using pre-trained affine projections (fixed or trainable) always perform better in all metrics compared to models using either randomly initialised affine projections or no projection. Further, a fixed pre-trained affine projection ensures the best consistency between input and downsampled output, as seen in figure c). The A (top) and A⁺ (bottom) kernels of the affine projection are seen in d).

Adding the affine projection Π_A restricts the class of functions that the SR network can model, so it is important to verify that the network is still capable of achieving the same performance in SR as unconstrained CNN architectures. To test this, we trained CNNs with and without affine projections to perform SR on the CelebA dataset using MSE as the objective function. Results are shown in Figure 2. First note that when using affine projections, a randomly initialised network starts learning from a lower initial loss, as the low-frequency components of the network output already match those of the target image. We observed that the affine projected networks generally train faster than unconstrained ones. Furthermore, the affine projected networks tend to find a better solution as measured by MSE and SSIM (Figure 2a-b). To investigate which aspects of the network architecture are responsible for the improved performance, we evaluated two further models: in one variant, we initialise the affine projected CNN to implement the correct projection, but then treat A⁺ as a trainable parameter. In the final variant, we keep the architecture the same, but initialise the final deconvolution layer A⁺ randomly and allow it to be trained. We found that initialising A⁺ to the correct Moore-Penrose inverse is important, and we get similar results irrespective of whether or not it is fixed during training. Figure 2c shows the error between the network input and the downsampled network output. We can see that the exact affine projected network keeps this error at virtually 0.0 (up to numerical precision), whereas any other network will violate this consistency. In Figure 2d we show the downsampling kernel A and the corresponding optimal kernel for A⁺.
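For an explicit matrix A the optimum of the Appendix B objective l1 + l2 used to fit A⁺ is available in closed form, which the following numpy sketch verifies (our illustration with a hypothetical random A; the paper instead fits a convolutional A⁺ by stochastic gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 16)) / 4.0   # hypothetical downsampling matrix
B = np.linalg.pinv(A)                # optimum of l1(B) + l2(B)

# Monte Carlo versions of the Appendix B objectives, evaluated at the optimum:
y = rng.normal(size=(1000, 16))
x = rng.normal(size=(1000, 4))
l1 = np.mean(np.sum((y @ A.T - y @ A.T @ B.T @ A.T) ** 2, axis=1))  # E||Ay - ABAy||^2
l2 = np.mean(np.sum((x @ B.T - x @ B.T @ A.T @ B.T) ** 2, axis=1))  # E||Bx - BABx||^2
print(l1, l2)  # both ~ 0 up to floating-point error, since ABA = A and BAB = B
```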
"}, {"section_index": "7", "section_name": "5.3 GRASS TEXTURES", "section_text": "Random textures are known to be hard to model using an MSE loss function. Figure 3 shows 4× SR of grass texture patches using identical affine projected CNNs trained with different loss functions. When randomly initialised, affine projected CNNs always produce an output with the correct low-frequency components, as illustrated by the third panel, labelled Affinit, in Figure 3. The AffGAN model produces clearly the sharpest images, and we found the images to be plausible given the LR inputs. Notice that the reconstruction is not perfect pixel-by-pixel, but it has the correct statistical properties for the human visual system to recognise it as grass texture. The AffDG and AffLL models both produced blurry results, which we were unable to improve upon using various optimisation methods. Due to these findings we chose not to perform any further experiments with these models and concentrate on AffGAN instead.

Figure 3: 4× SR of grass textures. The top row shows the LR model input x, the true HR image y and the model outputs according to the figure legend. The bottom row shows a zoom-in on an excerpt from the images in the top row. The AffGAN image is much sharper than the somewhat blurry AffMSE image. Note that both AffDG and AffLL produce very blurry results. The Affinit panel shows the output from an untrained affine projected model, i.e. the baseline solution, illustrating the effect of the upsampling using A⁺.

"}, {"section_index": "8", "section_name": "5.4 CELEBA FACES", "section_text": "In Figure 4 the SR results are seen for several models trained using different loss functions. The MSE-trained models output somewhat generic and over-smoothed images, as expected. For the GAN models the global content is correct for both the affine projected and soft-constrained models. Comparing the AffGAN and SoftGAN outputs, the AffGAN model produces slightly sharper images, which however also seem to contain slightly more high-frequency noise. We observed some colour drifting for the soft-constrained models. Table 2 shows quantitative results for the same four models where, in terms of PSNR and SSIM, the MSE model achieves the best scores, as expected. The consistency between input and output clearly shows that the models using the affine projections satisfy Eqn. (5) better than the soft-constrained versions, for both MSE and GAN losses.

[Figure 4 (images): columns show x, MSE, y, AffMSE, SoftGAN and AffGAN outputs.]

Figure 4: 4× SR of CelebA faces. Model input x, target y and model outputs according to the figure legend. Both AffGAN and SoftGAN produce clearly sharper images than the blurry MSE outputs. We found that AffGAN outputs slightly sharper images compared to SoftGAN, however also with slightly more high-frequency noise.

Table 2 (caption): Models using the affine projections (Aff) clearly show better consistency ℓ_MSE(x, Aŷ) between the input x and the downsampled model output Aŷ than models not using the projection.

"}, {"section_index": "9", "section_name": "5.5 NATURAL IMAGES", "section_text": "In Figure 5 we show the results for 4× SR from 32×32 to 128×128 pixels for AffGAN trained on natural images from ImageNet. For most of the images the results are sharp and correspond well with the LR input. However, we still see the high-frequency noise present in most GAN results in some of the images. Interestingly, the snake depicted in the third column is super-resolved into water, which is obviously wrong but still a very plausible image considering the LR input image. Further, water will likely have a higher density under the image prior than snakes, which suggests that the GAN model dreams up reasonable data.

Figure 5: 4× SR from 32×32 to 128×128 using AffGAN on ImageNet. AffGAN outputs (top row), true HR images y (middle row), model input x (bottom row). Generally AffGAN produces plausible outputs which are, however, still easily distinguishable from true images. Interestingly, the snake depicted in the third column is super-resolved into water, which is obviously wrong but still a very plausible image considering the LR input image.

"}, {"section_index": "10", "section_name": "5.6 CRITICISM AND FUTURE DIRECTIONS", "section_text": "In this work we developed methods for approximate MAP inference in SR. We first introduced an architectural restriction for neural networks, projecting the model output to the affine subspace of valid solutions. We then proposed three methods, based on GANs, denoising or density models, for amortised MAP inference in SR using this affine projection. In high dimensions we empirically found that the GAN-based approach, AffGAN, produced the most visually appealing results. Our work follows successful demonstrations of GAN-based algorithms for image SR (Ledig et al., 2016), and we provide additional theoretical motivation for why this approach makes sense. In future work we plan to focus on a stochastic extension of AffGAN, which can be seen as performing amortised variational inference.

One argument against MAP inference is that the mode of a distribution is dependent on the representation: transforming a variable through an invertible transformation and performing MAP inference in the transformed space may lead to different answers depending on the transformation. As an extreme example, consider transforming a continuous random scalar Y with its cumulative distribution function F = P(Y ≤ ·). The resulting variable F(Y) is uniformly distributed, so any value in the interval (0, 1) can be the mode. Thus, the MAP estimate is not unique if one allows for alternative representations, and there is no guarantee that the MAP estimate in the 24-bit RGB pixel representation which we seek in this paper is in any way special. One may arrive at a different solution when performing MAP estimation in the feature space of a convolutional neural network, or even if merely an alternative colour space is used. Interestingly, AffGAN is more resilient to coordinate transformations: Eqn. (10) includes the extra term H[q_θ], which is affected by transformations the same way as H[q_θ, p_Y]. The second argument relates to the assumption that MAP estimates appear plausible. Although by definition the mode lies in a high-probability region, it does not guarantee that its appearance is anything like that of a random sample. Consider, for example, data drawn from a d-dimensional standard Normal distribution. Due to concentration of measure, as d increases the norm of a typical sample will be approximately √d with very high probability. The mode, however, has a norm of 0. In this sense, the mode of the distribution is highly atypical. Indeed, human observers can easily tell apart a typical sample from the noise distribution and the mode, but would have a hard time noticing the difference between two random samples. This argument suggests that sampling from the posterior p_{Y|X} may be a good or even preferable way to obtain plausible reconstructions. In Appendix F we establish a connection between variational inference, such as in variational autoencoders (Kingma & Welling, 2014), and a stochastic version of AffGAN, however leaving empirical studies as further work.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Guillaume Alain and Yoshua Bengio. What regularized auto-encoders learn from the data-generating distribution. Journal of Machine Learning Research, 15(1):3563-3593, 2014.

Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super-resolution with deep convolutional sufficient statistics. In International Conference on Learning Representations, 2016.

Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In The International Conference on Learning Representations, 2014.

Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.

Kamal Nasrollahi and Thomas B. Moeslund. Super-resolution: a comprehensive survey. Machine Vision and Applications, pp. 1423-1468, 2014.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Martin Arjovsky and Leon Bottou. Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations, 2017.

Sander Dieleman, Jan Schluter, Colin Raffel, Eben Olson, Soren Kaae Sonderby, Daniel Nouri, and Eric Battenberg. Lasagne: First release. 2015. URL http://dx.doi.org/10.5281/zenodo.27878.

Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis & Machine Intelligence, pp. 295-307, 2016.
Klaus Greff, Antti Rasmus, Mathias Berglund, Tele Hotloo Hao, Jurgen Schmidhuber, and Harri Valpola. Tagger: Deep unsupervised perceptual grouping. In Advances in Neural Information Processing Systems, 2016.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1747-1756, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations, 2015.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546-3554, 2015.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, 2016.

Jaakko Sarela and Harri Valpola. Denoising source separation. Journal of Machine Learning Research, pp. 233-272, 2005.

The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frederic Bastien, Justin Bayer, Anatoly Belikov, et al. Theano: A python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.

Lucas Theis, Reshad Hosseini, and Matthias Bethge. Mixtures of conditional gaussian scale mixtures applied to multiscale image representations. PLoS ONE, 2012.

Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, pp. 600-612, 2004.

Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1874-1883, 2016.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial lstms. In Advances in Neural Information Processing Systems, pp. 1927-1935, 2015.

"}, {"section_index": "12", "section_name": "A GENERATIVE ADVERSARIAL NETWORKS FOR MINIMISING KL-DIVERGENCE", "section_text": "First note that for a fixed generator G the discriminator D maximises:

    E_{y∼p_Y}[log D_ψ(y)] + E_{z∼N}[log(1 − D_ψ(G_θ(z)))]
        = E_{y∼p_Y}[log D_ψ(y)] + E_{y∼q_G}[log(1 − D_ψ(y))]
        = ∫ p_Y(y) log D_ψ(y) + q_G(y) log(1 − D_ψ(y)) dy

where q_G is the generative distribution. A function of the form a log(x) + b log(1 − x) always has its maximum at x = a/(a + b), and we find the Bayes-optimal discriminator to be (assuming equal prior class probabilities)

    D*(y) = p_Y(y) / (p_Y(y) + q_G(y))

Let us assume that this Bayes-optimal discriminator is unique and can be approximated closely by our neural network (see Appendix C for more discussion of this assumption).

Using the modified update rule proposed here, the combined optimisation problem for the discriminator and generator is

    V(ψ, θ) = max_ψ { E_{y∼p_Y}[log D_ψ(y)] + E_{z∼N}[log D_ψ(G_θ(z)) − log(1 − D_ψ(G_θ(z)))] }

Starting from the definition of KL[q_G||p_Y]:

    KL[q_G||p_Y] = E_{y∼q_G}[log(q_G(y)/p_Y(y))]
                 = E_{y∼q_G}[log((1 − D*(y))/D*(y))]    (inserting the Bayes-optimal classifier)
                 ≈ E_{y∼q_G}[log(1 − D_ψ(y)) − log D_ψ(y)]

which matches, up to sign, the terms affecting the generator in V(ψ, θ), so lowering them performs gradient descent on an estimate of KL[q_G||p_Y].

In practice we implement the down-sampling projection A as a strided convolution with a fixed Gaussian smoothing kernel, where the stride corresponds to the down-sampling factor. A⁺ is implemented as a transposed convolution operation with parameters optimised numerically via stochastic gradient descent on the following objective function:

    l1(B) = E_{y∼N_{d'}}[||Ay − ABAy||²]
    l2(B) = E_{x∼N_d}[||Bx − BABx||²]
    A⁺ = argmin_B { l1(B) + l2(B) }

where N_d is the d-dimensional standard normal distribution, and d is the dimensionality of the LR data x. l1 and l2 can be thought of as Monte Carlo estimates of the spectral norms of the transformations A − ABA and B − BAB, respectively. The Monte Carlo formulation above has the advantage that it can be optimised via stochastic gradient descent. The operation ABA can be thought of as a three-layer fully linear convolutional neural network, where A corresponds to a strided convolution with fixed kernels, while B is a trainable deconvolution. We note that for certain downsampling kernels A the exact A⁺ would have an infinitely large kernel, although it can always be approximated with a local kernel. At convergence we found l1 + l2 to be between 10⁻¹² and 10⁻⁸ depending on the down-sampling factor, the width of the Gaussian kernel used for A and the filter sizes of A and B.

"}, {"section_index": "13", "section_name": "B.2 GRADIENTS", "section_text": "The gradients of the affine projected SR models are derived by applying the chain rule:

    d g_θ(x)/dθ = d/dθ [(I − A⁺A) f_θ(x) + A⁺x] = (I − A⁺A) · d f_θ(x)/dθ

which is essentially a high-pass filtered version of the gradient of f_θ(x).

GANs are notoriously unstable to train, and several papers exist that try to improve their convergence properties (Salimans et al., 2016; Radford et al., 2015) via various tricks. Consider the following idealised GAN algorithm, each iteration consisting of the following steps:

1. we train the discriminator D via logistic regression between q_θ vs p_Y, until convergence;
2. we extract from D an estimate of the log-likelihood-ratio s(y) = log(q_θ(y)/p(y));
3. we update θ by taking a stochastic gradient step with objective function E_{y∼q_θ}[s(y)].

If q_θ and p_Y are well-conditioned distributions in a low-dimensional space, this algorithm performs gradient descent on an approximation to the KL divergence, so it should converge. So why is it highly unstable in practical situations?

Crucially, the convergence of this algorithm relies on a few assumptions that don't always hold: (1) that the log-likelihood-ratio log(q_θ(y)/p(y)) is finite, (2) that the Jensen-Shannon divergence JS[q_θ||p] is a well-behaved function of θ, and (3) that the Bayes-optimal solution to the logistic regression problem is unique. We stipulate that in real-world situations none of these holds, mainly because q_θ and p_Y are concentrated distributions whose support may not overlap. In image modelling, the distribution of natural images p_Y is often assumed to be concentrated on or around a lower-dimensional manifold. Similarly, q_θ is often degenerate by construction. The odds that the two distributions share support in high-dimensional space, especially early in training, are very small. If q_θ and p_Y have non-overlapping support, (1) the log-likelihood-ratio and therefore the KL divergence is infinite, (2) the Jensen-Shannon divergence is saturated, so it takes its maximum value and is locally constant in θ, and (3) there may be a large set of near-optimal discriminators whose logistic regression loss is very close to the Bayes optimum, but each of these possibly provides very different gradients to the generator. Thus, training the discriminator D might find a different near-optimal solution each time depending on initialisation, even for fixed q_θ and p_Y.
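The log-likelihood-ratio extraction in step 2 above follows from the form of the Bayes-optimal discriminator; a small numpy sketch (our illustration, with hypothetical unnormalised densities) checks the algebraic identity logit(D*) = log(p/q):

```python
import numpy as np

y = np.linspace(-3, 3, 7)
p = np.exp(-0.5 * y ** 2)            # "real" density (unnormalised, hypothetical)
q = np.exp(-0.5 * (y - 1.0) ** 2)    # "model" density (unnormalised, hypothetical)

d_star = p / (p + q)                 # Bayes-optimal discriminator D*(y)
logit = np.log(d_star) - np.log(1.0 - d_star)
assert np.allclose(logit, np.log(p / q))  # logit of D* recovers log p(y)/q(y)
```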
The main ways to avoid these pathologies involve making the discriminator's job harder. For example, in most GAN implementations the discriminator is only partially updated in each iteration, rather than trained until convergence. Another way to cripple the discriminator is adding label noise, or equivalently, one-sided label smoothing as introduced by Salimans et al. (2016). In this technique the labels in the discriminator's training data are randomly flipped. However, we do not believe these techniques adequately address all of the concerns described above.

In Figure 6a we illustrate two almost perfectly separable distributions. Notice how the large gap between the distributions means that there is a large number of possible classifiers that tell the two distributions apart and achieve similar logistic loss. The Bayes-optimal classifier may not be unique, and the set of near-optimal classifiers is very large and diverse. In Figure 6b we show the effect of one-sided label smoothing or, equivalently, adding label noise. In this technique, the labels of some real data samples y ∼ p_Y are flipped, so the discriminator is trained thinking they were samples from q_θ. The discriminator indeed has a harder task now, but all classifiers are penalised almost equally. As a result, there is still a large set of discriminators which achieve near-optimal loss; it is just that the near-optimal loss is now larger. Label smoothing does not help if the Bayes-optimal classifier is not unique.

Instead we propose to add noise to the samples, rather than the labels, which we denote instance noise. Using instance noise the support of the two distributions is broadened and they are no longer perfectly separable, as illustrated in Figure 6c. Adding noise, the Bayes-optimal discriminator becomes unique, the discriminator is less prone to overfitting because it has a wider training distribution, and the log-likelihood-ratio becomes better behaved. The Jensen-Shannon divergence between the noisy distributions is now a non-constant function of θ. Using instance noise, it is easy to construct an algorithm that minimises the following divergence:

    d_σ(q_θ, p_Y) = KL[p_σ ∗ q_θ || p_σ ∗ p_Y]

[Figure 6 (plots): densities p_Y and q_θ in a), one-sided label smoothing ((1 − η)q_θ + ηp_Y) in b), and the noise-convolved densities p_σ ∗ p_Y and p_σ ∗ q_θ in c).]

Figure 6: Illustration of samples from two non-overlapping distributions in a), with one-sided label smoothing in b) and instance noise in c). One-sided label smoothing shifts the optimal decision boundary, but p_Y still covers areas with no support in q_θ. Instance noise broadens the support of both distributions without biasing the optimal discriminator.

where σ is the parameter of the noise distribution. Logistic regression on the noisy samples provides an estimate of this divergence. We know that, if p_σ is Gaussian, d_σ is a Bregman divergence, and that it is 0 if and only if the two distributions are equal. Because of the added noise, d_σ is less sensitive to local features of the distributions. We found that in our experiments instance noise helped the convergence of AffGAN. We have not tested instance noise in the generative modelling application. Because we do not have to worry about over-training the discriminator, we can train it until convergence, or take more gradient steps between subsequent updates to the generator. One critical hyper-parameter of this method is the noise distribution. We used additive Gaussian noise, whose variance we annealed during training. We propose a heuristic annealing schedule where the noise is adapted so as to keep the optimal discriminator's loss constant during training. It is possible that other noise distributions such as heavy-tailed or spike-and-slab would work better, but we have not investigated these options.

"}, {"section_index": "14", "section_name": "LOSS FUNCTIONS", "section_text": "For the GAN models the generative and discriminative parameters were updated using Eqn. (10). For the models enforcing Eqn. (5) using a soft constraint, we added an extra MAE loss term for the generative parameters, ℓ_MAE = (1/N) Σ_i ||x_i − Aŷ_i||₁, where i runs over the N data samples.

The denoiser-guided models were trained in a two-step procedure. Initially we pre-trained a DAE to denoise samples from the data distribution by minimising

    ℓ_DAE = (1/N) Σ_i ℓ_MSE(f_DAE(ỹ_i), y_i),   ỹ = y + ε,  ε ∼ N(0, σ²I)

During training we anneal the noise level σ and continuously save the model parameters of the DAE f_DAE trained at increasingly smaller noise levels. We then learn the parameters of the generator by following the gradient in Eqn. (11), using the DAE to estimate ∂ log p(y)/∂y:

    ∂/∂θ E_x[log p(y)] ≈ E_x[ ((f_DAE(y) − y)/σ²)ᵀ · ∂y/∂θ ],   y = Π_A f_θ(x)
    θ_{i+1} = θ_i + α · ∂/∂θ E_x[log p(y)]

where α is the learning rate. During training we continuously load parameters of the DAE trained at increasingly low noise levels, to get gradients pointing in approximately the correct direction at the beginning of training while covering a large data space, and precise gradients close to the data manifold at the end of training.

For the density-guided models we first pre-train a density model by maximising the tractable log-likelihood

    L(y) = Σ_j log p(y_j | y_{<j})

where the joint density has been decomposed using the chain rule and j runs over the pixels. Similar to the DAE, we continuously save the parameters of the density model during training. We then learn the parameters of the generator by directly minimising the negative log-likelihood of the generated samples under the learned density model:

    ℓ = −L(ŷ) = −L(Π_A f_θ(x))

The 2D target data y = [y1, y2] was sampled from the 2D Swiss-roll defined as:

    v1 ∼ N(μ1, σ1),  v2 ∼ N(μ2, σ2)
    r = 0.4 v1 + v2
    y = [cos(v1) · r, sin(v1) · r]

H[q_θ, p_Y] values were calculated by estimating the probability density function using a Gaussian kernel density estimator fitted to 50,000 samples from a noiseless Swiss-roll density, i.e. σ2 = 0, and setting the bandwidth of each kernel to σ² = 0.2. All generators and discriminators were 2-layer fully connected NNs with 64 units in each layer. For the AffDG model the DAE was a two-layer NN with 256 units in each layer, trained while annealing the standard deviation of the Gaussian noise from 0.5 to 0.25.

"}, {"section_index": "15", "section_name": "IMAGE DATA", "section_text": "For all image experiments we set A to a convolution using a Gaussian smoothing kernel of size 9×9 with a stride of 4, corresponding to 4× downsampling. A⁺ was set to a convolution operation with 4² kernels of size 5×5 followed by a reordering of the pixels, with the output corresponding to a 4× upsampling convolution as described in (Shi et al., 2016).
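A minimal numpy sketch of such a fixed downsampler A (our reading of the setup: a 9×9 Gaussian blur applied with stride 4; the kernel width sigma is a hypothetical choice, as the paper does not state it):

```python
import numpy as np

def gaussian_kernel(size=9, sigma=1.5):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()  # normalise so the blur preserves mean intensity

def downsample(img, stride=4, size=9, sigma=1.5):
    """Strided Gaussian convolution: blur each output location, keep every 4th."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty((h // stride, w // stride))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = padded[i * stride: i * stride + size,
                           j * stride: j * stride + size]
            out[i, j] = (patch * k).sum()
    return out

x = downsample(np.random.default_rng(0).random((64, 64)))
print(x.shape)  # (16, 16): 4x downsampling
```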
The parameters of A⁺ were optimised numerically as described in Appendix B. All downsampling was done using the A projection. For all image models we used convolutional architectures with ReLU nonlinearities and batch normalisation in all layers except the output. All generators used skip connections similar to (Huang et al., 2016), and a final sigmoid nonlinearity was applied to the output of the model, which was either used directly or fed through the affine transformation layers parameterised by A and A⁺. The discriminators were standard convolutional networks followed by a final sigmoid layer.

For the grass texture experiments we used randomly extracted patches of data from high-resolution grass texture images. The generators used 6 layers of convolutions with 32, 32, 64, 64, 128, … filter maps and skip connections after every second layer. The discriminators had four layers of strided convolutions with 32, 64, 128 and 256 filter maps. For the AffDG model the DAE was a four-layer convolutional network with 128 filter maps in each layer, trained while annealing the standard deviation of the Gaussian noise from 0.5 to 0.01. The density model was implemented as a PixelCNN similar to Oord et al. (2016), with four layers of convolutions
with 64 filter maps and kernel sizes of 5, except for the first layer, which used 7. The original PixelCNN uses a non-differentiable categorical distribution as the likelihood model, which is why it cannot be used for gradient-based optimisation. Instead we used an MCGSM as the likelihood model (Theis & Bethge, 2015), which has been shown to be a good density model for images (Theis et al., 2012), using 32 mixture components and 32 quadratic features to approximate the covariance matrices.

For the CelebA experiments the dataset was split into train, validation and test sets using the standard splitting. All images were center-cropped and resized to 64×64 before downsampling to 16×16 using A. All generators were 12-layer convolutional networks with four layers each of 128, 256 and 512 filter maps and skip connections between every fourth layer. The discriminators were 8-layer convolutional nets with two layers each of 128, 256, 512 and 1024 filter maps, using a stride of 2 for every second layer.

For the ImageNet experiments the 2012 dataset was randomly split into train, validation and test sets, with 10⁴ samples in the test and validation sets. All images below 20 kB were then discarded, to remove images with too low resolution. The images were center-cropped and resized to 128×128 before downsampling to 32×32 using A. The generator was an 8-layer convolutional network with 4 layers each of 128 and 256 filter maps and skip connections between every second layer. The discriminators were 8-layer convolutional nets with two layers each of 128, 256, 512 and 1024 filter maps, using a stride of 2 for every second layer. To stabilise training we used Gaussian instance noise, linearly annealed from an initial standard deviation of 0.1 to 0. We were unable to stably train models without this extra regularisation.

"}, {"section_index": "16", "section_name": "ADDITIONAL RESULTS FOR DENOISER AND DENSITY GUIDED SUPER-RESOLUTION", "section_text": "Figure 7 shows the PSNR and SSIM scores during training for the AffDG and AffLL models trained on the grass textures. Note that the models are converging but, as seen in Figure 3, the images are very blurry. For both models we had problems with diverging training.

[Figure 7 (plots): PSNR and SSIM versus training samples for the AffLL and AffDG models.]

Figure 7: PSNR and SSIM results for the AffDG and AffLL models. Note that the step-like behaviour of the AffDG model is due to the change of the DAE model with continuously lower noise levels.

For the DAE models with high noise levels the gradients are only approximately correct but cover a large space around the data manifold, whereas for small noise levels the gradients are more accurate in a small space around the data manifold. For the density model we believe a similar phenomenon is making the training diverge, since for accurate density models the estimated density is likely very peaked around the data manifold, making learning in the beginning of training difficult. To resolve these issues we started training using models with high noise levels or low log-likelihood values and then loaded model parameters during training with continuously smaller noise levels or better log-likelihood values. The effect of this can be clearly seen during training as the step-like behaviour of AffDG in Figure 7. We note that the density model used for training AffLL achieved a log-likelihood of −4.10 bits per dimension, which is comparable to values obtained in Theis & Bethge (2015) on a texture dataset. Further, the AffLL model achieved high log-likelihood values > −3.5 under this model, suggesting that the density model is simply not providing an accurate enough representation of p_Y to provide precise scores for training the AffLL model.

Figure 8: 4× SR from 32×32 to 128×128 using AffGAN on ImageNet. AffGAN outputs (top row), true HR images y (middle row), model input x (bottom row).

"}, {"section_index": "17", "section_name": "I AMORTISED VARIATIONAL INFERENCE USING AFFGAN", "section_text": "Here we show that a stochastic extension of the AffGAN model approximately minimises an amortised variational inference criterion, as in e.g. variational autoencoders, which for the first time establishes a connection between adversarial methods of inference and variational inference. We introduce a variant of AffGAN where, in addition to the LR data x, the generator function also takes as input some independent noise variable z:

    z ∼ p_Z,   ŷ = Π_A f_θ(x, z)

Similarly to how we defined q_θ in Section 3.1, we introduce the following notation: q_{Y|X;θ} is the conditional distribution of outputs implied by the stochastic generator, and

    q_{X,Y;θ} := p_X · q_{Y|X;θ},   q_{Y;θ} := E_{x∼p_X}[q_{Y|X;θ}]

Here the affine projection ensures that, under q_{X,Y;θ}, x and y are always consistent. Therefore, under q_{X,Y;θ}, the conditional of x given y is the same as the likelihood p_{X|Y} = δ(x − Ay) by construction, and the following equality holds:

    q_{X,Y;θ} = q_{Y;θ} · p_{X|Y} = p_X · q_{Y|X;θ}

Starting from the definition of KL[q_{Y;θ}||p_Y] and using the equality above:

    KL[q_{Y;θ}||p_Y] = E_{q_{Y;θ}}[log(q_{Y;θ}(y)/p_Y(y))]
                    = E_{q_{X,Y;θ}}[log(q_{Y;θ}(y)/p_Y(y))]
                    = E_{q_{X,Y;θ}}[log(q_{Y|X;θ}(y|x)/p_{Y|X}(y|x))]
                    = E_{p_X}[ KL[q_{Y|X;θ}||p_{Y|X}] ]

Therefore we can conclude that the AffGAN algorithm described in Section 3.2 approximately minimises the following amortised variational inference criterion:

    argmin_θ KL[q_{Y;θ}||p_Y] = argmin_θ E_{x∼p_X}[ KL[q_{Y|X;θ}||p_{Y|X}] ]

and in doing so it only requires samples from p_Y and p_X."}]
HkNEuToge | [{"section_index": "0", "section_name": "ENERGY-BASED SPHERICAL SPARSE CODING", "section_text": "Bailey Kong and Charless C. Fowlkes, Department of Computer Science, University of California, Irvine, Irvine, CA 92697 USA"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "In this paper, we explore an efficient variant of convolutional sparse coding with unit-norm code vectors, where reconstruction quality is evaluated using an inner product (cosine distance). To use these codes for discriminative classification, we describe a model we term Energy-Based Spherical Sparse Coding (EB-SSC), in which the hypothesized class label introduces a learned linear bias into the coding step. We evaluate and visualize the performance of stacking this encoder to make a deep layered model for image classification."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Sparse coding has been widely studied as a representation for images, audio and other vectorial data. It has been a highly successful method that has found its way into many applications, from signal compression and denoising (Donoho, 2006; Elad & Aharon, 2006) to image classification (Wright et al., 2009), to modeling neuronal receptive fields in visual cortex (Olshausen & Field, 1997). Since its introduction, subsequent works have brought sparse coding into the supervised learning setting by introducing classification loss terms to the original formulation to encourage features that are not only able to reconstruct the original signal but are also discriminative (Jiang et al., 2011; Yang et al., 2010; Zeiler et al., 2010; Ji et al., 2011; Zhou et al., 2012; Zhang et al., 2013).

While supervised sparse coding methods have been shown to find more discriminative features, leading to improved classification performance over their unsupervised counterparts, they have received much less attention in recent years and have been eclipsed by simpler feed-forward architectures.

This is in part because sparse coding is computationally expensive. Convex formulations of sparse coding typically consist of a minimization problem over an objective that includes a least-squares (LSQ) reconstruction error term plus a sparsity-inducing regularizer. Because there is no closed-form solution to this formulation, various iterative optimization techniques are generally used to find a solution (Zeiler et al., 2010; Bristow et al., 2013; Yang et al., 2013; Heide et al., 2015). In applications where an approximate solution suffices, there is work that learns non-linear predictors to estimate sparse codes rather than solve the objective more directly (Gregor & LeCun, 2010). The computational overhead of iterative schemes becomes quite significant when training discriminative models, due to the demand of processing many training examples necessary for good performance, and so sparse coding has fallen out of favor by not being able to keep up with simpler non-iterative coding methods.

In this paper we introduce an alternate formulation of sparse coding using unit-length codes and a reconstruction loss based on the cosine similarity. Optimal sparse codes in this model can be computed in a non-iterative fashion, and the coding objective lends itself naturally to embedding in a discriminative, energy-based classifier, which we term energy-based spherical sparse coding (EB-SSC). This bi-directional coding method incorporates both top-down and bottom-up information, where the feature representation depends on both a hypothesized class label and the input signal.
Like[Cao et al.(2015), our motivation for bi-directional coding comes from the \"Biased Competition. Theory', which suggests that visual processing can be biased by other mental processes (e.g., top-. down influence) to prioritize certain features that are most relevant to current task. Fig.[1illustrates. the flow of computation used by our SSC and EB-SSC building blocks compared to a standard. feed-forward layer."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Our energy based approach for combining top-down and bottom-up information is closely tied to. the ideas of Larochelle & Bengio (2008); Ji et al.(2011);Zhang et al.[(2013);Li & Guo[(2014] although the model details are substantially different (e.g., Ji et al.(2011) andZhang et al. (2013) use sigmoid non-linearities while Li & Guo (2014) use separate representations for top-down anc bottom-up information). The energy function of Larochelle & Bengio(2008) is also similar but. includes an extra classification term and is trained as a restricted Boltzmann machine..\nFigure 1: Building blocks for coding networks explored in this paper. Our coding model uses non-linearities that are closely related to the standard ReLU activation function. (a) Keeping both positive and negative activations provides a baseline feed-forward model termed concatenated ReLU (CReLU). (b) Our spherical sparse coding layer has a similar structure but with an extra bias and normalization step. Our proposed energy-based model uses (c) energy-based spherical sparse coding (EB-SSC) blocks that produces sparse activations which are not only positive and negative, but are class-specific. These blocks can be stacked to build deeper architectures."}, {"section_index": "3", "section_name": "1.1 NOTATION", "section_text": "Matrices are denoted as uppercase bold (e.g., A), vectors are lowercase bold (e.g., a), and scalars. are lowercase (e.g., a). We denote the transpose operator with t, the element-wise multiplicatior. operator with O, the convolution operator with *, and the cross-correlation operator with *. For vec tors where we dropped the subscript k (e.g., d and z), we refer to a super vector with K components stacked together (e.g., z = [zI, ..., zK]).\nEnergy-based models capture dependencies between variables using an energy function that measure the compatibility of the configuration of variables (LeCun et al.l|2006). To measure the compatibility between the top-down and bottom-up information, we define the energy function of EB-SSC to be. the sum of bottom-up coding term and a top-down classification term:."}, {"section_index": "4", "section_name": "2.1 BOTTOM-UP RECONSTRUCTION", "section_text": "To measure the compatibility between the input signal x and the latent feature maps z, we introduce. a novel variant of sparse coding that is amenable to efficient feed-forward optimization. While the idea behind this variant can be applied to either patch-based or convolutional sparse coding, we. specifically use the convolutional variant that shares the burden of coding an image among nearby. overlapping dictionary elements. Using such a shift-invariant approach avoids the need to learn dic. tionary elements which are simply translated copies of each other, freeing up resources to discover. more diverse and specific filters (seeKavukcuoglu et al.(2010)).\nx+ Convolution x - Convolution Pos. Neg. Convolution Class Bias Class Bias Neg. ReLU Neg. ReLU ReLU Neg. 
Convolutional sparse coding (CSC) attempts to find a set of dictionary elements {d_1, ..., d_K} and corresponding sparse feature maps {z_1, ..., z_K} whose sum of convolutions accurately represents the input signal x. This is traditionally framed as a least-squares minimization with a sparsity-inducing prior on z:
\[ \arg\min_{z} \; \Big\| x - \sum_{k=1}^{K} d_k * z_k \Big\|_2^2 + \beta \|z\|_1. \tag{2} \]
Unlike standard feed-forward CNN models that convolve the input signal x with the filters, this energy function corresponds to a generative model where the latent feature maps {z_1, ..., z_K} are convolved with the filters and compared to the input signal (Bristow et al. 2013; Heide et al. 2015; Zeiler et al. 2010).
To motivate our novel variant of CSC, consider expanding the squared reconstruction error ||x − r||² = ||x||² − 2xᵀr + ||r||². If we constrain the reconstruction r to have unit norm, the reconstruction error depends entirely on the inner product between x and r and is equivalent to the cosine similarity (up to additive and multiplicative constants). This suggests the closely related unit-length reconstruction problem:
\[ \arg\max_{z} \; x^\top \Big( \sum_{k=1}^{K} d_k * z_k \Big) - \beta \|z\|_1 \quad \text{s.t.} \quad \Big\| \sum_{k=1}^{K} d_k * z_k \Big\|_2 \le 1. \tag{3} \]
In Appendix A we show that, given an optimal unit-length reconstruction r* with corresponding codes z*, the solution to the least-squares reconstruction problem (Eq. 2) can be computed by a simple scaling, r̄ = ½ (xᵀr* − β||z*||₁) r*.
The unit-length reconstruction problem is no easier than the original least-squares optimization due to the constraint on the reconstruction, which couples the codes for different filters. Instead consider a simplified constraint on z which we refer to as spherical sparse coding (SSC):
\[ \arg\max_{\|z\|_2 \le 1} E_{code}(x, z) = \arg\max_{\|z\|_2 \le 1} \; x^\top \Big( \sum_{k=1}^{K} d_k * z_k \Big) - \beta \|z\|_1. \tag{4} \]
This problem is a relaxation of convolutional sparse coding since it ignores non-orthogonal interactions between the dictionary elements¹. Alternately, assuming unit-norm dictionary elements, the code norm constraint can be used to upper-bound the reconstruction length. We have by the triangle and Young's inequality that:
\[ \Big\| \sum_{k=1}^{K} d_k * z_k \Big\|_2 \;\le\; \sum_{k=1}^{K} \| d_k * z_k \|_2 \;\le\; \sum_{k=1}^{K} \| d_k \|_2 \| z_k \|_1 \;\le\; \sqrt{D} \sum_{k=1}^{K} \| z_k \|_2, \tag{5} \]
where D is the dimension of z_k; the √D factor arises from switching from the 1-norm to the 2-norm, and we use ||d_k||₂ = 1. Since √D Σ_k ||z_k||₂ ≤ 1 is a tighter constraint we have
\[ \max_{\| \sum_k d_k * z_k \|_2 \le 1} E_{code}(x, z) \;\ge\; \max_{\sum_k \|z_k\|_2 \le 1/\sqrt{D}} E_{code}(x, z). \tag{6} \]
However, this relaxation is very loose, primarily due to the triangle inequality. Except in special cases (e.g., if the dictionary elements have disjoint spectra) the SSC codes will be quite different from the standard least-squares reconstruction.
¹ We note that our formulation is also closely related to the dynamical model suggested by Rozell et al. (2008), but without the dictionary-dependent lateral inhibition between feature maps. Lateral inhibition can solve the unit-length reconstruction formulation of standard sparse coding but requires iterative optimization.
2.2 TOP-DOWN CLASSIFICATION
To measure the compatibility between the class label y and the latent feature maps z, we use a set of one-vs-all linear classifiers. To provide more flexibility, we generalize this by splitting the code vector into positive and negative components,
\[ z_k = z_k^+ + z_k^-, \qquad z_k^+ \ge 0, \qquad z_k^- \le 0, \tag{7} \]
and allow the linear classifier to operate on each component separately. We express the classifier score for a hypothesized class label y by:
\[ E_{class}(y, z) = \sum_{k=1}^{K} w_{yk}^{+\top} z_k^+ + \sum_{k=1}^{K} w_{yk}^{-\top} z_k^-. \tag{8} \]
The classifier thus is parameterized by a pair of weight vectors (w_{yk}^+ and w_{yk}^-) for each class label y and k-th channel of the latent feature map.
This splitting, sometimes referred to as full-wave rectification, is useful since a dictionary element and its negative do not necessarily have opposite visual semantics.
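As a concrete illustration of the split in Eq. 7 and the classifier score in Eq. 8, here is a minimal NumPy sketch of ours; the shapes and weights are invented for the example:

```python
import numpy as np

def split_codes(z):
    """Full-wave rectification (Eq. 7): z = z_plus + z_minus, z_plus >= 0, z_minus <= 0."""
    z_plus = np.maximum(z, 0.0)
    z_minus = np.minimum(z, 0.0)
    assert np.allclose(z, z_plus + z_minus)
    return z_plus, z_minus

def e_class(z, w_plus, w_minus):
    """E_class(y, z) = sum_k w+_{yk}^T z_k^+ + sum_k w-_{yk}^T z_k^-  (Eq. 8).

    z: (K, F) codes; w_plus, w_minus: (K, F) weights for one hypothesized class y.
    """
    z_p, z_m = split_codes(z)
    return float(np.sum(w_plus * z_p) + np.sum(w_minus * z_m))

rng = np.random.default_rng(1)
z = rng.normal(size=(4, 10))
w_plus, w_minus = rng.normal(size=(4, 10)), rng.normal(size=(4, 10))
score = e_class(z, w_plus, w_minus)
```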
This splitting also allows the classifier the flexibility to assign distinct meanings to the two components or alternately to be completely invariant to contrast reversal, depending on the problem domain. For example, Shang et al. (2016) found that CNN models with ReLU non-linearities, which discard the negative activations, tend to learn pairs of filters which are related by negation. Keeping both positive and negative responses allowed them to halve the number of dictionary elements.
We note that it is also straightforward to introduce spatial average pooling prior to classification by introducing a fixed linear operator P used to pool the codes (e.g., w_{yk}^{+\top} P z_k^+). This is motivated by a variety of hand-engineered feature extractors and sparse coding models, such as Ren & Ramanan (2013), which use spatially pooled histograms of sparse codes for classification. This fixed pooling can be viewed as a form of regularization on the linear classifier which enforces shared weights over spatial blocks of the latent feature map. Splitting is also quite important to prevent information loss when performing additive pooling, since positive and negative components of z_k can otherwise cancel each other out."}, {"section_index": "5", "section_name": "2.3 CODING", "section_text": "Bottom-up reconstruction and top-down classification each provide half of the story, coupled by the latent feature maps. For a given input x and hypothesized class y, we would like to find the optimal activations z that maximize the joint energy function E(x, y, z). This requires solving the following optimization:
\[ \arg\max_{\|z\|_2 \le 1} \; x^\top \Big( \sum_{k=1}^{K} d_k * z_k \Big) - \beta \|z\|_1 + \sum_{k=1}^{K} w_{yk}^{+\top} z_k^+ + \sum_{k=1}^{K} w_{yk}^{-\top} z_k^-, \tag{9} \]
where x ∈ R^D is an image and y ∈ Y is a class hypothesis. z_k ∈ R^F is the k-th component latent variable being inferred; z_k^+ and z_k^- are the positive and negative coefficients of z_k, such that z_k = z_k^+ + z_k^-; and w_{yk}^+ and w_{yk}^- are the positive-coefficient classifier and negative-coefficient classifier for the k-th component, respectively. A key aspect of our formulation is that the optimal codes can be found very efficiently in closed form--in a feed-forward manner (see Appendix B for a detailed argument)."}, {"section_index": "6", "section_name": "2.3.1 ASYMMETRIC SHRINKAGE", "section_text": "To describe the coding process, let us first define a generalized version of the shrinkage function commonly used in sparse coding. Our asymmetric shrinkage is parameterized by upper and lower thresholds −β⁻ ≤ β⁺:
\[ \mathrm{shrink}_{(\beta^+, \beta^-)}(v) = \begin{cases} v - \beta^+ & \text{if } v - \beta^+ > 0 \\ v + \beta^- & \text{if } v + \beta^- < 0 \\ 0 & \text{otherwise} \end{cases} \tag{10} \]
Figure 2: Comparing the behavior of asymmetric shrinkage for different settings of β⁺ and β⁻. Panels (a)-(c) satisfy the condition that −β⁻ ≤ β⁺ while (d) does not.
Fig. 2 shows a visualization of this function, which generalizes the standard shrinkage proximal operator by allowing distinct positive and negative thresholds. In particular, it corresponds to the proximal operator for a version of the ℓ1-norm that penalizes the positive and negative components with different weights, ||v||_asym = β⁺||v⁺||₁ + β⁻||v⁻||₁. The standard shrink operator corresponds to shrink_{(β,β)}(v), while the rectified linear unit common in CNNs is given by a limiting case, shrink_{(0,∞)}(v). We note that −β⁻ ≤ β⁺ is required for shrink_{(β⁺,β⁻)} to be a proper function (see Fig. 2).
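The shrinkage function of Eq. 10 is simple to implement and test numerically. The sketch below (ours) checks the piecewise definition against the two-ReLU identity used in Sec. 2.3.2, as well as the soft-thresholding and ReLU special cases:

```python
import numpy as np

def shrink(v, beta_plus, beta_minus):
    """Asymmetric shrinkage of Eq. 10 (elementwise); requires -beta_minus <= beta_plus."""
    out = np.zeros_like(v)
    out = np.where(v - beta_plus > 0, v - beta_plus, out)
    out = np.where(v + beta_minus < 0, v + beta_minus, out)
    return out

def shrink_two_relus(v, beta_plus, beta_minus):
    """Equivalent form from Sec. 2.3.2: ReLU(v - b+) - ReLU(-(v + b-))."""
    relu = lambda t: np.maximum(t, 0.0)
    return relu(v - beta_plus) - relu(-(v + beta_minus))

v = np.linspace(-3.0, 3.0, 13)
assert np.allclose(shrink(v, 1.0, 0.5), shrink_two_relus(v, 1.0, 0.5))
# Special case shrink_(beta,beta): standard symmetric soft-thresholding.
assert np.allclose(shrink(v, 1.0, 1.0), np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0))
# Limiting case shrink_(0,inf): the ReLU.
assert np.allclose(shrink(v, 0.0, np.inf), np.maximum(v, 0.0))
```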
"}, {"section_index": "7", "section_name": "2.3.2 FEED-FORWARD CODING", "section_text": "We now describe how codes can be computed in a simple feed-forward pass. Let
\[ \beta_{yk}^+ = \beta \mathbf{1} - w_{yk}^+, \qquad \beta_{yk}^- = \beta \mathbf{1} + w_{yk}^- \tag{11} \]
be vectors of positive and negative biases whose entries are associated with a spatial location in the feature map k for class y. The optimal code z can be computed in three sequential steps:
1. Cross-correlate the data with the filterbank: d_k ⋆ x.
2. Apply the asymmetric shrinkage of Eq. 10 with the class-specific biases: z̃_k = shrink_{(β_{yk}^+, β_{yk}^-)}(d_k ⋆ x).
3. Project the result onto the unit sphere: z = z̃ / ||z̃||₂.
We note that this formulation of coding has a close connection to a single-layer convolutional neural network (CNN). A typical CNN layer consists of convolution with a filterbank followed by a non-linear activation such as a rectified linear unit (ReLU). ReLUs can be viewed as another way of inducing sparsity, but rather than coring the values around zero like the shrink function, ReLU truncates negative values. On the other hand, the asymmetric shrink function can be viewed as the sum of two ReLUs applied to appropriately biased inputs:
\[ \mathrm{shrink}_{(\beta^+, \beta^-)}(x) = \mathrm{ReLU}(x - \beta^+) - \mathrm{ReLU}(-(x + \beta^-)). \tag{12} \]
SSC coding can thus be seen as a CNN in which the ReLU activation has been replaced with shrinkage followed by a global normalization."}, {"section_index": "8", "section_name": "3 LEARNING", "section_text": "We formulate supervised learning using the softmax log-loss that maximizes the energy for the true class label y_i while minimizing the energy of incorrect labels ȳ:
\[ \min_{d, w^\pm, \beta} \; \frac{\alpha}{2}\big(\|d\|_2^2 + \|w^+\|_2^2 + \|w^-\|_2^2\big) + \frac{1}{N} \sum_{i=1}^{N} \Big[ -\max_{\|z\|_2 \le 1} E(x_i, y_i, z) + \log \sum_{\bar{y} \in \mathcal{Y}} \exp\Big( \max_{\|z\|_2 \le 1} E(x_i, \bar{y}, z) \Big) \Big] \quad \text{s.t. } \beta \ge w_{yk}^+, \; \beta \ge -w_{yk}^- \;\; \forall y, k, \tag{13} \]
where the constraints hold entrywise and α is the hyperparameter regularizing w⁺, w⁻, and d. We constrain the relationship between β and the entries of w⁺ and w⁻ in order for the asymmetric shrinkage to be a proper function (see Sec. 2.3.1 and Appendix B for details).
Note that unlike classical sparse coding, where β is a hyperparameter that is usually set using cross-validation, we treat it as a parameter of the model that is learned to maximize performance."}, {"section_index": "9", "section_name": "3.1 OPTIMIZATION", "section_text": "In order to solve Eq. 13, we explicitly formulate our model as a directed-acyclic-graph (DAG) neural network with shared weights, where the forward-pass computes the sparse code vectors and the backward-pass updates the parameter weights. We optimize the objective using stochastic gradient descent (SGD).
Re-expressing the energy in terms of shifted classifier weights w̃⁺, w̃⁻ and per-channel offsets b (defined below) gives
\[ E'(x, y, z) = x^\top \Big( \sum_{k=1}^{K} d_k * z_k \Big) - \sum_{k=1}^{K} b_k \mathbf{1}^\top z_k - \sum_{k=1}^{K} \tilde{w}_{yk}^{+\top} z_k^+ + \sum_{k=1}^{K} \tilde{w}_{yk}^{-\top} z_k^-, \]
where b_k is a constant offset for each code channel. The modified linear "classification" terms now take on a dual role of inducing sparsity and measuring the compatibility between z and y. This yields a modified learning objective that can easily be solved with existing implementations for learning convolutional neural nets.
In classical sparse coding, it is typical to constrain the ℓ2-norm of each dictionary filter to unit length. Our spherical coding objective behaves similarly. For any optimal code z*, there is a 1-dimensional subspace of parameters for which z* is optimal, given by scaling d inversely to w⁺, w⁻. For simplicity of the implementation, we opt to regularize d to assure a unique solution. However, as Tygert et al. (2015) point out, it may be advantageous from the perspective of optimization to explicitly constrain the norm of the filter bank.
A straightforward alternative would be to learn the thresholds β⁺ and β⁻ of Eq. 10 directly. However, the inequality constraint on their relationship that keeps the shrinkage function a proper function is difficult to enforce when optimizing with SGD.
Instead, we introduce a central offset parameter and reduce the ordering constraint to a pair of positivity constraints. Let
\[ \tilde{w}_{yk}^+ = \beta_{yk}^+ - b_k \mathbf{1}, \qquad \tilde{w}_{yk}^- = \beta_{yk}^- + b_k \mathbf{1} \tag{14} \]
be the modified linear "classifiers" relative to the central offset b_k. It is straightforward to see that if β⁺_{yk} and β⁻_{yk} satisfy the constraint in Eq. 13, then adding the same value to both sides of the inequality does not change that. However, taking b_k to be a midpoint between −β⁻_{yk} and β⁺_{yk}, both β⁺_{yk} − b_k and β⁻_{yk} + b_k will be strictly non-negative.
\[ \min_{d, \tilde{w}^\pm, b} \; \frac{\alpha}{2}\big(\|d\|_2^2 + \|\tilde{w}^+\|_2^2 + \|\tilde{w}^-\|_2^2\big) + \frac{1}{N} \sum_{i=1}^{N} \Big[ -\max_{\|z\|_2 \le 1} E'(x_i, y_i, z) + \log \sum_{\bar{y} \in \mathcal{Y}} \exp\Big( \max_{\|z\|_2 \le 1} E'(x_i, \bar{y}, z) \Big) \Big] \quad \text{s.t. } \tilde{w}_{yk}^+ \ge 0, \; \tilde{w}_{yk}^- \ge 0 \;\; \forall y, k, \tag{15} \]
where w̃⁺ and w̃⁻ are the new sparsity-inducing classifiers, and the b_k are the arbitrary origin points. In particular, adding the K origin points allows us to enforce the constraint by simply projecting w̃⁺ and w̃⁻ onto the positive orthant during SGD."}, {"section_index": "10", "section_name": "3.1.1 STACKING BLOCKS", "section_text": "We also examine stacking multiple blocks of our energy function in order to build a hierarchical representation. As mentioned in Sec. 2.3.2, the optimal codes can be computed in a simple feed-forward pass--this applies to shallow versions of our model. When stacking multiple blocks of our energy-based model, solving for the optimal codes cannot be done in a feed-forward pass since the codes for different blocks are coupled (bilinearly) in the joint objective. Instead, we can proceed in an iterative manner, performing block-coordinate descent by repeatedly passing up and down the hierarchy updating the codes. In this section we investigate the trade-off between the number of passes used to find the optimal codes for the stacked model and classification performance.
For this purpose, we train multiple instances of a 2-block version of our energy-based model that differ in the number of iterations used when solving for the codes. For recurrent networks such as this, inference is commonly implemented by "unrolling" the network, where parts of the network structure are repeated with parameters shared across these repeated parts to mimic an iterative algorithm that stops at a fixed number of iterations rather than at some convergence criterion.
Figure 3: Comparing the effects of unrolling a 2-block version of our energy-based model: (a) training objective and (b) test error versus training epoch, for models unrolled zero to four times. (Best viewed in color.)
In Fig. 3, we compare the performance between models that were unrolled zero to four times. We see that there is a difference in performance based on how many sweeps of the variables are made. In terms of the training objective, more unrolling produces models that have lower objective values, with convergence after only a few passes. In terms of testing error, however, we see that full code inference is not necessarily better, as unrolling once or twice has lower errors than unrolling three or four times. The biggest difference was between not unrolling and unrolling once, where both the training objective and testing error go down. The testing error decreases from 0.0131 to 0.0074. While there is a clear benefit in terms of performance for unrolling at least once, there is also a trade-off between performance and computational resources, especially for deeper models."}, {"section_index": "11", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate the benefits of combining top-down and bottom-up information to produce class-specific features on the CIFAR-10 (Krizhevsky & Hinton 2009) dataset using a deep version of our EB-SSC. All experiments were performed using the MatConvNet (Vedaldi & Lenc 2015) framework with the ADAM optimizer (Kingma & Ba 2014).
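One training detail from Sec. 3.1 worth making explicit before describing the data is the positive-orthant projection that keeps the shrinkage a proper function. A minimal sketch of one projected update follows; this is our NumPy pseudocode, the learning rate is a placeholder, and the gradients would come from backpropagation through the coding layers:

```python
import numpy as np

def sgd_step_with_projection(w_plus, w_minus, g_plus, g_minus, lr=1e-2):
    """One SGD update on the shifted classifiers w~+ and w~-, followed by
    projection onto the positive orthant (Sec. 3.1), i.e., an elementwise
    clamp to non-negative values after the gradient step."""
    w_plus = w_plus - lr * g_plus
    w_minus = w_minus - lr * g_minus
    return np.maximum(w_plus, 0.0), np.maximum(w_minus, 0.0)
```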
The data was preprocessed and augmented following the procedure in Goodfellow et al. (2013). Specifically, the data was made zero mean and whitened, and augmented with horizontal flips (with a 0.5 probability) and random cropping. No weight decay was used, but we used a dropout rate of 0.3 before every convolution layer except for the first. For these experiments we consider a single forward pass (no unrolling).
Table 1: Underlying block architecture common across all models we evaluated. SSC networks add an extra normalization layer after the non-linearity, and EB-SSC networks insert class-specific bias layers between the convolution layer and the non-linearity. Concatenated ReLU (CReLU) splits positive and negative activations into two separate channels rather than discarding the negative component as in the standard ReLU."}, {"section_index": "12", "section_name": "4.1 CLASSIFICATION", "section_text": "We compare our proposed EB-SSC model to that of Springenberg et al. (2015), which uses rectified linear units (ReLU) as its non-linearity. This model can be viewed as a basic feed-forward version of our proposed model, which we take as a baseline. We also consider variants of the baseline model that utilize a subset of the architectural features of our proposed model (e.g., concatenated rectified linear units (CReLU) and spherical normalization (SN)) to understand how subtle design changes of the network architecture affect performance.
We describe the model architecture in terms of the feature extractor and classifier. Table 1 shows the overall network architecture of the feature extractors, which consist of seven convolution blocks and two pooling layers. We test two possible classifiers: a simple linear classifier (LC) and our energy-based classifier (EBC), and use the softmax loss for all models. For linear classifiers, a numerical subscript indicates which of the seven conv blocks of the feature extractor is used for classification (e.g., LC7 indicates that the activations out of the last conv block are fed into the linear classifier). For energy-based classifiers, a numerical subscript indicates which conv blocks of the feature extractor are replaced with an energy-based classifier (e.g., EBC6-7 indicates that the activations out of conv5 are fed into the energy-based classifier, and the energy-based classifier has a similar architecture to the conv blocks it replaces). The notations differ because for energy-based classifiers the optimal activations are a function of the hypothesized class label, whereas for linear classifiers they are not.
Table 2: Comparison of the baseline ReLU+LC7 model, its derivative models, and our proposed model on CIFAR-10.
The results shown in Table 2 compare our proposed model to the baselines ReLU+LC7 (Springenberg et al. 2015) and CReLU+LC7 (Shang et al. 2016), and to intermediate variants. The baseline models all perform very similarly, with some small reductions in error rate over the baseline ReLU+LC7. However, CReLU+LC7 reduces the error rate over ReLU+LC7 by more than one percent (from 11.40% to 10.17%), which confirms the claims by Shang et al. (2016) and demonstrates the benefits of splitting positive and negative activations. Likewise, we see a further decrease in the error rate (to 9.74%) from using spherical normalization.
Though normalizing the activations doesn't add any capacity to the model, this improved performance is likely because scale-invariant activations makes training easier. On the other hand, further sparsifying the activations yielded nc"}, {"section_index": "13", "section_name": "4.2 DECODING CLASS-SPECIFIC CODES", "section_text": "A unique aspect of our model is that it is generative in the sense that each layer is explicitly trying tc encode the activation pattern in the prior layer. Similar to the work on deconvolutional networks built on least-squares sparse coding (Zeiler et al.|2010), we can synthesize input images from activations. in our spherical coding network by performing repeated deconvolutions (transposed convolutions). back through the network. Since our model is energy based, we can further examine how the top. down information of a hypothesized class effects the intermediate activations..\nno bias plane car bird cat deer dog frog horse ship truck Conva ConrA Convs Convr\nFigure 4: The reconstruction of an airplane image from different levels of the network (rows) across different hypothesized class labels (columns). The first column is pure reconstruction, i.e., unbiased by a hypothesized class label, the remaining columns show reconstructions of the learned class bias at each layer for one of ten possible CIFAR-10 class labels. (Best viewed in color.)\nThe first column in Fig.4|visualizes reconstructions of a given input image based on activations from different layers of the model by convolution transpose. In this case we put in zeros for class biases (i.e., no top-down) and are able to recover high fidelity reconstructions of the input. In the remaining columns, we use the same deconvolution pass to construct input space representations o1 the learned classifier biases. At low levels of the feature hierarchy, these biases are spatially smooth since the receptive fields are small and there is little spatial invariance capture in the activations. A higher levels these class-conditional bias fields become more tightly localized.\nFinally, in Fig.5|we shows decodings from the conv2 and conv5 layer of the EB-SSC model for a. given input under different class hypotheses. Here we subtract out the contribution of the top-down. bias term in order to isolate the effect of the class conditioning on the encoding of input features As visible in the figure, the modulation of the activations focused around particular regions of the. image and the differences across class hypotheses becomes more pronounced at higher layers of the. network."}, {"section_index": "14", "section_name": "5 CONCLUSION", "section_text": "We presented an energy-based sparse coding method that efficiently combines cosine similarity, convolutional sparse coding, and linear classification. Our model shows a clear mathematical con- nection between the activation functions used in CNNs to introduce sparsity and our cosine similar- ity convolutional sparse coding formulation. Our proposed model outperforms the baseline model and we show which attributes of our model contributes most to the increase in performance. We also demonstrate that our proposed model provides an interesting framework to probe the effects of class-specific coding."}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "benefit. We tested values = {0.001, 0.01} and found 0.001 to perform better. Replacing the linear classifier with our energy-based classifier further decreases the error rate by another half percent (to. 
9.23%).\n2 (a) conv2 (b) conv5\nFigure 5: Visualizing the reconstruction of different input images (rows) for each of 10 different class hypotheses (cols) from the 2nd and 5th block activations for a model trained on MNIST digit classification.\nDavid L Donoho. Compressed sensing. IEEE Transactions on information theory. 2006\nKarol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In International Conference on Machine Learning (ICML), 2010.\nFelix Heide, Wolfgang Heidrich, and Gordon Wetzstein. Fast and flexible convolutional sparse coding. In Computer Vision and Pattern Recognition (CVPR). 2015\nZhengping Ji, Wentao Huang, G. Kenyon, and L.M.A. Bettencourt. Hierarchical discriminative sparse coding via bidirectional connections. In International Joint Converence on Neural Net. works (IJCNN), 2011.\nZhuolin Jiang, Zhe Lin, and Larry S Davis. Learning a discriminative dictionary for sparse coding via label consistent K-SVD. In Computer Vision and Pattern Recognition (CVPR), 2011.\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009\nMichael Elad and Michal Aharon. Image denoising via sparse and redundant representations ove. learned dictionaries. IEEE Transactions on Image processing, 2006.\nHugo Larochelle and Yoshua Bengio. Classification using discriminative restricted boltzmann ma chines. In International conference on Machine learning (ICML), 2008.\nYann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy-base learning. Predicting structured data, 2006.\nBruno A Olshausen and David J Field. Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision research, 1997.\nJianchao Yang, Kai Yu, and Thomas Huang. Supervised translation-invariant sparse coding. Ir Computer Vision and Pattern Recognition (CVPR), 2010.\nNing Zhou, Yi Shen, Jinye Peng, and Jianping Fan. Learning inter-related visual dictionary for object recognition. In Computer Vision and Pattern Recognition (CVPR), 2012.\nXiaofeng Ren and Deva Ramanan. Histograms of sparse codes for object detection. In Computer Vision and Pattern Recognition (CVPR), 2013.\nChristopher J Rozell, Don H Johnson, Richard G Baraniuk, and Bruno A Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural computation. 2008\nYangmuzi Zhang, Zhuolin Jiang, and Larry S Davis. Discriminative tensor sparse coding for image classification. In British Machine Vision Conference (BMVC), 2013."}, {"section_index": "16", "section_name": "APPENDIX A", "section_text": "Here we show that spherical sparse coding (SsC) with a norm constraint on the reconstruction is equivalent to standard convolutional sparse coding (CsC). Expanding the least squares reconstruc-. tion error and dropping the constant term x 2 gives the CSC problem:\nLet e = Ik=1 dx * zk|2 be the norm of the reconstruction for some code z and let u be th reconstruction scaled e to have unit norm so that:\nWe rewrite the least-sq! uares objective in terms of these new variables:\nTaking the derivative of q w.r.t. 
e yields the optimal scaling e* as a function of z\nmax g(z,e) = max x`u z 2 z,e>0 z,1|u|2=1\nDiscarding solutions with e < 0 can be achieved by simply dropping the square which results in the final constrained problem:"}, {"section_index": "17", "section_name": "APPENDIX B", "section_text": "We show in this section that coding in the EB-SSC model can be solved efficiently by a combination of convolution, shrinkage and projection, steps which can be implemented with standard libraries on a GPU. For convenience, we first rewrite the objective in terms of cross-correlation rather than convolution (i.e., , x(d * zk) = (dx * x)z). For ease of understanding, we first consider the coding problem when there is no classification term.\n= arg max vz |z1 Z l|z|?1\nFrom the partial subderivative of the Lagrangian w.r.t. z; we derive the optimal solution as a functior of X; and from that find the conditions in which the solutions hold, giving us:\nK K K max 2x(dk*Zk)-|>dz*zk|2->|zk|1 Z k=1 k=1 k=1\nK K lk * Zk 1 k L - dz*Zk with u z z= -z *Zk e k=1\nmax 2e(xu le1) 2 z,e>0\ne(Z)*=xu Ie1: 2\nK K |ze|1 arg max x' dk 2 z k=1 k=1 K s.t. k=1\nL(z,) =v'z-||z||1+ X(1- ||zI?)\nUi Uj > 1 0 otherwise 2\\ Vi+B Vi <\nThis can also be compactly written as:\nSubstituting z(X)* back into the Lagrangian we get:\n1 1 L(z(X)*,X) = lel1+ X(1 212) 2X 2\\ 4\\2\n1 z, 2\\ v - 39\nwhere s = sign(z*) E {-1, 0,1}|z| and s2 = s O s E {0,1}|zl. The sign vector of z* can be determined without knowing , as A is a Lagrangian multiplier for an inequality it must be non-. negative and therefore does not change the sign of the optimal solution. Lastly, we define the squared l2-norm of z, a result that will be used later:.\n|z|l2 = z(s2 Ov) - zs =zTvD|z1\ndL(z(X)* 1 1 e1 1+1+ ax 2\\2 2\\2 4X2\n1 1 1 14|1 llell? 2 4 1 2 2\nz * z lz|2"}] |
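As a numerical sanity check on the feed-forward solution (cross-correlate, shrink, normalize; Sec. 2.3.2), the following sketch (ours, with made-up dimensions and no classification term, so the shrinkage is symmetric) verifies that the closed-form code scores at least as well as random unit-norm codes under the spherical sparse coding objective of Eq. 4:

```python
import numpy as np

rng = np.random.default_rng(2)
K, F, D = 6, 5, 40
d = rng.normal(size=(K, F))
d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit-norm dictionary
x = rng.normal(size=D + F - 1)                  # so that 'full' conv matches len(x)
beta = 0.3

def objective(z):
    """E_code(x, z) = x^T (sum_k d_k * z_k) - beta * ||z||_1  (Eq. 4)."""
    recon = sum(np.convolve(z[k], d[k], mode="full") for k in range(K))
    return float(x @ recon - beta * np.abs(z).sum())

# Closed-form SSC code: cross-correlate, shrink (symmetric, no class term), normalize.
v = np.stack([np.correlate(x, d[k], mode="valid") for k in range(K)])
z_tilde = np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)
z_star = z_tilde / np.linalg.norm(z_tilde)

best_random = -np.inf
for _ in range(1000):
    z = rng.normal(size=(K, D))
    best_random = max(best_random, objective(z / np.linalg.norm(z)))
assert objective(z_star) >= best_random
```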
r1y1aawlg | [{"section_index": "0", "section_name": "ITERATIVE REFINEMENT FOR MACHINE TRANSLATION", "section_text": "Roman Novak\nMichael Auli. Facebook AI Research Menlo Park, CA.\nEcole polytechnique Palaiseau, France\nExisting machine translation decoding algorithms generate translations in a. strictly monotonic fashion and never revisit previous decisions. As a result, ear. lier mistakes cannot be corrected at a later stage. In this paper, we present a. translation scheme that starts from an initial guess and then makes iterative im provements that may revisit previous decisions. We parameterize our model as. a convolutional neural network that predicts discrete substitutions to an existing. translation based on an attention mechanism over both the source sentence as wel as the current translation output. By making less than one modification per sen. tence, we improve the output of a phrase-based translation system by up to O.. BLEU on WMT15 German-English translation.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Existing decoding schemes for translation generate outputs either left-to-right, such as for phrase- based or neural translation models, or bottom-up as in syntactic models (Koehn et al.]2003f |Galley et al.[[2004]Bahdanau et al.][2015). All decoding algorithms for those models make decisions which cannot be revisited at a later stage, such as when the model discovers that it made an error earlier on.\nOn the other hand, humans generate all but the simplest translations by conceiving a rough draft of the solution and then iteratively improving it until it is deemed complete. The translator may modify a clause she tackled earlier at any point and make arbitrary modifications to improve the translation.\nIn this paper, we present models that tackle translation similar to humans. The model iterative] edits the target sentence until it cannot improve it further. As a preliminary study, we address tl roblem of finding mistakes in an existing translation via a simple classifier that predicts if a wol n a translation is correct (2). Next, we model word substitutions for an existing translation via convolutional neural network that attends to the source when suggesting substitutions (3). Finall we devise a model that attends both to the source as well as to the existing translation (d4). W epeatedly apply the models to their own output by determining the best substitution for each wor n the previous translation and then choosing either one or zero substitutions for each sentence. Fc the latter we consider various heuristics as well as a classifier-based selection method (d5).\nOur results demonstrate that we can improve the output of a phrase-based translation system o WMT15 German-English data by up to 0.4 BLEU (Papineni et al.2002) by making on averag only 0.6 substitutions per sentence (d6)\nOur approach differs from automatic post-editing since it does not require post-edited text which is a scarce resource (Simard et al.|2007|Bojar et al.2016). For our first model (3) we merely require parallel text and for our second model (d4) the output of a baseline translation system.\nFacebook AI Research Menlo Park, CA\nIt can be argued that beam search allows to recover from mistakes, simply by providing alternative translations. However, reasonable beam sizes encode only a small number of binary decisions. 
A beam of size 50 contains fewer than six binary decisions, all of which frequently share the same prefix (Huang2008"}, {"section_index": "2", "section_name": "2 DETECTING ERRORS", "section_text": "In the following, we use lowercase boldface for vectors (e.g. x), uppercase boldface for matrices (e.g. F) and calligraphy for sets (e.g. X). We use superscripts for indexing or slicing, e.g., x, Fi,J Fi - (Fi,1, ..., Fi,|F'I). We further denote x as the source sentence, yg as the guess translation from which we start and which was produced by a phrase-based translation system (6.1), and yref as the reference translation. Sentences are vectors of indices indicating entries in a source vocabulary or a target vocabulary V. For example, x = (x1,..., x*) E | with I = {1,..., |t[}. We omit biases of linear layers to simplify the notation.\nError detection focuses on word-level accuracy, i.e., we predict for each token in a given translation. if it is present in the reference or not. This metric ignores word order, however, we hope that perfor. mance on this simple task provides us with a sense of how difficult it will be to modify translations to a positive effect. A token y' in the candidate translation yg is deemed correct iff it is present in. the reference translation: y? E yref. We build a neural network f to predict correctness of each token. in yg given the source sentence x:.\nArchitecture. We use an architecture similar to the word alignment model of Legrand et al.(2016). The source and the target sequences are embedded via a lookup table that replace each word type. with a learned vector. The resulting vector sequences are then processed by alternating convolutions. and non-linearities. This results in a vector S (x)' representing each position i in the source x and. a vector T (yg)' representing each position j in the target yg. These vectors are then compared via. a dot product. Our prediction estimates the probability of a target word being correct as the largest dot product between any source word and the guess word. We apply the logistic function o to this. Score,\nf(x,yg S(x)T(yg -0 max 1<j<|x\nTraining. At training time we minimize the cross-entropy loss, with the binary supervision 1 for y% E yref, 0 otherwise.\nTesting. At test time we threshold the model prediction f(x, yg)' to detect mistakes. We compar the performance of our network to the following baselines:\nWe report word-level accuracy metrics in Table[1] While the model significantly improves over th baselines, the probability of correctly labeling a word as a mistake remains low (62.71%). The tas of predicting mistakes is not easy as previously shown in confidence estimation (Blatz et al.2004 Ueffing & Ney2007). Also, one should bear in mind that this task cannot be solved with 100% accuracy since a sentence can be correctly in multiple different ways and we only have a sing. reference translation. In our case, our final refinement objective might be easier than error detectio as we do not need to detect all errors. We need to identify some of the locations where a substitutio could improve BLEU. At the same time, our strategy should also suggest these substitutions. Thi is the objective of the model introduced in the next section.\nWe introduce a model to predict modifications to a translation which can be trained on bilingual text. In d5|we discuss strategies to iteratively apply this model to its own output in order to improve a translation.\n1. 
Predicting that all candidate words are always correct fcor = 1, or always incorrect fwrong = 0. 2. The prior probability of a word being correct based on the training data fstat(y) (P [y E yref y E yg> 0.5).\nMetric (%) fcor fwrong f J stat Accuracy 68.0 32.0 71.3 76.0 Recall 0.00 100.00 36.0 61.3 Precision 100.0 32.0 58.4 62.7 F1 0.00 48.4 44.5 62.0\nTable 1: Accuracy of the error detection model f compared to baselines on the concatenation of the WMT test sets from 2008 to 2015. For precision, recall and F1 we consider a positive prediction. as labeling a word as a mistake. Baseline fcor labels all words as correct, fwrong labels all words as. incorrect, fstat labels a word from yg based on the prior probability estimated on the training data..\nOur model F takes as input a source sentence x and a target sentence y, and outputs a distribution word j E Y, F(x, y)i, estimates P(y' = j |x, y-i), the probability of word j being at position i given the source and the target context y-i = (y1 ,..., yi-1, yi+1.. , y|y|) surrounding i. In other words, we learn a non-causal language model (Bengio et al.| 2003) which is also conditioned on the source x.\nArchitecture. We rely on a convolutional model with attention. The source sentence is embeddec. into distributional space via a lookup table, followed by convolutions and non-linearities. The targe. sentence is also embedded in distributional space via a lookup table, followed by a single convolutior. and a succession of linear layers and non-linearities. The target convolution weights are zeroed at the. center so that the model does not have access to the center word. This means that the model observe where 2k + 1 refers to the convolution kernel width. These operations result in a vector S repre. senting each position j in the source sentence x and a vector T' representing each target contex. y-i|k\nGiven a target position i, an attention module then takes as input these representation and outputs a weight for each target position\nexp(S . Ti a(i,j) z C X '=1 exp(Sj' . Ti)\nThese weights correspond to dot-product attention scores 1Luong et al 2015 Rush et al.| 2015) The attention weights allow to compute a source summary specific to each target context through a weighted sum,\nx `a(i,j) S j=1\nFinally, this summary a(y-i|k,x) is concatenated with the embedding of the target context y-i|k obtained from the target lookup table,\nTraining. The model is trained to maximize the (log) likelihood of the pairs (x, yref) from th training set.\nTesting. At test time the model is given (x, yg), i.e., the source and the guess sentence. Similar to. maximum likelihood training for left-to-right translation systems (Bahdanau et al.2015), the model is therefore not exposed to the same type of context in training (reference contexts from yref) and. testing (guess contexts from yg)\nDiscussion. Our model is similar to the attention-based translation approach of Bahdanau et al. 2015). In addition to using convolutions, the main difference is that we have access to both left and\nOur model F takes as input a source sentence x and a target sentence y, and outputs a distribution\nand a multilayer perceptron followed by a softmax computes F(x, y)' from a(y-i|k, x), L(y-i|k) Note that we could alternatively use T' instead of L(y-i|k) but our preliminary validation experi- ments showed better result with the lookup table output.\nright target context y-i|k since we start from an initial guess translation. Right target words are of. 
course good predictors of the previous word. For instance, an early validation experiment with the setup from d6.1|showed a perplexity of 5.4 for this model which compares to 13.9 with the same model trained with the left context only.."}, {"section_index": "3", "section_name": "4 DUAL ATTENTION MODEI", "section_text": "We introduce a dual attention architecture to also make use of the guess at training time. Thi contrasts with the model introduced in the previous section where the guess is not used durin. training. Also, we are free to use the entire guess, including the center word, compared to th reference where we have to remove the center word..\nAt training time, the dual attention model takes 3 inputs, that is, the source, the guess and the reference. At test time, the reference input is replaced by the guess. Specifically, the model sentence.\nArchitecture. The model builds upon the single attention model from the previous section by. having two attention functions a with distinct parameters. The first function asource takes the. source sentence x and the reference context yrf to produce the source summary for this con-. guess sentence yg and the reference context yref and produces a guess summary for this context. (y-i|k, yg) . These two summaries are then concatenated with the lookup representation of aguess the reference context L (yref~i|k) and input to a final multilayer perceptron followed by a softmax. The reference lookup table contains the only parameters shared by the two attention functions\nTraining. This model is trained similarly to the single attention model, the only difference bein the conditioning on the guess yg.\nTesting. At test time, the reference is unavailable and we replace with ya. i.e.. the model i"}, {"section_index": "4", "section_name": "ITERATIVE REFINEMENT", "section_text": "Applying a single substitution changes the context of the surrounding words and requires updating the model predictions. We therefore perform multiple rounds of substitution. At each round, the model computes its predictions, then our refinement strategy selects a substitution and performs it unless the strategy decides that it can no longer improve the target sentence. This means that the refinement procedure should be able to (i) prioritize the suggested substitutions, and (ii) decide to stop the iterative process.\nWe determine the best edit for each position i in yg by selecting the word with the highest probability estimate: ypred = arg maxjey F (x, yg),j . Then we compute a confidence score in this prediction s(yg, ypred)' , possibly considering the prediction for the current guess word at the same position.\nTesting. At test time, the reference is unavailable and we replace yref with yg, i.e., the model is -i|k) to make a prediction at position i. In this case, the distribution shift when going given (x, yg, Yg from training to testing is less drastic than in 3|and the model retains access to the whole yg via attention.\nDiscussion. Compared to the single attention model (&3), this model reduces perplexity from 5.4. to 4.1 on our validation set. Since the dual attention model can attend to all guess words, it can copy any guess word if necessary. In our dataset, 68% of guess words are in the reference and can. therefore be copied. This also means that for the remaining 32% of reference tokens the model should not copy. Instead, the model should propose a substitution by itself (d6.1). During testing. the fact that the guess is input twice (x, yg, yg t|^. 
-ik) ) means that the guess and the prediction context. always match. This makes the model more conservative in its predictions, suggesting tokens from yg more often than the single attention model. However, as we show in 6l this turns out beneficial. in our setting.\nThese scores are used to select the next position to edit, i* = arg max; s(yg, Ypred)' and to stop the iterative process, i.e., when the confidence falls below a validated threshold t. We also limit the number of substitutions to a maximum of N. We consider different heuristics for s,\nrent word y: Spr(yg, Ypred)' = F(x, yg);,ypred X.Y\nWe compare the above strategies, different score thresholds t, and the maximum number of modifi cations per sentence allowed N in 6.2\nWe first describe our experimental setup and then discuss our results\nData. We perform our experiments on the German-to-English WMT15 task (Bojar et al.] 2015 and benchmark our improvements against the output of a phrase-based translation system (PBMT Koehn et al. 2007) on this language pair. In principle, our approach may start from any initial guess translation. We chose the output of a phrase-based system because it provides a good starting point that can be computed at high speed. This allows us to quickly generate guess translations for the millions of sentences in our training set.\nAll data was lowercased and numbers were mapped to a single special \"number\"' token. Infrequen tokens were mapped to an \"unknown\"' token which resulted in dictionaries of 120K and 170K words. for English and German respectively.\nFor training we used 3.5M sentence triples (source, reference, and the guess translation output by the PBMT system). A validation set of 180K triples was used for neural network hyper-parameter selection and learning rate scheduling. Finally, two 3K subsets of the validation set were used to train the classifier discussed in g5land to select the best model architecture (single vs dual attention) and refinement heuristic.\nImplementation. All models were implemented in Torch (Collobert et al.2011) and trained witl stochastic gradient descent to minimize the cross-entropy loss..\nFor the error detection model in 2|we used two temporal convolutions on top of the lookup table each followed by a tanh non-linearity to compute S(x) and T(yg). The output dimensions o each convolution was set to 256 and the receptive fields spanned 5 words, resulting in final output summarizing a context of 9 words.\nFor the single attention model we set the shared context embedding dimension dim S = dim T? - 512 and use a context of size k = 4 words to the left and to the right, resulting in a window of size 9 for the source and 8 for the target. The final multilayer perceptron has 2 layers with a hidden dimension of 512, see (3)\nFor the dual attention model we used 2-layer context embeddings (a convolution followed by a linear with a tanh in between), each having output dimension 512, context of size k = 4. The\nScore positions based On the model confidence in i.e., Ored Sconf(Yg, Ypred)' = F(x, yg)i,y pred . Look for high confidence in the suggested substitution y?. red and low confidence in the. current word y?: spr(yg, Ypred)' = F(x, yg),yped Train a simple binary classifier taking as input the score of the best predicted word and the. current guess word: sc(yg, Ypred)' = nn (log F(x, yg)iypred, log F(x, yg)o,y! 
,where nn is a 2-layer neural network trained to predict whether a substitution leads to an increase in BLEU or not.\nThe initial guess translations were generated with phrase-based systems trained on the same training data as our refinement models. We decoded the training data with ten systems, each trained on 90% of the training data in order to decode the remaining 10%. This procedure avoids the bias of. generating guess translation with a system that was trained on the same data..\nTable 2: Validation BLEU (selecting substitution heuristics, decision thresholds t, and number ol maximum allowed modifications N). BLEU is reported on a 3,041 validation sentences..\nTable 3: Test accuracy on WMT test sets after applyi our refinement procedure\nfinal multilayer perceptron has 2 layers with a hidden dimension of 1024, see d4). In this setup, we replaced dot-product attention with MLP attention (Bahdanau et al.[2015) as it performed better on the validation set.\nAll weights were initialized randomly apart from the word embedding layers, which were pre computed with Hellinger Principal Component Analysis (Lebret & Collobert2014) applied to th bilingual co-occurrence matrix constructed on the training set. The word embedding dimension wa set to 256 for both languages and all models.\nTable2|compares BLEU of the single and dual attention models (F vs Fdual) over the validation se1 It reports the performance for the best threshold t E {0, 0.1, ..., 1} and the best maximum numbe of modifications per sentence N E {0, 1,..., 10} for the different refinement heuristics. The bes performing configuration is Fdual with the product-based heuristic spr thresholded at t = 0.5 for uj to N = 5 substitutions. We report test performance of this configuration in table[3] Tables|4[|5|an 6|show examples of system outputs. Overall the system obtains a small but consistent improvemen over all the test sets.\nFigure[1(left) plots accuracy versus the number of allowed substitutions and Figure[1(right) shows the percentage of actually modified tokens. The dual attention model (d4) outperforms single atten. tion (3). Both models achieve most of improvement by making only 1-2 substitutions per sentence.. Thereafter only very few substitutions are made with little impact on BLEU. Figure[1(right) shows that the models saturate quickly, indicating convergence of the refinement output to a state where the models have no more suggestions..\nTo isolate the model contribution from the scoring heuristic, we replace the scoring heuristic witl an oracle while keeping the rest of the refinement strategy the same. We consider two types o. oracle: The full oracle takes the suggested substitution for each position and then selects whicl. single position should be edited or whether to stop editing altogether. This oracle has the potentia to find the largest BLEU improvement. The partial oracle does not select the position, it just take. the heuristic suggestion for the current step and decides whether to edit or stop the process. Notice. that both oracles have very limited choice, as they are only able to perform substitutions suggestec. 
by our model.\nModel Heuristic Best t Best N BLEU PBMT Baseline 30.02 Sconf 0.8 3 30.21 F Spr 0.7 3 30.20 Sc1 0.5 1 30.19 Sconf 0.6 7 30.32 F dual Spr 0.5 5 30.35 Sc1 0.4 2 30.33\nnewstest PBMT BLEU Our BLEU 2008 21.29 21.60 0.31 2009 20.42 20.74 0.32 2010 22.82 23.13 0.31 2011 21.43 21.65 0.22 2012 21.78 22.10 0.32 2013 24.99 25.37 0.38 2014 22.76 23.07 0.31 2015 24.40 24.80 0.40 Mean 22.49 22.81 0.32\n30.4 3 2.5 30.3 2 30.2 BEEU 1.5 30.1 1 0.5 30 PBMT baseline Single confidence Single_confidence Dual product Dual product 0 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 Maximum substitutions per sentence allowed Substitution step\nFigure 1: Left: BLEU as a function of the total number of substitutions allowed per sentence Values are reported on a small 3K validation set for the single and dual attention models using the best scoring heuristic s and threshold t. Right: Percentage of modified tokens on the validation sel as a function of the total number of substitutions allowed per sentence. All models modify fewer than 2.5% of tokens.\n32.5 32.5 32 32 31.5 31.5 L37 31 31 30.5 30.5 Dual product Single confidence 30 30 Dual full oracle Single full oracle Dual partial oracle Single partial oracle 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 Maximum substitutions per sentence allowed Maximum substitutions per sentence allowed\nIn the single-attention setup the oracles yields a higher improvement (+2.37 and +1.3) and they alsc perform more substitutions. This supports our earlier conjecture () that Fdual is more conservative and prone to copying words from the guess yg compared to the single attention model. While helpful in validation, the cautious nature of the dual model restricts the options of the oracle.\nWe make several observations. First, word-prediction models provide high-quality substitutions Ypred that can lead to a significant improvements in BLEU (despite that both oracles are limited in. their choice of ypred). This is supported by the simple heuristic Sconf performing very close to more sophisticated strategies (Table2)\nFigure 2: BLEU as a function of the total number of substitutions allowed per sentence Left: best dual-attention refinement strategy (Dual_product) versus two oracles. Thefull oracle (Dual_full_oracle) accepts as input ypred and selects a single i to substitute yg := arg max1<j<|yg! S(yg, Ypred) but has the ability to prevent substitution y? := ypred if it does not. improve BLEU. Right: same for the best single attention setup..\nFigure 2 reports the performance of our best single and dual attention models compared to both oracles on the validation set; Figure|3|shows the corresponding number of substitutions. The full and partial oracles result in an improvement of +1.7 and +1.09 BLEU over the baseline in the dual attention setting (compared to +0.35 with spr)\n3 3 2.5 2.5 2 2 1.5 1.5 1 1 0.5 0.5 Dual product. Single confidence Dual full oracle Single full oracle Dual_partial_oracle Single_partial_oracle 0 0 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 Substitution step Substitution step.\nFigure 3: Percentage of modified tokens as a function of total number of substitutions allowed pei sentence for the dual attention model (left) and the single attention model (right) compared to the full and partial oracles (cf. Figure2)\nSecond, it is important to have a good confidence estimate on whether a substitution will improve. BLEU or not. The full oracle, which yields +1.7 BLEU, acts as an estimate to having a real-. 
valued confidence measure and replaces the scoring heuristic s. The partial oracle, yielding +1.09 BLEU, assesses the benefit of having a binary-valued confidence measure. The latter oracle can only. prevent our model from making a BLEU-damaging substitution. However, confidence estimation is. a difficult task as we found in 2\nFinally, we demonstrate that a significant improvement in BLEU can be achieved through very few substitutions. The full and partial oracle modify only 1.69% and 0.99% of tokens, or 0.4 and 0.24 modifications per sentence, respectively. Of course, oracle substitution assumes access to the reference which is not available at test time. At the same time, our oracle is more likely to generate fluent sentences since it only has access to substitutions deemed likely by the model as opposed to an unrestricted oracle that is more likely to suggest improvements leading to unreasonable sentences Note that our oracles only allow substitutions (no deletions or insertions), and only those that raise BLEU in a monotonic fashion, with each single refinement improving the score of the previous translation."}, {"section_index": "5", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "We present a simple iterative decoding scheme for machine translation which is motivated by the inability of existing models to revisit incorrect decoding decisions made in the past. Our model. improve an initial guess translation via simple word substitutions over several rounds. At each round. the model has access to the source as well as the output of the previous round, which is an entire translation of the source. This is different to existing decoding algorithms which make predictions based on a limited partial translation and are unable to revisit previous erroneous decoding decisions\nOur results increase translation accuracy by up to 0.4 BLEU on WMT15 German-English translation. and modify only 0.6 words per sentence. In our experimental setup we start with the output of a phrase-based translation system but our model is general enough to deal with arbitrary guess. translations.\nWe see several future work avenues from here. Experimenting with different initial guess transla. tions such as the output of a neural translation system, or even the result of a simple dictionary-based word-by-word translation scheme. Also one can envision editing a number of guess translations si- multaneously by expanding the dual attention mechanism to attend over multiple guesses..\nSo far we only experimented with word substitution, one may add deletion, insertion or even swaps of single or multi-word units. Finally, the dual-attention model in d4may present a good starting point for neural multi-source translation (Schroeder et al.|[2009).\nThe authors wish to thank Marc' Aurelio Ranzato and Sumit Chopra for their advice and comments"}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. What's in a translation rule? pp 273-280, Boston, MA, USA, May 2004.\nPhilipp Koehn, Franz Josef Och, and Daniel Marcu. Statistical Phrase-Based Translation. pp. 127. 133, Edmonton, Canada, May 2003.\nPhilipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondej Bojar. Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine. translation. In Proc. 
of ACL, 2007.\nKishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 4Oth Annual Meeting on Association for Computational Linguistics, ACL '02, pp. 311-318, Stroudsburg, PA, USA, 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URLhttp : / /dx. doi. org/ 10.3115/1073083.1073135\nLiang Huang. Forest-based algorithms in natural language processing. PhD thesis, University of Pennsylvania, 2008.\nAlexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for sentence sum marization. In Proc. of EMNLP. 2015\nNicola Ueffing and Hermann Ney. Word-level confidence estimation for machine translation. Com. putational Linguistics, 33:9-40, 2007.\nTable 4: Examples of good refinements performed by our system on our test sets. The model clearl mproves the quality of the initial guess translations.\nJosh Schroeder, Trevor Cohn, and Philipp Koehn. Word lattices for multi-source translation. Ir Proc. of EACL, 2009\nMichel Simard, Cyril Goutte, and Pierre Isabelle. Statistical phrase-based post-editing. In Proc. o NAACL. 2007\nTable 5: Refinements of mixed quality. Our model is not able to insert new words, and so sometimes it replaces a relevant word with another relevant word. In other cases, improvements are insignifi. cant, or good word replacements are mixed with poor ones..\nTable 6: Examples of poor refinements. Our model does not improve the translation or decrease the quality of the translation.\nX er war auch kein klempner X mit 38 aber beging er selbstmord .. Yref nor was he a pipe lagger .. Yref but at 38 , he committed suicide . yg he was also a plumber .. yg with 38 but he committed suicide our he was not a plumber .. our in 38 , he committed suicide. X ich habe schon 2,5 millionen in die kampagne gesteckt .. Yref i have already put 2.5 million into the campaign .. y g i have already 2.5 million in the campaign .. our i have put 2.5 million into campaign .. X dieses jahr werden amerikaner etwa 106 millionen dollar fur kurbisse ausgeben , so das us census bureau .. Yref this year , americans will spend around $ 106 million on pumpkins , according to the u.s.. census bureau . yg this year , the americans are approximately 106 million dollars for pumpkins , so the us. census bureau . our this year , the americans spend about 106 million dollars to pumpkins , so the us census bureau . X das thema unterliegt bestimmungen , denen zufolge fluggesellschaften die sicherheit jed-. erzeit aufrechterhalten und passagiere die vom kabinenpersonal gegebenen sicherheitsan-. weisungen befolgen mussen .. Yref the issue is covered by regulations which require aircraft operators to ensure safety is. maintained at all times and passengers to comply with the safety instructions given by. crew members . yg the issue is subject to rules , according to which airlines and passengers to maintain the security at any time by the cabin crew safety instructions given to follow .. our the issue is subject to rules , according to which airlines and passengers must follow their. security at any time by the cabin crew safety instructions given to follow .."}] |
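To summarize the decoding procedure behind the examples above, here is a minimal sketch of the refinement strategy of Section 5 in the best validated configuration (Fdual with the s_pr heuristic, t = 0.5, N = 5). The `model` interface is invented for illustration, and the exact form of s_pr as a product of the substitution confidence and one minus the current-word confidence is our reading of the heuristic:

```python
def refine(model, src, guess, t=0.5, n_max=5):
    """Iteratively substitute at most one word per round in `guess` (Sec. 5).

    model(src, sent) is assumed to return, for every target position i, a
    dict mapping each vocabulary word to its probability (a hypothetical
    interface standing in for the dual attention model F_dual of Sec. 4).
    """
    sent = list(guess)
    for _ in range(n_max):
        probs = model(src, sent)
        best_i, best_word, best_score = None, None, t
        for i, dist in enumerate(probs):
            word = max(dist, key=dist.get)                # best substitution y_pred
            score = dist[word] * (1.0 - dist[sent[i]])    # s_pr heuristic (assumed form)
            if word != sent[i] and score > best_score:
                best_i, best_word, best_score = i, word, score
        if best_i is None:   # confidence fell below threshold t: stop refining
            break
        sent[best_i] = best_word
    return sent
```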
rJLS7qKel | [{"section_index": "0", "section_name": "LEARNING TO ACT BY PREDICTING THE FUTURE", "section_text": "Alexey Dosovitskiy\nIntel Labs\nWe present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by in teracting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sen sory input from a complex three-dimensional environment. The presented formu lation enables learning without a fixed goal at training time, and pursuing dynam ically changing goals at test time. We conduct extensive experiments in three dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trainec using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Machine learning problems are commonly divided into three classes: supervised, unsupervised, and reinforcement learning. In this view, supervised learning is concerned with learning input-output mappings, unsupervised learning aims to find hidden structure in data, and reinforcement learning deals with goal-directed behavior (Murphy2012). Reinforcement learning is compelling because it considers the natural setting of an organism acting in its environment. It is generally taken to comprise a class of problems (learning to act), the mathematical formalization of these problems (maximizing the expected discounted return), and a family of algorithmic approaches (optimizing an objective derived from the Bellman equation) (Kaelbling et al.]1996f [Sutton & Barto] 2017).\nWhile reinforcement learning (RL) has achieved significant progress (Mnih et al.]2015), key chal. lenges remain. One is sensorimotor control from raw sensory input in complex and dynamic three. dimensional environments, learned directly from experience. Another is the acquisition of genera skills that can be flexibly deployed to accomplish a multitude of dynamically specified goals (Lake. et al.]2016).\nIn this work, we propose an approach to sensorimotor control that aims to assist progress towards overcoming these challenges. Our approach departs from the reward-based formalization commonly used in RL. Instead of a monolithic state and a scalar reward, we consider a stream of sensory input {st} and a stream of measurements {mt}. The sensory stream is typically high-dimensional and may include the raw visual, auditory, and tactile input. The measurement stream has lower dimen sionality and constitutes a set of data that pertain to the agent's current state. In a physical system measurements can include attitude, supply levels, and structural integrity. In a three-dimensional computer game, they can include health, ammunition levels, and the number of adversaries over- come.\nOur guiding observation is that the interlocked temporal structure of the sensory and measurement streams provides a rich supervisory signal. 
Given present sensory input, measurements, and goal, the agent can be trained to predict the effect of different actions on future measurements. Assuming that the goal can be expressed in terms of future measurements, predicting these provides all the information necessary to support action. This reduces sensorimotor control to supervised learning, while supporting learning from raw experience and without extraneous data. Supervision is provided by experience itself: by acting and observing the effects of different actions in the context of changing sensory inputs and goals.

Vladlen Koltun

Intel Labs

This approach has two significant benefits. First, in contrast to an occasional scalar reward assumed in traditional RL, the measurement stream provides rich and temporally dense supervision that can stabilize and accelerate training. While a sparse scalar reward may be the only feedback available in a board game (Tesauro, 1994; Silver et al., 2016), a multidimensional stream of sensations is a more appropriate model for an organism that is learning to function in an immersive environment (Adolph & Berger, 2006).

The second advantage of the presented formulation is that it supports training without a fixed goal and pursuing dynamically specified goals at test time. Assuming that the goal can be expressed in terms of future measurements, the model can be trained to take the goal into account in its prediction of the future. At test time, the agent can predict future measurements given its current sensory input, measurements, and goal, and then simply select the action that best suits its present goal.

We evaluate the presented approach in immersive three-dimensional simulations that require visually navigating a complex three-dimensional environment, recognizing objects, and interacting with dynamic adversaries. We use the classical first-person game Doom, which introduced immersive three-dimensional games to popular culture (Kushner, 2003). The presented approach is given only raw visual input and the statistics shown to the player in the game, such as health and ammunition levels. No human gameplay is used; the model trains on raw experience.

Experimental results demonstrate that the presented approach outperforms state-of-the-art deep RL models, particularly on complex tasks. Experiments further demonstrate that models learned by the presented approach generalize across environments and goals, and that the use of vectorial measurements instead of a scalar reward is beneficial. A model trained with the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which took place in previously unseen environments. The presented approach outperformed the second best submission, which employed a substantially more complex model and additional supervision during training, by more than 50%.

The supervised learning (SL) perspective on learning to act by interacting with the environment dates back decades. Jordan & Rumelhart (1992) analyze this approach, review early work, and argue that the choice of SL versus RL should be guided by the characteristics of the environment. Their analysis suggests that RL may be more efficient when the environment provides only a sparse scalar reward signal, whereas SL can be advantageous when temporally dense multidimensional feedback is available.

Our approach has similarities to Monte Carlo methods.
The convergence of such methods was analyzed early on and they were seen as theoretically advantageous, particularly when function ap- proximators are used (Bertsekas 1995}Sutton 1995Singh & Sutton1 1996).The choice of TD learning over Monte Carlo methods was argued on practical grounds, based on empirical perfor- mance on canonical examples (Sutton] 1995). While the understanding of the convergence of both types of methods has since improved (Szepesvari & Littman 1999Tsitsiklis2002Even-Dar & Mansour!2003), the argument for TD versus Monte Carlo is to this day empirical (Sutton & Barto 2017). Sharp negative examples exist (Bertsekas2010). Our work deals with the more general setting of vectorial feedback and parameterized goals, and shows that a simple Monte-Carlo-type method performs extremely well in a compelling instantiation of this setting.\nSutton(1988) analyzed temporal-difference (TD) learning and argued that it is preferable to SL for prediction problems in which the correctness of the prediction is revealed many steps after the pre- diction is made. Sutton's influential analysis assumes a sparse scalar reward. TD and policy gradient methods have since come to dominate the study of sensorimotor learning (Kober et al.|2013f Mnih et al. 2015, Sutton & Barto2017). While the use of SL is natural in imitation learning (LeCun. et al. 2005 Ross et al.]2013) or in conjunction with model-based RL (Levine & Koltun2013), the formulation of sensorimotor learning from raw experience as supervised learning is rare (Levine et al.2016). Our work suggests that when the learner is exposed to dense multidimensional sen- sory feedback, direct future prediction can support effective sensorimotor coordination in complex dynamic environments.\nLearning to act in simulated environments has been the focus of significant attention following the successful application of deep RL to Atari games by[Mnih et al.(2015). A number of recent efforts applied related ideas to three-dimensional environments. Lillicrap et al.(2016) considered continu- ous and high-dimensional action spaces and learned control policies in the TORCS simulator. Mnih. et al.(2016) described asynchronous variants of deep RL methods and demonstrated navigation in. a three-dimensional labyrinth.Oh et al.(2016) augmented deep Q-networks with external mem- ory and evaluated their performance on a set of tasks in Minecraft. In a recent technical report.. Kulkarni et al.(2016b) proposed end-to-end training of successor representations and demonstrated navigation in a Doom-based environment. In another recent report, Blundell et al.(2016) considered. a nonparametric approach to control and conducted experiments in a three-dimensional labyrinth. Experiments reported in Section|4demonstrate that our approach significantly outperforms state-of- the-art deep RL methods.\nPrediction of future states in dynamical systems was considered by Littman et al.(2001) and Singh. et al.(2003). Predictive representations in the form of generalized value functions were advocated. bySutton et al.[(2011). More recently, Oh et al.(2015) learned to predict future frames in Atari. games. Prediction of full sensory input in realistic three-dimensional environments remains an open. challenge, although significant progress is being made (Mathieu et al.l|2016]|Finn et al.[|2016||Kalch- brenner et al.|2016). Our work considers prediction of future values of meaningful measurements. 
from rich sensory input and shows that such prediction supports effective sensorimotor control.

Vector-valued feedback has been considered in the context of multi-objective decision-making (Gabor et al., 1998; Roijers et al., 2013). Transfer across related tasks has been analyzed by Konidaris et al. (2012). Parameterized goals have been studied in the context of continuous motor skills such as throwing darts at a target (da Silva et al., 2012; Kober et al., 2012; Deisenroth et al., 2014). A general framework for sharing value function approximators across both states and goals has been described by Schaul et al. (2015). Our work is most closely related to the framework of Schaul et al. (2015), but presents a specific formulation in which goals are defined in terms of intrinsic measurements and control is based on direct future prediction. We provide an architecture that handles realistic sensory and measurement streams and achieves state-of-the-art performance in complex and dynamic three-dimensional environments."}, {"section_index": "2", "section_name": "3 MODEL", "section_text": "Consider an agent that interacts with the environment over discrete time steps. At each time step t, the agent receives an observation o_t and executes an action a_t based on this observation. We assume that the observations have the following structure: o_t = (s_t, m_t), where s_t is raw sensory input and m_t is a set of measurements. In our experiments, s_t is an image: the agent's view of its three-dimensional environment. More generally, s_t can include input from multiple sensory modalities. The measurements m_t can indicate the attitude, supply levels, and structural integrity in a physical system, or health, ammunition, and score in a computer game.

The distinction between sensory input s_t and measurements m_t is somewhat artificial: both s_t and m_t constitute sensory input in different forms. In our model, the measurement vector m_t is distinguished from other sensations in two ways. First, the measurement vector is the part of the observation that the agent will aim to predict. At present, predicting full sensory streams is beyond our capabilities (although see the work of Kalchbrenner et al. (2016) and van den Oord et al. (2016) for impressive recent progress). We therefore designate a subset of sensations as measurements that will be predicted. Second, we assume that the agent's goals can be defined in terms of future measurements. Specifically, let τ_1, ..., τ_n be a set of temporal offsets and let f = ⟨m_{t+τ_1} − m_t, ..., m_{t+τ_n} − m_t⟩ be the corresponding differences of future and present measurements. We assume that any goal that the agent will pursue can be defined as maximization of a function u(f; g). Any parametric function can be used. Our experiments use goals that are expressed as linear combinations of future measurements:

u(f; g) = g^T f,    (1)

where the vector g parameterizes the goal and has the same dimensionality as f. This model generalizes the standard reinforcement learning formulation: the scalar reward signal can be viewed as a measurement, and exponential decay is one possible configuration of the goal vector.

The predictions themselves are made by a parameterized function approximator, denoted F:

p_t^a = F(o_t, a, g; θ).    (2)

Here a ∈ A is an action, θ are the learned parameters of F, and p_t^a is the resulting prediction. The dimensionality of p_t^a matches the dimensionality of f and g. Note that the prediction is a function of the current observation, the considered action, and the goal.
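To make the notation concrete, here is a minimal sketch (our own illustration, not the paper's code; the measurement layout and offsets are assumptions based on the experiments reported later) of how f and u(f; g) can be computed from a recorded measurement stream:

```python
import numpy as np

# Assumed setup: a stream of measurement vectors m_t and the temporal
# offsets used in the paper's experiments.
offsets = [1, 2, 4, 8, 16, 32]

def future_differences(measurements, t):
    """f = <m_{t+tau_1} - m_t, ..., m_{t+tau_n} - m_t>, flattened.

    `measurements` has shape (T, num_measurements); requires
    t + max(offsets) < T.
    """
    m_t = measurements[t]
    return np.concatenate([measurements[t + tau] - m_t for tau in offsets])

def utility(f, g):
    """u(f; g) = g^T f: the goal as a linear combination of future changes."""
    return float(np.dot(g, f))
```

In this sketch a scalar reward with exponential decay corresponds to a goal vector whose entries decay with the offset, e.g. entries proportional to gamma**tau for a single reward-like measurement.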
At test time, given learned parameters θ, the agent can choose the action that yields the best predicted outcome:

a_t = argmax_{a ∈ A} g^T F(o_t, a, g; θ).    (3)

The goal vector used at test time need not be identical to any goal seen during training."}, {"section_index": "3", "section_name": "3.1 TRAINING", "section_text": "The predictor F is trained on experiences collected by the agent. Starting with a random policy, the agent begins to interact with its environment. This interaction takes place over episodes that last for a fixed number of time steps or until a terminal event occurs.

Consider a set of experiences collected by the agent, yielding a set D of training examples: D = {⟨o_i, a_i, g_i, f_i⟩}_{i=1}^N. Here ⟨o_i, a_i, g_i⟩ is the input and f_i is the output of example i. The predictor is trained using a regression loss:

L(θ) = Σ_{i=1}^N || F(o_i, a_i, g_i; θ) − f_i ||^2.    (4)

A classification loss can be used for predicting categorical measurements, but this was not necessary in our experiments.

As the agent collects new experiences, the training set D and the predictor used by the agent change. We maintain an experience memory of the M most recent experiences, out of which a mini-batch of N examples is randomly sampled for every iteration of the solver. The parameters of the predictor used by the agent are updated after every k new experiences. This setup departs from pure on-policy training, and we have not observed any adverse effect of using a small experience memory. Additional details are provided in Appendix A.

We have evaluated two training regimes:

1. Single goal: the goal vector is fixed throughout the training process.
2. Randomized goals: the goal vector for each episode is generated at random.

In both regimes, the agent follows an ε-greedy policy: it acts greedily according to the current goal with probability 1 − ε, and selects a random action with probability ε. The value of ε is initially set to 1 and is decreased during training according to a fixed schedule."}, {"section_index": "4", "section_name": "3.2 ARCHITECTURE", "section_text": "The predictor F is a deep network parameterized by θ. The network architecture we use is shown in Figure 1. The network has three input modules: a perception module S(s), a measurement module M(m), and a goal module G(g). In our experiments, s is an image and the perception module S is implemented as a convolutional network. The measurement and goal modules are fully-connected networks. The outputs of the three input modules are concatenated, forming the joint input representation used for subsequent processing:

j = J(s, m, g) = ⟨S(s), M(m), G(g)⟩.    (5)

Figure 1: Network structure. The image s, measurements m, and goal g are first processed separately by three input modules. The outputs of these modules are concatenated into a joint representation j. This joint representation is processed by two parallel streams that predict the expected measurements E(j) and the normalized action-conditional differences {Ā^i(j)}, which are then combined to produce the final prediction for each action.

Future measurements are predicted based on this input representation. The network emits predictions of future measurements for all actions at once. This could be done by a fully-connected module that absorbs the input representation and outputs predictions. However, we found that introducing additional structure into the prediction module enhances its ability to learn the fine differences between the outcomes of different actions. To this end, we build on the ideas of Wang et al. (2016) and split the prediction module into two streams: an expectation stream E(j) and an action stream A(j). The expectation stream predicts the average of the future measurements over all potential actions. The action stream concentrates on the fine differences between actions: A(j) = ⟨A^1(j), ..., A^w(j)⟩, where w = |A| is the number of actions. We add a normalization layer at the end of the action stream that ensures that the average of the predictions of the action stream is zero for each future measurement:

Ā^i(j) = A^i(j) − (1/w) Σ_{k=1}^w A^k(j)  for all i.    (6)

The normalization layer subtracts the average over all actions from each prediction, forcing the expectation stream E to compensate by predicting these average values. The output of the expectation stream has dimensionality dim(f), where f is the vector of future measurements. The output of the action stream has dimensionality w · dim(f).

The output of the network is a prediction of future measurements for each action, composed by summing the output of the expectation stream and the normalized action-conditional output of the action stream:

p = ⟨Ā^1(j) + E(j), ..., Ā^w(j) + E(j)⟩.    (7)

The output of the network has the same dimensionality as the output of the action stream.
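The following sketch (ours; the sub-network internals are placeholders, not the paper's exact layers) ties together the joint representation of equation (5), the two-stream composition of equations (6)-(7), and the ε-greedy action selection of Section 3.1:

```python
import numpy as np

def predict(s, m, g, S, M, G, E, A, num_actions):
    """Two-stream predictor sketch. S, M, G, E, A stand in for the learned
    sub-networks; shapes are illustrative."""
    j = np.concatenate([S(s), M(m), G(g)])   # joint representation j
    e = E(j)                                 # expectation stream, (dim_f,)
    a = A(j).reshape(num_actions, -1)        # action stream, (w, dim_f)
    a = a - a.mean(axis=0, keepdims=True)    # normalization layer
    return e[None, :] + a                    # p: one row per action

def select_action(p, g, eps, rng):
    """Epsilon-greedy selection; the greedy action maximizes g^T p_a (eq. 3)."""
    if rng.random() < eps:
        return int(rng.integers(len(p)))
    return int(np.argmax(p @ g))
```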
"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate the presented approach in immersive three-dimensional simulations based on the classical game Doom. In these simulations, the agent has a first-person view of the environment and must act based on the same visual information that is shown to human players in the game. To interface with the game engine, we use the ViZDoom platform developed by Kempka et al. (2016). One of the advantages of this platform is that it allows running the simulation at thousands of frames per second on a single CPU core, which enables training models on tens of millions of simulation steps in a single day.

We compare the presented approach to state-of-the-art deep RL methods in four scenarios of increasing difficulty, study generalization across environments and goals, and evaluate the importance of different aspects of the model.

Scenarios. We use four scenarios of increasing difficulty.

Figure 2: Example frames from the four scenarios (D1: Basic, D2: Navigation, D3: Battle, D4: Battle 2).

The first two scenarios are provided with the ViZDoom platform. In D1, the agent is in a square room and its health is declining at a constant rate. To survive, it must move around and collect health kits, which are distributed abundantly in the room. This task is easy: as long as the agent learns to avoid walls and keeps traversing the room, performance is good. In D2, the agent is in a maze and its health is again declining at a constant rate. Here it must again collect health kits that increase its health, but it must also avoid blue poison vials that decrease health. This task is harder: the agent must learn to traverse irregularly shaped passageways, and to distinguish health kits from poison vials. In both tasks, the agent has access to three binary sub-actions: move forward, turn left, and turn right. Any combination of these three can be used at any given time, resulting in 8 possible actions. The only measurement provided to the agent in these scenarios is health.
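Since actions are arbitrary combinations of binary sub-actions, the action set can be enumerated directly; a small sketch (sub-action names are taken from the scenario descriptions):

```python
from itertools import product

# D1/D2 have three binary sub-actions; any subset may be active at once,
# giving 2**3 = 8 composite actions. (The battle scenarios below use eight
# sub-actions, giving 2**8 = 256 composite actions.)
sub_actions = ["move_forward", "turn_left", "turn_right"]
actions = list(product([0, 1], repeat=len(sub_actions)))
assert len(actions) == 8
```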
The last two scenarios, D3 and D4, are more challenging and were designed by us using elements of the ViZDoom platform. Here the agent is armed and is under attack by alien monsters. The monsters spawn abundantly, move around in the environment, and shoot fireballs at the agent. Health kits and ammunition are sporadically distributed throughout the environment and can be collected by the agent. The environment is a simple maze in D3 and a more complex one in D4. In both scenarios, the agent has access to eight sub-actions: move forward, move backward, turn left, turn right, strafe left, strafe right, run, and shoot. Any combination of these sub-actions can be used, resulting in 256 possible actions. The agent observes three measurements in these scenarios: ammo, health, and frags.

Model. The future predictor network used in our experiments was configured to be as close as possible to the DQN model of Mnih et al. (2015), to ensure a fair comparison. Additional details on the architecture are provided in Appendix A.

Training and testing. The agent is trained and tested over episodes. Each episode terminates after 525 steps (equivalent to 1 minute of real time) or when the agent's health drops to zero. Statistics reported in figures and tables summarize the final values of respective measurements at the end of episodes.

We set the temporal offsets τ_1, ..., τ_n of predicted future measurements to 1, 2, 4, 8, 16, and 32 steps in all experiments. Only the latest three time steps contribute to the objective function, with coefficients (0.5, 0.5, 1). More details are provided in Appendix A.
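A sketch (ours; the layout of the prediction vector is an assumption) of how the per-offset weighting described above can be applied to the regression loss of equation (4):

```python
import numpy as np

offsets = [1, 2, 4, 8, 16, 32]
# Only the last three offsets contribute, with coefficients (0.5, 0.5, 1).
w = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 1.0])

def weighted_loss(pred, target, num_meas=3):
    """Squared error, assuming predictions are laid out offset-major as
    (n_offsets, num_meas) per example."""
    err = (pred - target).reshape(-1, len(offsets), num_meas)
    return float(((err ** 2).sum(-1) * w).sum(-1).mean())
```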
Comparison to prior work. We have compared the presented approach to three deep RL methods: DQN (Mnih et al., 2015), A3C (Mnih et al., 2016), and DSR (Kulkarni et al., 2016b). DQN is a standard baseline for visuomotor control due to its impressive performance on Atari games. A3C is more recent and is commonly regarded as the state of the art in this area. DSR is described in a recent technical report, and we included it because the authors also used the ViZDoom platform in experiments, albeit with a simple task. Further details on the setup of the prior approaches are provided in Appendix B.

The performance of the different approaches during training is shown in Figure 3. In reporting the results of these experiments, we refer to our approach as DFP (direct future prediction). For the first two scenarios, all approaches were trained to maximize health. For these scenarios, Figure 3 reports average health at the end of an episode over the course of training. For the last two scenarios, all approaches were trained to maximize a linear combination of the three normalized measurements (ammo, health, and frags) with coefficients (0.5, 0.5, 1). For these scenarios, Figure 3 reports average frags at the end of an episode. Each presented curve averages information from three independent training runs, and each data point is computed from 3 × 50,000 steps of testing.

Figure 3: Performance of different approaches during training (average health over millions of training steps in D1: Basic and D2: Navigation; average frags in D3: Battle and D4: Battle 2). DQN, A3C, and DFP achieve similar performance in the Basic scenario. DFP outperforms the prior approaches in the other three scenarios, with a multiplicative gap in performance in the most complex ones (D3 and D4).

DQN, A3C, and DFP were trained for 50 million steps. The training procedure for DSR is much slower and can only process roughly 1 million simulation steps per day. For this reason, we were only able to evaluate DSR on the Basic scenario and were not able to perform extensive hyperparameter tuning. We report results for this technique after 10 days of training. (This time was sufficient to significantly exceed the number of training steps reported in the experiments of Kulkarni et al. (2016b), but not sufficient to approach the number of steps afforded by the other approaches.)

Table 1 reports the performance of the models after training. Each fully trained model was tested over 1 million simulation steps. The table reports average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for D3 and D4. We also report the average training speed for each approach, in millions of simulation steps per day of training. The performance of the different models is additionally illustrated in the supplementary video.

        D1 (health)   D2 (health)   D3 (frags)   D4 (frags)   steps/day
DQN     89.1 ± 6.4    25.4 ± 7.8     1.2 ± 0.8    0.4 ± 0.2      7M
A3C     97.5 ± 0.1    59.3 ± 2.0     5.6 ± 0.2    6.7 ± 2.9     80M
DSR      4.6 ± 0.1        -              -            -          1M
DFP     97.7 ± 0.4    84.1 ± 0.6    33.5 ± 0.4   16.5 ± 1.1     70M

Table 1: Comparison to prior work. We report average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for scenarios D3 and D4.

In the Basic scenario, DQN, A3C, and DFP all perform well. As reported in Table 1, the performance of A3C and DFP is virtually identical at 97.5%, while DQN reaches 89%. In the more complex Navigation scenario, a significant gap opens up between DQN and A3C; this is consistent with the experiments of Mnih et al. (2016). DFP achieves the best performance in this scenario, with a 25 percentage point advantage during testing. Note that in these first two scenarios, DFP was only given a single measurement per time step (health).

In the more complex Battle and Battle 2 scenarios (D3 and D4), DFP dominates the other approaches. It outperforms A3C at test time by a factor of 6 in D3 and by a factor of 2.5 in D4. Note that the advantage of DFP is particularly significant in the scenarios that provide richer measurements: three measurements per time step in D3 and D4. The effect of multiple measurements is further evaluated in controlled experiments reported below.

Generalization across environments. We now evaluate how the behaviors learned by the presented approach generalize across different environments. To this end, we have created 100 randomly textured versions of the mazes from scenarios D3 and D4. We used 90 of these for training and 10 for testing, with disjoint sets of textures in the training and testing environments. We call these scenarios D3-tx and D4-tx.

Table 2 shows the performance of the approach for different combinations of training and testing regimes. For example, the entry in the D4-tx row of the D3 column shows the performance (in average number of frags at the end of an episode) of a model trained in D3 and tested in D4-tx. Not surprisingly, a model trained in the simple D3 environment does not learn sufficient invariance to surface appearance to generalize well to other environments. Training in the more complex multi-texture environment in D4 yields better generalization: the trained model performs well in D3 and exhibits non-trivial performance in D3-tx and D4-tx. Finally, exposing the model to significant variation in surface appearance in D3-tx or D4-tx during training yields very good generalization.
Train:       D3     D4     D3-tx   D4-tx   D4-tx-L
Test D3     33.6   17.8    29.8    20.9    22.0
Test D4      1.6   17.1     5.4    10.8    12.4
Test D3-tx   3.9    8.1    22.6    15.6    19.4
Test D4-tx   1.7    5.1     6.2    10.2    12.7

Table 2: Generalization across environments. Columns correspond to training environments, rows to testing environments; entries report average frags at the end of an episode.

The last column of Table 2 additionally reports the performance of a higher-capacity model trained in D4-tx. This combination is referred to as D4-tx-L. As shown in the table, this model performs even better. The architecture is detailed in Appendix A.

Visual Doom AI Competition. To further evaluate the presented approach, we participated in the Visual Doom AI Competition, held during September 2016. The competition evaluated sensorimotor control models that act based on raw visual input. The competition had the form of a tournament: the submitted agents play multiple games against each other, their performance measured by aggregate frags. The competition included two tracks. The Limited Deathmatch track was held in a known environment that was given to the participants in advance at training time. The Full Deathmatch track evaluated generalization to previously unseen environments and took place in multiple new environments that were not available to the participating teams at training time. We only enrolled in the Full Deathmatch track. Our model was trained using a variant of the D4-tx-L regime.

Our model won, outperforming the second best submission by more than 50%. That submission, described by Lample & Chaplot (2016), constitutes a strong baseline. It is a deep recurrent Q-network that incorporates an LSTM and was trained using reward shaping and extra supervision from the game engine. Specifically, the authors took advantage of the ability provided by the ViZDoom platform to use the internal configuration of the game, including ground-truth knowledge of the presence of enemies in the field of view, during training. The authors' report shows that this additional supervision improved performance significantly. Our model, which is simpler, achieved even higher performance without such additional supervision.

Goal-agnostic training. We now evaluate the ability of the presented approach to learn without a fixed goal at training time, and adapt to varying goals at test time. These experiments are performed in the Battle scenario. We use three training regimes: (a) fixed goal vector during training, (b) random goal vector with each value sampled uniformly from [0, 1] for every episode, and (c) random goal vector with each value sampled uniformly from [−1, 1] for every episode. More details are provided in Appendix A. Intuitively, in the second regime the agent is instructed to maximize the different measurements, but has no knowledge of their relative importance. The third regime makes no assumptions as to whether the measured quantities are desirable or not.

The results are shown in Table 3. Each group of columns corresponds to a training regime and each row to a different test-time goal. Goals are given by the weights of the three measurements (ammo, health, and frags) in the objective function.
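The three training regimes differ only in how the goal vector is drawn at the start of each episode; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng()

def episode_goal(regime):
    """Goal over (ammo, health, frags) for one episode."""
    if regime == "fixed":          # regime (a)
        return np.array([0.5, 0.5, 1.0])
    if regime == "uniform_01":     # regime (b): each value uniform in [0, 1]
        return rng.uniform(0.0, 1.0, size=3)
    if regime == "uniform_pm1":    # regime (c): each value uniform in [-1, 1]
        return rng.uniform(-1.0, 1.0, size=3)
    raise ValueError(regime)
```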
The first test-time goal in Table 3 is the goal vector used in the battle scenarios in the prior experiments, the second seeks to maximize the frag count, the third is a pacifist (maximize ammo and health, minimize frags), the fourth seeks to aimlessly drain ammunition, and the fifth aims to maximize health. For each row, each group of columns reports the average value of each of the three measurements at the end of an episode. Note that the health level at the end of an episode can be negative if the agent suffered major damage in the pre-terminal step.

We draw two main conclusions. First, on the main task (first row), models trained without knowing the goal in advance (b, c) perform nearly as well as a dedicated model trained specifically for the eventual goal (a). Without knowing the eventual goal during training, the agent performs the task almost as well as when it was specifically trained for it. Second, all models generalize to new goals, but not equally well. Models trained with a variety of goals (b, c) generalize much better than a model trained with a fixed goal.

              (a) fixed goal (0.5, 0.5, 1)   (b) random goals [0, 1]   (c) random goals [−1, 1]
test goal      ammo   health   frags          ammo   health   frags     ammo   health   frags
(0.5, 0.5, 1)  83.4    97.0    33.6           92.3    96.9    31.5      49.3    94.3    28.9
(0, 0, 1)       0.3     3.7    11.5            4.3    30.0    20.6      21.8    70.9    24.6
(1, 1, −1)     28.6     2.0     0.0           22.1     4.4     0.2      89.4    83.6     0.0
(−1, 0, 0)      1.0     8.3     1.7            1.9     7.5     1.2       0.9     8.6     1.7
(0, 1, 0)       0.7     2.7     2.6            9.0    77.8     6.6       3.0    69.6     7.9

Table 3: Generalization across goals. Each group of three columns corresponds to a training regime, each row corresponds to a test-time goal. The results in the first row indicate that the approach performs well on the main task even without knowing the goal at training time. The results in the other rows indicate that goal-agnostic training supports generalization across goals at test time.

Ablation study. We now perform an ablation study using the D3-tx scenario. Specifically, we evaluate the importance of vectorial feedback versus a scalar reward, and the effect of predicting measurements at multiple temporal offsets. The results are summarized in Table 4. The table reports the performance (in average frags at the end of an episode) of our full model (predicting three measurements at six temporal offsets) and of ablated variants that only predict frags (a scalar reward) and/or only predict at the farthest temporal offset. As the results demonstrate, predicting multiple measurements significantly improves the performance of the learned model, even when it is evaluated by only one of those measurements. Predicting measurements at multiple future time steps is also beneficial. These results support the intuition that a dense flow of multivariate measurements is a better training signal than a scalar reward."}, {"section_index": "6", "section_name": "5 DISCUSSION", "section_text": "We presented an approach to sensorimotor control in immersive environments. Our approach is simple and demonstrates that supervised learning techniques can be adapted to learning to act in complex and dynamic three-dimensional environments given raw sensory input and intrinsic measurements. The model trains on raw experience, by interacting with the environment without extraneous supervision. Natural supervision is provided by the cotemporal structure of the sensory and measurement streams. Our experiments have demonstrated that this simple approach outperforms sophisticated deep reinforcement learning formulations on challenging tasks in immersive environments.
Experiments have further demonstrated that the use of multivariate measurements provides a significant advantage over conventional scalar rewards, and that the trained model can effectively pursue new goals not specified during training.

The presented work can be extended in multiple ways that are important for broadening the range of behaviors that can be learned. First, the presented model is purely reactive: it acts based on the current frame only, with no explicit facilities for memory and no test-time retention of internal representations. Recent work has explored memory-based models (Oh et al., 2016) and integrating such ideas with the presented approach may yield substantial advances. Second, significant progress in behavioral sophistication will likely require temporal abstraction and hierarchical organization of learned skills (Barto & Mahadevan, 2003; Kulkarni et al., 2016a). Third, the presented model was developed for discrete action spaces; applying the presented ideas to continuous actions would be interesting (Lillicrap et al., 2016). Finally, predicting features learned directly from rich sensory input can blur the distinction between sensory and measurement streams (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016).

                                  frags
all measurements, all offsets     22.6
all measurements, one offset      17.2
frags only, all offsets           10.3
frags only, one offset             5.0

Table 4: Ablation study. Predicting all measurements at all temporal offsets yields the best results."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Karen E. Adolph and Sarah E. Berger. Motor development. In Handbook of Child Psychology, volume 2, pp. 161-213. Wiley, 6th edition, 2006.

Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(1-2), 2003.

Dimitri P. Bertsekas. A counterexample to temporal differences learning. Neural Computation, 7(2), 1995.

Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z. Leibo, Jack Rae, Daan Wierstra, and Demis Hassabis. Model-free episodic control. arXiv:1606.04460, 2016.

Eyal Even-Dar and Yishay Mansour. Learning rates for Q-learning. JMLR, 5, 2003.

Zoltan Gabor, Zsolt Kalmar, and Csaba Szepesvari. Multi-criteria reinforcement learning. In ICML, 1998.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Jens Kober, J. Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. IJRR, 32(11), 2013.

Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successor reinforcement learning. arXiv:1606.02396, 2016b.

Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. arXiv:1609.05521, 2016.

Sergey Levine and Vladlen Koltun. Guided policy search. In ICML, 2013.

Chelsea Finn, Ian J. Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.

Michael I. Jordan and David E. Rumelhart. Forward models: Supervised learning with a distal teacher. Cognitive Science, 16(3), 1992.

Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv:1610.00527, 2016.

Michal Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaskowski.
ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In IEEE Conference on Computational Intelligence and Games, 2016.\nJens Kober, Andreas Wilhelm, Erhan Oztop, and Jan Peters. Reinforcement learning to adjust parametrized motor primitives to new situations. Autonomous Robots, 33(4), 2012.\nSergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning hand-eye coordination for robot grasping with deep learning and large-scale data collection. In ISER, 2016.\nTimothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver and Daan Wierstra. Continuous control with deep reinforcement learning. In ICLR, 2016\nMichael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean squar error. In ICLR, 2016.\nKevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012\nJunhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, and Satinder P. Singh. Action-conditional videc prediction using deep networks in Atari games. In NIPs, 2015..\nJunhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. Control of memory, active percep tion, and action in Minecraft. In ICML, 2016.\nDiederik M. Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. A survey of multi-objective sequential decision-making. JAIR, 48, 2013.\nStephane Ross, Narek Melik-Barkhudarov, Kumar Shaurya Shankar, Andreas Wendel, Debadeepta Dey, J. An drew Bagnell, and Martial Hebert. Learning monocular reactive UAV control in cluttered natural environ ments. In ICRA, 2013.\nSatinder P. Singh and Richard S. Sutton. Reinforcement learning with replacing eligibility traces. Machin Learning, 22(1-3), 1996.\nSatinder P. Singh, Michael L. Littman, Nicholas K. Jong, David Pardoe, and Peter Stone. Learning predictiv state representations. In ICML, 2003.\nRichard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3, 1988\nJohn N. Tsitsiklis. On the convergence of optimistic policy iteration. JMLR, 2002\nZiyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016."}, {"section_index": "8", "section_name": "A.1 NETWORK ARCHITECTURES", "section_text": "The detailed architectures of two network variants - basic and 1arge - are shown in Tables|A1 and[A2] The basic network follows the architecture of|Mnih et al.[(2015) as closely as possible. The large network is similar, but all layers starting from the third are wider by a factor of two. In all networks we use the leaky ReLU nonlinearity LReLU(x) = max(x, 0.2x) after each non terminal layer. We initialize the weights as proposed byHe et al.(2015).\nmodule input dimension channels kernel stride 84 x 84 x 1 32 8 4 21 x 21 x 32 64 4 2 Perception 10 x 10 64 64 3 1 10 : 10 : 64 512 3 128 Measurement 128 128 128 128 3.6 128 Goal 128 128 128 128 512 + 128 + 128 512 Expectation 512 3.6 512 + 128 + 128 512 Action 512 3 . 6 . 256\nTable A1: The basic architecture\nmodule input dimension channels kernel stride 128 x 128 x 1 32 8 4 32 x 32 x 32 64 4 2 Perception 16 x 16 x 64 128 3 1 16 : 16 : 128 1024 3 128 Measurement 128 128 128 128 3.6 128 Goal 128 128 128 128 1024 + 128 + 128 1024 Expectation 1024 3.6 1024 + 128 + 128 1024 Action 1024 3 . 6 . 256\nWe empirically validate the architectural choices in the D3-tx regime. 
Table A2: The large architecture.

We compare the full basic architecture to three variants:

- No normalization: normalization at the end of the action stream is not performed.
- No split: no expectation/action split; simply predict future measurements with a fully-connected network.
- No input measurements: the input measurement stream is removed, and current measurements are not provided to the network.

The results are reported in Table A3. All modifications of the basic architecture hurt performance, showing that the two-stream formulation is beneficial and that providing the current measurements to the network increases performance but is not crucial.

         full    no normalization    no split    no input measurements
Score    22.6         21.6             16.5              19.4

Table A3: Evaluation of different network architectures."}, {"section_index": "9", "section_name": "A.2 OTHER DETAILS", "section_text": "We performed frame skipping during both training and testing. The agent observes the environment and selects an action every 4th frame. The selected action is repeated during the skipped frames. This accelerates training without sacrificing accuracy. In the paper, "step" always refers to steps after frame skipping (equivalent to every 4th step before frame skipping). When played by a human, Doom runs at 35 frames per second, so one step of the agent is equivalent to 114 milliseconds of real time. Therefore, frame skipping has the added benefit of bringing the reaction time of the agent closer to that of a human.

We set the temporal offsets τ_1, ..., τ_n of predicted future measurements to 1, 2, 4, 8, 16, and 32 steps in all experiments. In all experiments, only the latest three predictions (after 8, 16, and 32 steps) contributed to the objective function, with fixed coefficients (0.5, 0.5, 1.0). Therefore, in scenarios with multiple measurements available to the agent (D3 and D4), the goal vector was specified by three numbers: the relative weights of the three measurements (ammo, health, frags) in the objective function. In goal-directed training, these were fixed to (0.5, 0.5, 1.0), and in goal-agnostic training they were sampled uniformly at random from [0, 1] or [−1, 1].
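A sketch of the frame-skipping loop described above (ours; the environment interface is a stand-in, not the exact ViZDoom API):

```python
FRAME_SKIP = 4  # the agent acts every 4th frame, i.e. every 114 ms

def agent_step(env, action):
    """Repeat `action` for FRAME_SKIP game frames, then observe."""
    for _ in range(FRAME_SKIP):
        env.advance(action)   # assumed environment call
    return env.observe()      # assumed: returns (image, measurements)
```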
The raw sensory input to the agent is the observed image, in grayscale, without any additional preprocessing. The resolution is 84 × 84 pixels for the basic model and 128 × 128 pixels for the large one. We normalized the measurements by their standard deviations under random exploration. More precisely, we divided ammo count, health level, and frag count by 7.5, 30.0, and 1.0, respectively.

We used an experience memory of M = 20,000 steps, and sampled a mini-batch of N = 64 samples after every k = 64 new experiences added. We added the experiences to the memory using 8 copies of the agent running in parallel. The networks in all experiments were trained using the Adam algorithm (Kingma & Ba, 2015) with β_1 = 0.95, β_2 = 0.999, and ε = 10^-4. The initial learning rate is set to 10^-4 and is gradually decreased during training. The basic networks were trained for 800,000 mini-batch iterations (or 51.2 million steps), the large one for 2,000,000 iterations.

We compared our approach to three prior methods: DQN (Mnih et al., 2015), DSR (Kulkarni et al., 2016b), and A3C (Mnih et al., 2016). We used the authors' implementations of DQN (https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner) and DSR (https://github.com/Ardavans/DsR), and an independent implementation of A3C (https://github.com/muupan/async-rl). For scenarios D1 and D2 we used the change in health as reward. For D3 and D4 we used a linear combination of changes of the three normalized measurements with the same coefficients as for the presented approach: (0.5, 0.5, 1). For DQN and DSR we tested three learning rates: the default one (0.00025) and two alternatives (0.00005 and 0.00002). Other hyperparameters were left at their default values. For A3C, which trains faster, we performed a search over a set of learning rates ({2, 4, 8, 16, 32} · 10^-4) for the first two tasks; for the last two tasks we trained 20 models with random learning rates sampled log-uniformly between 10^-4 and 10^-2 and random β (entropy regularization) sampled log-uniformly between 10^-4 and 10^-1. For all baselines we report the best results we were able to obtain."}]
rJxDkvqee | [{"section_index": "0", "section_name": "MULTI-VIEW RECURRENT NEURAL ACOUSTIC WORD EMBEDDINGS", "section_text": "Wanjia He
Department of Computer Science
University of Chicago
Chicago, IL 60637, USA"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Word embeddings--continuous-valued vector representations of words--are an almost ubiquitous component of recent natural language processing (NLP) research. Word embeddings can be learned using spectral methods (Deerwester et al., 1990) or, more commonly in recent work, via neural networks (Bengio et al., 2003; Mnih & Hinton, 2007; Mikolov et al., 2013; Pennington et al., 2014). Word embeddings can also be composed to form embeddings of phrases, sentences, or documents (Socher et al., 2014; Kiros et al., 2015; Wieting et al., 2016; Iyyer et al., 2015).

In typical NLP applications, such embeddings are intended to represent the semantics of the corresponding words/sequences. In contrast, embeddings that represent the way a word or sequence sounds are rarely considered. In this work we address this problem, starting with embeddings of individual words. Such embeddings could be useful for tasks like spoken term detection (Fiscus et al., 2007), spoken query-by-example search (Anguera et al., 2014), or even speech recognition using a whole-word approach (Gemmeke et al., 2011; Bengio & Heigold, 2014). In tasks that involve comparing speech segments to each other, vector embeddings can allow more efficient and more accurate distance computation than sequence-based approaches such as dynamic time warping (Levin et al., 2013; 2015; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016).

We consider the problem of learning vector representations of acoustic sequences and orthographic (character) sequences corresponding to single words, such that the learned embeddings represent the way the word sounds. We take a multi-view approach, where we jointly learn the embeddings for character and acoustic sequences. We consider several contrastive losses, based on learning from pairs of matched acoustic-orthographic examples and randomly drawn mismatched pairs. The losses correspond to different goals for learning such embeddings; for example, we might want the embeddings of two waveforms to be close when they correspond to the same word and far when they correspond to different ones, or we might want the distances between embeddings to correspond to some ground-truth orthographic edit distance.

Weiran Wang & Karen Livescu

Toyota Technological Institute at Chicago
Chicago, IL 60637, USA

ABSTRACT

Recent work has begun exploring neural acoustic word embeddings--fixed-dimensional vector representations of arbitrary-length speech segments corresponding to words. Such embeddings are applicable to speech retrieval and recognition tasks, where reasoning about whole words may make it possible to avoid ambiguous sub-word representations. The main idea is to map acoustic sequences to fixed-dimensional vectors such that examples of the same word are mapped to similar vectors, while different-word examples are mapped to very different vectors. In this work we take a multi-view approach to learning acoustic word embeddings, in which we jointly learn to embed acoustic sequences and their corresponding character sequences. We use deep bidirectional LSTM embedding models and multi-view contrastive losses. We study the effect of different loss
variants, including fixed-margin and cost-sensitive losses. Our acoustic word embeddings improve over previous approaches for the task of word discrimination. We also present results on other tasks that are enabled by the multi-view approach, including cross-view word discrimination and word similarity.

In this section, we first introduce our approach for learning acoustic word embeddings in a multi-view setting, after briefly reviewing related approaches to put ours in context. We then discuss the particular neural network architecture we use, based on bidirectional long short-term memory (LSTM) networks (Hochreiter & Schmidhuber, 1997).

Previous approaches have focused on learning acoustic word embeddings in a "single-view" setting. In the simplest approach, one uses supervision of the form "acoustic segment x is an instance of the word y", and trains the embedding to be discriminative of the word identity. Formally, given a dataset of paired acoustic segments and word labels {(x_i, y_i)}_{i=1}^N, this approach solves the following optimization:

min_{f,h} obj_classify := (1/N) Σ_{i=1}^N ℓ(h(f(x_i)), y_i),    (1)

where network f maps an acoustic segment into a fixed-dimensional feature vector/embedding, h is a classifier that predicts the corresponding word label from the label set of the training data, and the loss ℓ measures the discrepancy between the prediction and the ground-truth word label (one can use any multi-class classification loss here, and a typical choice is the cross-entropy loss where h has a softmax top layer). The two networks f and h are trained jointly. Equivalently, one could consider the composition h(f(x)) as a classifier network, and use any intermediate layer's activations as the features. We refer to the objective in (1) as the "classifier network" objective, which has been used in several prior studies on acoustic word embeddings (Bengio & Heigold, 2014; Kamper et al., 2016; Settle & Livescu, 2016).

This objective, however, is not ideal for learning acoustic word embeddings. This is because the set of possible word labels is huge, and we may not have enough instances of each label to train a good classifier. In downstream tasks, we may encounter acoustic segments of words that did not appear in the embedding training set, and it is not clear that the classifier-based embeddings will have reasonable behavior on previously unseen words.

An alternative approach, based on Siamese networks (Bromley et al., 1993), uses supervision of the form "segment x^1 is similar to segment x^2, and is not similar to segment x^3", where two segments are considered similar if they have the same word label and dissimilar otherwise. Models based on Siamese networks have been used for a variety of representation learning problems in NLP (Hu et al., 2014; Wieting et al., 2016), vision (Hadsell et al., 2006), and speech (Synnaeve et al., 2014; Kamper et al., 2015), including acoustic word embeddings (Kamper et al., 2016; Settle & Livescu, 2016). A typical objective in this category enforces that the distance between (x^1, x^3) is larger than the distance between (x^1, x^2) by some margin:

min_f obj_siamese := (1/N) Σ_{i=1}^N max(0, m + dis(f(x_i^1), f(x_i^2)) − dis(f(x_i^1), f(x_i^3))),    (2)

where the network f extracts the fixed-dimensional embedding, the distance function dis(·, ·) measures the distance between the two embedding vectors, and m > 0 is the margin parameter. The term "Siamese" (Bromley et al., 1993; Chopra et al., 2005) refers to the fact that the triplet (x^1, x^2, x^3) shares the same embedding network f.

Unlike the classification-based loss, the Siamese network loss does not enforce hard decisions on the label of each segment. Instead, it tries to learn embeddings that respect distances between word pairs, which can be helpful for dealing with unseen words. The Siamese network approach also uses more examples in training, as one can easily generate many more triplets than (segment, label) pairs, and it is not limited to those labels that occur a sufficient number of times in the training set.
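A minimal forward-pass sketch of one term of the Siamese objective (2) with cosine distance (our illustration; f is any embedding function):

```python
import numpy as np

def cos_dist(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def siamese_term(f, x1, x2, x3, m=0.5):
    """x1 and x2 share a word label; x3 has a different label."""
    e1, e2, e3 = f(x1), f(x2), f(x3)
    return max(0.0, m + cos_dist(e1, e2) - cos_dist(e1, e3))
```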
¹Our tensorflow implementation is available at

The above approaches treat the word labels as discrete classes, which ignores the similarity between different words, and does not take advantage of the more complex information contained in the character sequences corresponding to word labels. The orthography naturally reflects some aspects of similarity between the words' pronunciations, which should also be reflected in the acoustic embeddings. One way to learn features from multiple sources of complementary information is using a multi-view representation learning setting. We take this approach, and consider the acoustic segment and the character sequence to be two different views of the pronunciation of the word.

While many deep multi-view learning objectives are applicable (Ngiam et al., 2011; Srivastava & Salakhutdinov, 2014; Sohn et al., 2014; Wang et al., 2015), we consider the multi-view contrastive loss objective of Hermann & Blunsom (2014), which is simple to optimize and implement and performs well in practice. In this algorithm, we embed acoustic segments x by a network f and character label sequences c by another network g into a common space, and use weak supervision of the form "for a paired segment x+ and its character label sequence c+, the distance between their embeddings is much smaller than the distance between embeddings of x+ and an unmatched character label sequence c−". Formally, we optimize the following objective with such supervision:

min_{f,g} obj_0 := (1/N) Σ_{i=1}^N max(0, m + dis(f(x_i^+), g(c_i^+)) − dis(f(x_i^+), g(c_i^−))).    (3)

Note that in the multi-view setting, we have multiple ways of generating triplets that contain one positive pair and one negative pair each.
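One term of obj_0 in the same style (ours; in practice the negative character sequence is sampled uniformly from the other training labels, as described later in Section 4.2):

```python
import numpy as np

def cos_dist(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def obj0_term(f, g, x_pos, c_pos, c_neg, m=0.5):
    """Embed the acoustic segment with f and the character sequences with g;
    penalize when the matched pair is not closer than the mismatched pair
    by margin m."""
    fx = f(x_pos)
    return max(0.0, m + cos_dist(fx, g(c_pos)) - cos_dist(fx, g(c_neg)))
```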
Below are the other three objectives we explore in this paper:

min_{f,g} obj_1 := (1/N) Σ_{i=1}^N max(0, m + dis(f(x_i^+), g(c_i^+)) − dis(f(x_i^+), f(x_i^−))),
min_{f,g} obj_2 := (1/N) Σ_{i=1}^N max(0, m + dis(f(x_i^+), g(c_i^+)) − dis(f(x_i^−), g(c_i^+))),
min_{f,g} obj_3 := (1/N) Σ_{i=1}^N max(0, m + dis(f(x_i^+), g(c_i^+)) − dis(g(c_i^+), g(c_i^−))).    (4)

Finally, thus far we have considered losses that do not explicitly take into account the degree of difference between the positive and negative pairs (although the learned embeddings may implicitly learn this through the relationship between sequences in the two views). We also consider a cost-sensitive objective designed to explicitly arrange the embedding space such that word similarity is respected. In (3), instead of a fixed margin m, we use:

m(c^+, c^−) := m_max · min(t_max, editdis(c^+, c^−)) / t_max,    (5)

where t_max > 0 is a threshold for edit distances (all edit distances above t_max are considered equally bad), and m_max is the maximum margin we impose. The margin is set to m_max if the edit distance between two character sequences is above t_max; otherwise it scales linearly with the edit distance editdis(c^+, c^−). We use the Levenshtein distance as the edit distance. Here we explore the cost-sensitive margin with obj_0, but it could in principle be used with other objectives as well.

Figure 1: Illustration of our embedding architecture and contrastive multi-view approach. (The diagram shows a stacked bidirectional LSTM per view, mapping input acoustic features x to the acoustic embedding f(x) = [f1(x) f2(x)] and input character sequences c to the character embedding g(c) = [g1(c) g2(c)].)"}, {"section_index": "3", "section_name": "2.2 RECURRENT NEURAL NETWORK ARCHITECTURE", "section_text": "Since the inputs of both views have a sequential structure, we implement both f and g with recurrent neural networks, and in particular long short-term memory (LSTM) networks. Recurrent neural networks are the state-of-the-art models for a number of speech tasks including speech recognition (Graves et al., 2013), and LSTM-based acoustic word embeddings have produced the best results on one of the tasks in our experiments (Settle & Livescu, 2016).

As shown in Figure 1, our f and g are produced by multi-layer (stacked) bidirectional LSTMs. The inputs can be any frame-level acoustic feature representation and vector representation of the characters in the orthographic input. At each layer, two LSTM cells process the input sequence from left to right and from right to left respectively. At intermediate layers, the outputs of the two LSTMs at each time step are concatenated to form the input sequence to the next layer. At the top layer, the last time step outputs of the two LSTMs are concatenated to form a fixed-dimensional embedding of the view, and the embeddings are then used to calculate the cosine distances in our objectives.
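A PyTorch sketch of the per-view embedder (the paper's released implementation is in tensorflow; dimensions follow the hyperparameters reported in Section 4.2):

```python
import torch
import torch.nn as nn

class ViewEmbedder(nn.Module):
    """Stacked bidirectional LSTM; the embedding concatenates the final
    output of each direction of the top layer (a sketch, not the released
    implementation)."""
    def __init__(self, input_dim, hidden=512, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.hidden = hidden

    def forward(self, x):                     # x: (batch, time, input_dim)
        out, _ = self.lstm(x)                 # (batch, time, 2 * hidden)
        fwd = out[:, -1, :self.hidden]        # forward direction, last step
        bwd = out[:, 0, self.hidden:]         # backward direction, last step
        return torch.cat([fwd, bwd], dim=-1)  # 1024-dim if hidden=512
```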
We are aware of no prior work on multi-view learning of acoustic and character-based word embeddings. However, acoustic word embeddings learned in other ways have recently begun to be studied. Levin et al. (2013) proposed an approach for embedding an arbitrary-length segment of speech as a fixed-dimensional vector, based on representing each word as a vector of dynamic time warping (DTW) distances to a set of template words. This approach produced improved performance on a word discrimination task compared to using raw DTW distances, and was later also applied successfully for a query-by-example task (Levin et al., 2015). One disadvantage of this approach is that, while DTW handles the issue of variable sequence lengths, it is computationally costly and involves a number of DTW parameters that are not learned.

Kamper et al. (2016) and Settle & Livescu (2016) later improved on Levin et al.'s word discrimination results using convolutional neural networks (CNNs) and recurrent neural networks (RNNs) trained with either a classification or contrastive loss. Bengio & Heigold (2014) trained convolutional neural network (CNN)-based acoustic word embeddings for rescoring the outputs of a speech recognizer, using a loss combining classification and ranking criteria. Maas et al. (2012) trained a CNN to predict a semantic word embedding from an acoustic segment, and used the resulting embeddings as features in a segmental word-level speech recognizer. Harwath and Glass (Harwath & Glass, 2015; Harwath et al., 2016; Harwath & Glass, 2017) jointly trained CNN embeddings of images and spoken captions, and showed that word-like unit embeddings can be extracted from the speech model. CNNs require normalizing the duration of the input sequences, which has typically been done via padding. RNNs, on the other hand, are more flexible in dealing with very different-length sequences. Chen et al. (2015) used long short-term memory (LSTM) networks with a classification loss to embed acoustic words for a simple (single-query) query-by-example search task. Chung et al. (2016) learned acoustic word embeddings based on recurrent neural network (RNN) autoencoders, and found that they improve over DTW for a word discrimination task similar to that of Levin et al. (2013). Audhkhasi et al. (2017) learned autoencoders for acoustic and written words, as well as a model for comparing the two, and applied these to a keyword search task.

Evaluation of acoustic word embeddings in downstream tasks such as speech recognition and search can be costly, and can obscure details of embedding models and training approaches. Most evaluations have been based on word discrimination - the task of determining whether two speech segments correspond to the same word or not - which can be seen as a proxy for query-by-example search (Levin et al., 2013; Kamper et al., 2016; Settle & Livescu, 2016; Chung et al., 2016). One difference between word discrimination and search/recognition tasks is that in word discrimination the word boundaries are given. However, prior work has been able to apply results from word discrimination (Levin et al., 2013) to improve a query-by-example system without known word boundaries (Levin et al., 2015), by simply applying their embeddings to non-word segments as well.

The only prior work focused on vector embeddings of character sequences explicitly aimed at representing their acoustic similarity is that of Ghannay et al. (2016), who proposed evaluations based on nearest-neighbor retrieval, phonetic/orthographic similarity measures, and homophone disambiguation. We use related tasks here, as well as acoustic word discrimination for comparison with prior work on acoustic embeddings.

The ultimate goal is to gain improvements in speech systems where word-level discrimination is needed, such as speech recognition and query-by-example search. However, in order to focus on the content of the embeddings themselves and to more quickly compare a variety of models, it is desirable to have surrogate tasks that serve as intrinsic measures of performance. Here we consider three forms of evaluation, all based on measuring whether cosine distances between learned embeddings correspond well to desired properties.

In the first task, acoustic word discrimination, we are given a pair of acoustic sequences and must decide whether they correspond to the same word or to different words. This task has been used in several prior papers on acoustic word embeddings (Kamper et al., 2015; 2016; Chung et al., 2016; Settle & Livescu, 2016) and is a proxy for query-by-example search.
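As a concrete sketch of the discrimination protocol detailed in the next paragraph (ours; it scores all pairs at once and uses scikit-learn's average precision):

```python
import numpy as np
from sklearn.metrics import average_precision_score

def discrimination_ap(embeddings, labels):
    """AP for word discrimination: same-word pairs should have smaller
    cosine distance than different-word pairs."""
    X = np.asarray(embeddings, dtype=float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T                           # cosine similarity
    i, j = np.triu_indices(len(X), k=1)     # all unordered pairs
    y = (np.asarray(labels)[i] == np.asarray(labels)[j]).astype(int)
    return average_precision_score(y, sim[i, j])  # higher sim => "same"
```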
For each given spoken word pair, we calculate the cosine distance between their embeddings. If the cosine distance is below a threshold, we output \"yes\"' (same word), otherwise we output \"no' (different words). The performance measure is the average precision (AP), which is the area under the precision-recall curve generated by varying the threshold and has a maximum value of 1.\nIn our multi-view setup, we embed not only the acoustic words but also the character sequence This allows us to use our embeddings also for tasks involving comparisons between written ar spoken words. For example, the standard task of spoken term detection (Fiscus et al.l[2007) involv searching for examples of a given text query in spoken documents. This task is identical to quer oy-example except that the query is given as text. In order to explore the potential of multi-vie embeddings for such tasks, we design another proxy task, cross-view word discrimination. He we are given a pair of inputs, one a written word and one an acoustic word segment, and our tas is to determine if the acoustic signal is an example of the written word. The evalution proceec analogously to the acoustic word discrimination task: We output \"yes' if the cosine distance b tween the embeddings of the written and spoken sequences are below some threshold, and measu performance as the average precision (AP) over all thresholds.\nFinally, we also would like to obtain a more fine-grained measure of whether the learned embeddings. capture our intuitive sense of similarity between words. Being able to capture word similarity may. also be useful in building query or recognition systems that fail gracefully and produce human-. like errors. For this purpose we measure the rank correlation between embedding distances and. character edit distances. This is analogous to the evaluation of semantic word embeddings via the rank correlation between embedding distances and human similarity judgments (Finkelstein et al. 2001,[Hill et al.]2015). In our case, however, we do not use human judgments since the ground-truth edit distances themselves provide a good measure. We refer to this as the word similarity task and we apply this measure to both pairs of acoustic embeddings and pairs of character sequence embeddings. Similar measures have been proposed byGhannay et al.[(2016) to evaluate acoustic word embeddings, although they considered only near neighbors of each word whereas we consider the correlation across the full range of word pairs..\nWe use the same experimental setup and data as in Kamper et al.(2015]2016); Settle & Livescu (2016). The task and setup were first developed by (Carlin et al.]2011). The data is drawn from the Switchboard English conversational speech corpus (Godfrey et al.. 1992).The spoken word segments range in duration from 50 to 200 frames (0.5 - 2 seconds). The train/dev/test splits contain 9971/10966/11024 pairs of acoustic segments and character sequences, corresponding tc 1687/3918/3390 unique words. In computing the AP for the dev or test set, all pairs in the set are. used, yielding approximately 60 million word pairs.."}, {"section_index": "4", "section_name": "4.2 MODEL DETAILS AND HYPERPARAMETER TUNING", "section_text": "We experiment with different neural network architectures for each view, varying the number of. stacked LSTM layers, the number of hidden units for each layer, and the use of single- or bidirec. tional LSTM cells. A coarse grid search shows that 2-layer bidirectional LSTMs with 512 hidden. 
units per direction per layer perform well on the acoustic word discrimination task, and we keep. this structure fixed for subsequent experiments (see Appendix[Afor more details). We use the out-. puts of the top-layer LSTMs as the learned embedding for each view, which is 1024-dimensional if. bidirectional LSTMs are used.\nIn training, we use dropout on the inputs of the acoustic view and between stacked layers for both views. The architecture is illustrated in Figure[1] For each training example, our contrastive losses require a corresponding negative example. We generate a negative character label sequence by uni formly sampling a word label from the training set that is different from the positive label. We perform a new negative label sampling at the beginning of each epoch. Similarly, negative acoustic feature sequences are uniformly sampled from all of the differently labeled acoustic feature se quences in the training set.\nThe network weights are initialized with values sampled uniformly from the range [-0.05, 0.05] We use the Adam optimizer (Kingma & Ba]2015) for updating the weights using mini-batches of 20 acoustic segments, with an initial learning rate tuned over {0.0001, 0.001}. Dropout is used at each layer, with the rate tuned over {0, 0.2, 0.4, 0.5}, in which 0.4 usually outperformed others. The margin in our basic contrastive objectives 0-3 is tuned over {0.3, 0.4, 0.5, 0.6, 0.7}, out of which 0.4 and 0.5 typically yield best results. For obj with the cost-sensitive margin, we tune the maximum margin mmax over {0.5, 0.6, 0.7} and the threshold tmax over {9, 11, 13}. We train each model for up to 1000 epochs. The model that gives the best AP on the development set is used for evaluation on the test set."}, {"section_index": "5", "section_name": "1.3 EFFECTS OF DIFFERENT OBJECTIVES", "section_text": "Table 1shows the development set AP for acoustic and cross-view word discrimination achieved. using the various objectives. We tuned the objectives for the acoustic discrimination task, and then used the corresponding converged models for the cross-view task. Of the simple contrastive objec-. tives, obj and obj? (which involve only cross-view distances) slightly outperform the other two on. the acoustic word discrimination task. The best-performing objective is the \"symmetrized\"' objective. objo + obj?, which significantly outperforms all individual objectives (and the combination of the. four). Finally, the cost-sensitive objective is very competitive as well, while falling slightly short of the best performance. We note that a similar objective to our obj + obj2 was used byVendrov. et al.[(2016) for the task of caption-image retrieval, where the authors essentially use all non-paired\nIn the experiments described below, we first focus on the acoustic word discrimination task for pur poses of initial exploration and hyperparameter search, and then largely fix the models for evaluation using the cross-view word discrimination and word similarity measures.\nThe input to the embedding model in the acoustic view is a sequence of 39-dimensional vectors (one per frame) of standard mel frequency cepstral coefficients (MFCCs) and their first and second derivatives. 
The input to the character sequence embedding model is a sequence of 26-dimensional one-hot vectors indicating each character of the word's orthography.\n0.8 0.7 0.6 0.5 0.4 0.3 0.2 obj 0 obj 2 0.1 obj 0 + obj 2 0.0 0 200 400 600 800 1000 Epochs\nFigure 2: Development set AP for several objec- tives on acoustic word discrimination.."}, {"section_index": "6", "section_name": "Method", "section_text": "examples from the other view in the minibatch as negative examples (instead of random samplin one negative example as we do) to be contrasted with one paired example.\nFigure |2|shows the progression of the development set AP for acoustic word discrimination over 1000 training epochs, using several of the objectives, where AP is evaluated every 5 epochs. We observe that even after 1000 epochs, the development set AP has not quite saturated, indicating that it may be possible to further improve performance\nOverall, our best-performing objective is the combined objo + obj?, and we use it for reporting final. test-set results. Table2 shows the test set AP for both the acoustic and cross-view tasks using our final model (\"multi-view LSTM'). For comparison, we also include acoustic word discrimination results reported previously byKamper et al.(2016); Settle & Livescu(2016). Previous approaches have not addressed the problem of learning embeddings jointly with the text view, so they can not be evaluated on the cross-view task.."}, {"section_index": "7", "section_name": "4.4 WORD SIMILARITY TASKS", "section_text": "Table3|gives our results on the word similarity tasks, that is the rank correlation (Spearman's p) be. tween embedding distances and orthographic edit distance (Levenshtein distance between characte . sequences). We measure this correlation for both our acoustic word embeddings and for our tex embeddings. In the case of the text embeddings, we could of course directly measure the Leven. shtein distance between the inputs; here we are simply measuring how much of this information the. text embeddings are able to retain.\nObjective Dev AP Dev AP (acoustic) (cross-view) obj0 0.659 0.791 obj1 0.654 0.807 obj2 0.675 0.788 obj3 0.640 0.782 obj0 + obj2 0.702 0.814 Li=0 13 obj 0.672 0.804 cost-sensitive 0.671 0.802\neveral objec- Table 1: Word discrimination performance n. with different objectives\nTable 2: Final test set AP for different word discrimination approaches. The first line is a baseline. using no word embeddings, but rather applying dynamic time warping (DTW) to the input MFCC. features. The second and third lines are prior results using no word embeddings (but rather using DTW with learned correspondence autoencoder-based or phone posterior features, trained on larger. external (in-domain) data). The remaining prior work corresponds to using cosine similarity between acoustic word embeddings.\nAlthough we have trained our embeddings using orthographic labels, it is also interesting to con sider how closely aligned the embeddings are with the corresponding phonetic pronunciations. For comparison, the rank correlation between our acoustic embeddings and phonetic edit distances is 0.226, and for our text embeddings it is 0.241, which are relatively close to the rank correlations with orthographic edit distance. 
A future direction is to directly train embeddings with phonetic sequence supervision rather than orthography; this setting involves somewhat stronger supervision, but it is easy to obtain in many cases.\nAnother interesting point is that the performance is not a great deal better for the text embeddings. than for the acoustic embeddings, even though the text embeddings have at their disposal the text input itself. We believe this has to do with the distribution of words in our data: While the data includes a large variety of words, it does not include many very similar pairs. In fact, of all pos- sible pairs of unique training set words, fewer than 2% have an edit distance below 5 characters.. Therefore, there may not be sufficient information to learn to distinguish detailed differences among. character sequences, and the cost-sensitive loss ultimately does not learn much more than to separate different words. In future work it would be interesting to experiment with data sets that have a larger. variety of similar words."}, {"section_index": "8", "section_name": "4.5 VISUALIZATION OF LEARNED EMBEDDINGS", "section_text": "Figure [3]gives a 2-dimensional t-SNE (van der Maaten & Hinton]2008) visualization of selected acoustic and character sequences from the development set, including some that were seen in the training set and some previously unseen words. The previously seen words in this figure were selected uniformly at random among those that appear at least 15 times in the development set. (the unseen words are the only six that appear at least 15 times in the development set). This. visualization demonstrates that the acoustic embeddings cluster very tightly and are very close to. the text embeddings, and that unseen words cluster nearly as well as previously seen ones..\nWhile Figure [3 shows the relationship among the multiple acoustic embeddings and the text em- beddings, the words are all very different so we cannot draw conclusions about the relationships. between words. Figure 4|provides another visualization, this time exploring the relationship among the text embeddings of a number of closely related words, namely all development set words end-. ing in \"-ly\", \"-ing\", and \"-tion\". This visualization confirms that related words are embedded close together, with the words sharing a suffix forming fairly well-defined clusters.."}, {"section_index": "9", "section_name": "5 CONCLUSION", "section_text": "We have presented an approach for jointly learning acoustic word embeddings and their orthographic counterparts. This multi-view approach produces improved acoustic word embedding performance over previous approaches, and also has the benefit that the same embeddings can be applied for both spoken and written query tasks. We have explored a variety of contrastive objectives: ones with a fixed margin that aim to separate same and different word pairs, as well as a cost-sensitive loss that aims to capture orthographic edit distances. While the losses generally perform similarly for word discrimination tasks, the cost-sensitive loss improves the correlation between embedding distances and orthographic distances. One interesting direction for future work is to directly use knowledge about phonetic pronunciations, in both evaluation and training. Another direction is to extend our approach to directly train on both word and non-word segments.\nTable 3: Word similarity results using fixed-margin and cost-sensitive objectives, given as rank. 
correlation (Spearman's p) between embedding distances and orthographic edit distances\nInterestingly, while the cost-sensitive objective did not produce substantial gains on the word dis crimination tasks above, it does greatly improve the performance on this word similarity measure. This is a satisfying observation, since the cost-sensitive loss is trying to improve precisely this rela- tionship between distances in the embedding space and the orthographic edit distance.\n20 15 decided 10 $ervice. goodness 5 business 0 RANGERS CAMPING RESTANRAUSPHERE - 5 mething -10 COLORADO MOUNTAINS -15 aprogram -20 -25 1 25 -20 -15 -10 -5 0 5 10 15\nFigure 3: Visualization via t-SNE of acoustic word embeddings (colored markers) and correspond ing character sequence embeddings (text), for a set of development set words with at least 15 acoustic tokens. Words seen in training are in lower-case; unseen words are in upper-case.\n15 ceiiaadling weetingg peratiaraxtuition rebuldinag 10 at conventior hhaaftiaaysting ppae onon oation rioton appghapaneraresen brotessggg obvioes apn bowhogsolraa addna #eaton 5 ulatif encentration sorsethatging pccaeler trampling acciden' halding hopefully.dis.. eely basic selection mgeably traditionanitattmatel 0 unfairly natural populationeceinisterin barking .graduation -5 remembering Jegislation HesaPely demonstration -10 -15 -4 -2 0 2 4 6 8 10 12 14\nFigure 4: Visualization via t-SNE of character sequence embeddings for words with the suffixes -ly\" (blue), \"-ing\" (red), and \"-tion' (green)."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "This research was supported by a Google Faculty Award and by NSF grant IIS-1321015. The. opinions expressed in this work are those of the authors and do not necessarily reflect the views o the funding agency. This research used GPUs donated by NVIDIA Corporation. We thank Hermar. Kamper and Shane Settle for their assistance with the data and experimental setup."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Xavier Anguera, Luis Javier Rodriguez-Fuentes, Igor Szoke, Andi Buzo, and Florian Metze. Query by example search on speech at mediaeval 2014. In MediaEval, 2014..\nKartik Audhkhasi, Andrew Rosenberg, Abhinav Sethy, Bhuvana Ramabhadran, and Brian Kings. bury. End-to-end ASR-free keyword search from speech. arXiv preprint arXiv:1701.04313, 2017\nSamy Bengio and Georg Heigold. Word embeddings for speech recognition. In IEEE Int. Conf Acoustics, Speech and Sig. Proc.. 2014.\nYoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learing Research, 3(Feb):1137-1155, 2003.\nJane Bromley, Isabelle Guyon, Yann Lecun, Eduard Sackinger, and Roopak Shah. Signature verifi cation using a siamese time delay neural network. In Advances in Neural Information Processing. Systems (NIPS), pp. 737-744, 1993.\nMichael A Carlin, Samuel Thomas, Aren Jansen, and Hynek Hermansky. Rapid evaluation of speecl representations for spoken term discovery. In Proc. Interspeech, 2011..\nGuoguo Chen, Carolina Parada, and Tara N Sainath. Query-by-example keyword spotting using long short-term memory networks. In Proc. ICASSP, 2015.\nSumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, witl application to face verification. In IEEE Computer Society Conf. Computer Vision and Patter. Recognition, pp. 539-546, 2005.\nJonathan G Fiscus, Jerome Ajot, John S Garofolo, and George Doddingtion. 
Results of the 200e spoken term detection evaluation. In Proc. SiGIR, volume 7, pp. 51-57. Citeseer, 2007..\nJort F Gemmeke, Tuomas Virtanen, and Antti Hurmalainen. Exemplar-based sparse representations for noise robust automatic speech recognition. IEEE Transactions on Acoustics, Speech, ana. Language Processing, 19(7):2067-2080. 2011.\nSahar Ghannay, Yannick Esteve, Nathalie Camelin, and Paul Deleglise. Evaluation of acoustic word embeddings. In Proc. ACL Workshop on Evaluating Vector-Space Representations for NLP, 2016\nJohn J Godfrey, Edward C Holliman, and Jane McDaniel. SWITCHBOARD: Telephone speecl corpus for research and development. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc.. 1992.\nAlex Graves, Abdel rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recur rent neural networks. In JEEE Jnt. Conf Acousti. Sneech and Sig. Proc.. 2013.\nDavid Harwath and James Glass. Deep multimodal semantic en peech and images. In\nRaia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant oonition?OO6\nKarl Moritz Hermann and Phil Blunsom. Multilingual distributed representations without worc alignment. In Int. Conf. Learning Representations, 2014. arXiv:1312.6173 [cs.CL].\nFelix Hill, Roi Reichart, and Anna Korhonen. SimLex-999: Evaluating semantic models with (gen uine) similarity estimation. Computational Linguistics, 41(4), 2015..\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8) 1735-1780, 1997.\nHerman Kamper, Weiran Wang, and Karen Livescu. Deep convolutional acoustic word embeddings. using word-pair side information. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc.. 2016.\nRyan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Tor ralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems (NIPS), 2015.\nKeith Levin, Aren Jansen, and Benjamin Van Durme. Segmental acoustic indexing for zero resource keyword search. In IEEE Int. Conf. Acoustics, Speech and Sig. Proc., 2015.\nShane Settle and Karen Livescu. Discriminative acoustic word embeddings: Recurrent neura network-based approaches. In Proc. IEEE Workshop on Spoken Language Technology (SLT) 2016.\nJiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Ng. Multimodal deep learning. In ICML, pp. 689-696, 2011.\nRichard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. Grounded compositional semantics for finding and describing images with sentences. Trans- actions of the Association for Computational Linguistics, 2:207-218, 2014.\nLaurens J. P. van der Maaten and Geoffrey E. Hinton. Visualizing data using t-SNE. Journal of Machine Learing Research, 9:2579-2605, November 2008\nWeiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In ICML, pp. 1083-1092, 2015.\nJohn Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Towards universal paraphrastic sentence embeddings. In Int. Conf. Learning Representations, 2016"}, {"section_index": "12", "section_name": "A ADDITIONAL ANALYSIS", "section_text": "We first explore the effect of network architectures for our embedding models. 
We learn embeddings using objective obj and evaluate them on the acoustic and cross-view word discrimination tasks The resulting average precisions on the development set are given in Table 4] All of the models were trained for 1o0o epochs, except for the 1-layer unidirectional models which converged after 500 epochs. It is clear that bidirectional LSTMs are more successful than unidirectional LSTMs for these tasks, and two layers of LSTMs are much better than a single layer of LSTMs. We did not observe significant further improvement by using more than two layers of LSTMs. For all other experiments, we fix the architecture to 2-layer bidirectional LSTMs for each view.\nAremnleclure Dev AP Dev AP (acoustic word discrimination) (cross-view word discrimination) 1-layer unidirectional 0.379 0.616 1-layer bidirectional 0.466 0.690 2-layer bidirectional 0.659 0.791\n1.0 1.4 1.2 0.8 1.0 0.8 0.6 0.6 000 0.4 0.4 .. 0.2 0.2 0.0 0.0 0.2 0.0 0.2 0.4 0.6 0.8 1.0 -5 0 5 10 15 20 Recall Orthographic edit distances\nFigure 5: Precision-recall curve (left: two-layer bidirectional LSTM trained with obj + obj2 for word discrimination task) and scatter plot of embedding distances vs. orthographic distances (right: cost-sensitive margin model for word similarity task). for our best embedding models\nIn Figure[5|we also give the precision-recall curve for our best models, as well as the scatter plot oi cosine distances between acoustic embeddings vs. orthographic edit distances.\nTable 4: Average precision (AP) for acoustic and cross-view word discrimination tasks on the de velopment set, using embeddings learned with objective obj and different LSTM architectures."}] |
r1br_2Kge | [{"section_index": "0", "section_name": "SHORT AND DEEP: SKETCHING AND NEURAL NETWORKS", "section_text": "Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar.\nData-independent methods for dimensionality reduction such as random projec- tions, sketches, and feature hashing have become increasingly popular in recent years. These methods often seek to reduce dimensionality while preserving the hypothesis class, resulting in inherent lower bounds on the size of projected data For example, preserving linear separability requires N(1/y2) dimensions, where is the margin, and in the case of polynomial functions, the number of required di- mensions has an exponential dependence on the polynomial degree. Despite these limitations, we show that the dimensionality can be reduced further while main taining performance guarantees, using improper learning with a slightly larger hypothesis class. In particular, we show that any sparse polynomial function of a sparse binary vector can be computed from a compact sketch by a single-layer neu- ral network, where the sketch size has a logarithmic dependence on the polyno mial degree. A practical consequence is that networks trained on sketched data are compact, and therefore suitable for settings with memory and power constraints. We empirically show that our approach leads to networks with fewer parameters than related methods such as feature hashing, at equal or better performance."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In many supervised learning problems, input data are high-dimensional and sparse. The high di mensionality may be inherent in the domain, such as a large vocabulary in a language model, o the result of creating hybrid conjunction features. This setting poses known statistical and compu tational challenges for standard supervised learning techniques, as high-dimensional inputs lead tc models with a very large number of parameters.\nAn increasingly popular approach to reducing model size is to map inputs to a lower-dimensional. space in a data-independent manner, using methods such as random projections, sketches, and hash-. ing. These mappings typically attempt to preserve the hypothesis class, leading to inherent theoret-. ical limitations on size. For example, for linearly separable unit vectors with margin y, it can be shown that at least Q(1/y2) dimensions are needed to preserve linear separability, even if one can use arbitrary input embeddings (see Section|D). It would therefore appear that data dimensionality cannot be reduced beyond this bound..\nIn this work, we show that using a slightly larger hypothesis class when decoding projections (im. proper learning) allows us to further reduce dimensionality while maintaining theoretical guarantees.. In particular, we show that any sparse polynomial function of a sparse binary vector can be computed from a very compact sketch by a single-layer neural network. The hidden layer allows us to \"decode' inputs from representations that are smaller than in existing work. In the simplest case, we show. that for linearly separable k-sparse d-dimensional inputs, one can create a O(k log )-dimensional sketch of the inputs and guarantee that a single-layer neural network can correctly classify 1 fraction of the sketched data. In the case of polynomial functions, the required sketch size has a logarithmic dependence on the polynomial degree..\n*Email: kunal@ google.com. 
Author for correspondences\nFor binary k-sparse input vectors, we show that it suffices to have a simple feed-forward network. with nonlinearity implemented via the commonly used rectified linear unit (Relu). We extend our. results to real-valued data that is close to being k-sparse, using less conventional min and median nonlinearities. Furthermore, we show that data can be mapped using sparse sketching matrices.\nThus, our sketches are efficient to compute and do not increase the number of non-zero input values by much, in contrast to standard dense Gaussian projections..\nWe empirically evaluate our sketches on real and synthetic datasets. Our approach leads to more compact neural networks than existing methods such as feature hashing and Gaussian random pro jections, at competitive or better performance. This makes our sketches appealing for deployment in settings with strict memory and power constraints, such as mobile and embedded devices..\nTo put our work in context, we next summarize some lines of research related to this work\nRandom projections and sketching. Random Gaussian projections are by now a standard tool for dimensionality reduction. For general vectors, the Johnson-Lindenstrauss (1984) Lemma implies that a random Gaussian projection into O(log(1/)/e2) dimensions preserves the inner product be- tween a pair of unit vectors up to an additive factor e, with probability 1- d. A long line of work has sought sparser projection matrices with similar guarantees; see (Achlioptas2003} Ailon & Chazelle2009] Matousek2008 Dasgupta et al.]2010] Braverman et al.2010 Kane & Nelson 2014 Clarkson & Woodruff2013). Research in streaming and sketching algorithms has addressed related questions. Alon et al. (1999) showed a simple hashing-based algorithm for unbiased estima tors for the Euclidean norm in the streaming setting. Charikar et al. (2004) showed an algorithm for the heavy-hitters problem based on the count sketch. Most relevant to our works is the count-min sketch of Cormode and Muthukrishnan (2005af |2005b).\nProjections in learning. Random projections have been used in machine learning at least since the work of Arriaga and Vempala (2006). For fast estimation of a certain class of kernel func tions, sampling has been proposed as a dimensionality reduction technique in (Kontorovich] 2007 and (Rahimi & Recht]2007). Shi et al. (2009) propose using a count-min sketch to reduce dimen-. sionality while approximately preserving inner products for sparse vectors. Weinberger et al. (2009. use the count-sketch to get an unbiased estimator for the inner product of sparse vectors and prove strong concentration bounds. Ganchev and Dredze (2008) empirically show that hashing is effective. in reducing model size without significantly impacting performance. Hashing has also been used. in Vowpal Wabbit (Langford et al.]2007). Talukdar and Cohen (2014) use the count-min sketch. in graph-based semi-supervised learning. Pham and Pagh (2013) showed that a count sketch of a tensor power of a vector could be quickly computed without explicitly computing the tensor power and applied it to fast sketching for polynomial kernels..\nCompressive sensing. Our work is also related to compressive sensing. For k-sparse vectors, results in this area, e.g. (Donoho2006Candes & Tao 2006, imply that a k-sparse vector x E Rd can be reconstructed w.h.p. from a projection of dimension O(k ln d). However, to our knowledge, no provable decoding algorithms are implementable by a low-depth neural network. 
Recent work by Mousavi et al. (2015) empirically explores using a deep network for decoding in compressive sensing and also considers learnt non-linear encodings to adapt to the distribution of inputs.\nParameter reduction in deep learning. Our work can be viewed as a method for reducing the number of parameters in neural networks. Neural networks have become ubiquitous in many ma- chine learning applications, including speech recognition, computer vision, and language processing tasks(see (Hinton et al.[|2012f|Krizhevsky et al.]2012) Sermanet et a1.[2013f|Vinyals et al.]2014) for a few notable examples). These successes have in part been enabled by recent advances in scaling up deep networks, leading to models with millions of parameters (Dean et al.]2012) Krizhevsky et al. 2012). However, a drawback of such large models is that they are very slow to train, and difficult o deploy on mobile and embedded devices with memory and power constraints. Denil et al. (2013) demonstrate significant redundancies in the parameterization of several deep learning architectures and they propose training low-rank decompositions of weight matrices. Cheng et al. (2015) im pose circulant matrix structure on fully connected layers. Ba and Caruana (2014) train shallow networks to predict the log-outputs of a large deep network, and Hinton et al.72015) train a small network to match smoothed predictions of a complex deep network or an ensemble of such models Collins and Kohli (2014) encourage zero-weight connections using sparsity-inducing priors, while others such as LeCun et al.(1989); Hassibi et al.(1993); Han et al.(2015) use techniques for prun- ing weights. HashedNets (Chen et al.2015) enforce parameter sharing between random groups\nof network parameters. In contrast to these methods, sketching only involves applying a sparse linear projection to the inputs, and does not require a specialized learning procedure or network. architecture."}, {"section_index": "2", "section_name": "3 SKETCHING", "section_text": "Given a parameter m and a hash function h : [d] -> [m], the sketch Sn(x) of a vector x E Bd.k is a binary vector y where each bit of y is the OR of bits of x that hash to it:\nYl = i:h(i)=l\nThus, the decoded bit i is simply the AND of the t bits of Y that index i hashes to in the sketch.\nPr[D h1:t\nProof. Fix a vector x E Bd.k. Let &n(i). {i' i : h(i') = h(i)} denote the collision set of i for. a particular hash function h. Decoding will fail if x, = O and for each of the t hash functions, the collision set of i for contains an index of a non-zero bit of x. For a particular h, the probability of this event is:\nk 1 Pr =1 < Pr[h(i') = h(i)]< Xi! m e i'EEn(i) i':x;=1\nwhere the second inequality follows since the sum is over at most k terms, and each term is m-1 by pairwise independence. Thus Pr[Yn,() xi] e-1 for any j E [t]. Since hash functions h; are drawn independently, and decoding can fail only if all t hash functions fail, it follows that Pr[DAND(Sh1:t(x),i) xi] e-t.\nPr 7\nSuch hash families can be easily constructed (see e.g.Mitzenmacher & Upfal(2005)), using a O(log m) bit seed. Moreover, each hash can be evaluated using O(1) arithmetic operations over O(log d)-sized words\nFor simplicity, we first present our results for sparse binary vectors, and extend the discussion to real-valued vectors to Appendix ALet Bd,k = {x E {0,1}d : x|o k} be the set of k- sparse d-dimensional binary vectors. 
We sketch such vectors using a family of randomized sketching algorithms based on the count-min sketch, as described next.\nWe map data using a concatenation of several such sketches. Given an ordered set of hash functions\nANL dej h1:t jE[t]\nThe following theorem summarizes an important property of these sketches. As a reminder, a set of hash functions h1:t from [d] to [m] is pairwise independent if for all i j E [d] and a, b E [m] Pr[h(i) = a ^ h(j) = b] = m-2\n4 SPARSE LINEAR FUNCTIONS Decoding layer Y=Sk(x) Sketching Step X X24\nFigure 1: Neural-network sketching: sparse vector x maps to sketch using t = 3 hashes & m = 8; shaded squares designate 1's; sketching step is random; sketch then used as input to single-layer net: w' x; nodes labelled \"24' & \"29' correspond to decoding of x24 & x29 and shown with non zero incoming edges\nthen set the output weights of the network to the corresponding non-zero weights w; to get w ' x\nIt remains to show that a hidden unit can implement DAND(Y,i). Indeed, the AND of t bits can n1: be implemented using nearly any non-linearity. With Relu(a) = max{0, a}, we can construct the activation for bit xi, aj = (1,) Vi Y1j + Bi, by setting the appropriate t weights in Vij to 1, setting remaining weights to 0, and setting the bias B, to 1 - t. Using Corollary|3.2 we have the following theorem.\nTheorem 4.1. For every w E Hd.s there exists a set of weights for a network N E Ns(Relu) such that for each x E Bd.k,.\nas long as m = ek and t = log(s/8). Moreover, the weights coming into each node in the hidden layer are in {0, 1} with at most t non-zeros.\nThe final property implies that when using w as a linear classifier, we get small generalization error as long as the number of examples is at least N(s(1 + t log mt)). This can be proved, e.g., using standard compression arguments: each such model can be represented using only st log(mt) bits in addition to the representation size of w. Similar bounds hold when we use l1 bounds on the weight coming into each unit. Note that even for s = d (i.e. w is unrestricted), we get non-trivial input compression.\nFor comparison, we prove the following result for Gaussian projections in the appendix [B] In thi. case, the model weights in our construction are not sparse.\nTheorem 4.2. For every w E Hd.s there exists a set of weights for a network N E Ns(Relu) such that for each x E Bd i.\nTheorem 5.1. Given w E Rs, and sets Aj. [d]. let g : {0, 1}d -> R denote the polynomia\nS S g(x)=wj 11 Xi=>Wj j=1 iEAj j=1 iEAj\nLet w E Hd.s, x E Bd.k, and Y = Sh1t (x) for m, t satisfying the conditions. of Corollary3.2 We will now argue that there exists a one-layer neural network that takes Y as input and outputs w'x. with high probability (over the random- ness of the sketching process)..\nLet Nn(f) denote the family of feed-. forward neural networks with one hidden layer containing n nodes, nonlinearity f. applied at each hidden unit, and a linear. function at the output layer. We can con-. struct a network in Ns(Relu) such that. h1:+ (i.e. decodes bit x, from the sketch) for. each index i in the support of w. We can. Onding non-zero weights w, to get w' x.\nPrh1t[N(Snt(x))=w'x]1-6,\nPrh1t[N(Gx))=w'x1-8\nFor boolean inputs, Theorem4.1|extends immediately to sparse polynomial functions. Note that we and DAND(Y, j). Since each decoding is an AND of t bits, the overall decoding is an AND of at n1 : t most 2t locations in the sketch. 
More generally, we have the following theorem:\nThen there exists a set of weights for a network N E Ns(Relu) such that for each x E Bd,k\nPrh1+ [N(Snt(x)) =g(x)>1-0\nThis is a setting where we get a significant advantage over proper learning. To our knowldege.. there is no analog of this result for Gaussian projections. Classical sketching approaches would. use a sketch of xOp, which is a kP-sparse vector over binary vectors of dimension dP. Known. sketching techniques such as Pham & Pagh|(2013) would construct a sketch of size Q(kp). Practical. techniques such as Vowpal Wabbit also construct cross features by explicitly building them and have this exponential dependence. In stark contrast, neural networks allow us to get away with a. logarithmic dependence on p.\nUsing polynomial kernels. Theorems 4.1 has a corresponding variants where the neural net is replaced by a polynomial of degree t. Similarly, the neural net in Theorem 5.1|can be replace by a degree-pt polynomial when the polynomial g has degree p. This implies that one can use a polynomial kernel to get efficient learning.\nDeterministic sketching. A natural question that arises is whether the parameters above can im. proved. We show in App.C|that if we allow large scalars in the sketches, one can construct a deter-. ministic (2k + 1)-dimensional sketch from which a shallow network can reconstruct any monomial We also show a lower bound of k on the required dimensionality..\nLower bound for proper learning. We can also show, see App.D that if one does not expand the hypothesis class, then even in the simplest of settings of linear classifiers over 1-sparse vec. tors, the required dimensionality of the projection is much larger than the dimension needed for improper learning. The result is likely folklore and thus we present a short proof in the appendix foi completeness using concrete constants in the theorem and its proof below.\nIn this section, we evaluate sketches on synthetically generated datasets for the task of polynomia. regression. In all the experiments here, we assume input dimension d = 104, input sparsity k =. 50, hypothesis support s = 300, and n = 2 10 examples. We assume that only a subset o features I C [d] are relevant for the regression task, with T = 50. To generate an hypothesis. we select s subsets of relevant features A1,..., As C I each of cardinality at most 3, and generat. the corresponding weight vector w by drawing corresponding s non-zero entries from the standar. Gaussian distribution. We generate binary feature vectors x E Bd.k as a mixture of relevant an. other features. Concretely, for each example we draw 12 feature indices uniformly at random fron I, and the remaining indices from [d]. We generate target outputs as g(x) + z, where g(x) is i1. the form of the polynomial given in Theorem|5.1] and z is additive Gaussian noise with standar. deviation 0.05. In all experiments, we train on 90% of the examples and evaluate mean square. error on the rest.\nWe first examined the effect of the sketching parameters m (hash size) and t (number of hash func-. tions) on sparse linear regression error. We generated synthetic datasets as described above (with all feature subsets in A having cardinality 1) and trained networks in Ns(Relu). The results are shown in Figure 2|(left). As expected, increasing t leads to better performance. Using hash size m less. than the input sparsity k leads to poor results, while increasing hash size beyond ek (in this case,. 
ek 136) for reasonable t yields only modest improvements..\nWe next examined the advantages of improper learning. We generated 10 sparse linear regression datasets and trained linear models and networks in N.(Relu) on original and sketched features with\nNeural nets on Boolean inputs. We remark that for Boolean inputs (irrespective of sparsity), any oolynomial with s monomials can be represented by a neural network in Ns(Relu) using the con struction in Theorem5.1\n0.7 0.25 m=25 m=136 linear 0.06 00 1 1 linear m=50 AAA m=150 I N.(Relu) A N,(Relu) 0.6 .. m=75 444 m=200 0.20 ... m=100 >> m=300 0.05 menn aenner erer 0.5 mennn aennner ener . : 0.4 0.15 0.04 0 : ............ 0.3 0.10 0.03 .... 0.2 H ......... A H 0.05 ++ : 0.1 : : ......... +44 0.02 ........ ..... ... 0.00 0.0 : 1 2 4 6 8 10 12 14 1 2 4 6 8 10 12 14 no sketch 1 2 4 6 8 10 12 14 no sketc t t t\nFigure 2: Left: effect of varying t, m for sketched 1-hidden layer network. Center: sparse linear regression on sketched data with improper learning. Right: sparse polynomial regression on sketched data..\nTable 1: Comparison of sketches and Gaussian. random projections on the sparse linear regression task (top) and sparse polynomial regression task (bottom). See text for details.\n1K 2K 3K Gaussian 0.089 0.057 0.029 Sketch t = 1 0.087 0.049 0.031 Sketch t = 2 0.072 0.041 0.023 Sketch t = 6 0.041 0.033 0.022 Gaussian 0.043 0.037 0.034 Sketch t = 1 0.041 0.036 0.033 Sketch t = 2 0.036 0.027 0.024 Sketch t = 6 0.032 0.022 0.018\nWe also compared our sketches to Gaussian random projections. We generated sparse linear and. polynomial regression datasets with the same settings as before, and reduce the dimensionality of the inputs to 1000, 2000 and 3000 using Gaussian random projections and sketches with t E {1, 2, 6} We remark that in this comparison, the column headings correspond to the total sketch size mt.. Thus, e.g., when we take t = 6, m is correspondingly reduced. We report the squared error averaged. across examples and five datasets of one-layer neural networks in Table[1] The results demonstrate. that sketches with t > 1 yield lower error than Gaussian projections. Note also that Gaussian. projections are dense and hence much slower to train..\nLinear and low degree sparse polynomials are often used for classification. Our results imply that if we have linear or a sparse polynomial with classification accuracy 1-= over some set of examples in Bd.k {0, 1}, then neural networks constructed to compute the linear or polynomial function attain accuracy of at least 1 - over the same examples. Moreover, the number of parameters in the new network is relatively small by enforcing sparsity or l1 bounds for the weights into the hidden layers. We thus get generalization bounds with negligible degradation with respect to non- sketched predictor. In this section, we evaluate sketches on the language processing classification tasks described below.\nm = 200 and several values of t. The results are shown in Figure 2(center). The neural net work yields notably better performance than a linear model. This suggests that linear classifiers are not well-preserved after projections, as the N(1/y2) projection size required for linear separability can be large. Applying a neural network to sketched data allows us to use smaller projections.\nWe repeated the previous experiment for 10 poly-. nomial regression datasets, generated with feature. subsets in A of cardinality 2 and 3. The results are shown in Figure2(right). The linear model is a bad. 
fit, showing that g(x) is not well approximated by a. linear function. Neural networks applied to sketches. succeed in learning a small model and achieve sig. nificantly lower error than a network applied to the. original features for t > 6. This suggests that reduc-. ing the input size, and consequently the number of. model parameters, can lead to better generalization. Note that previous work on hashing and projections. would imply using significantly larger sketch size for. this setting.\n0.75 * 0.92 10 0.75 81412 0.70 0.90 10 0.70 score ACecnney score 14 10 10 1 14 18 0.65 0.65 mt=1K mt=1K mt=1K 122 0000 mt=2K 190101 mt=2K mt=2K 0.86 : 4 10 mt=5K 1212 mt=5K 12 mt=5K mt=10K 0.6014 12 mt=10K 0.60 14 mt=10K +*+ original *** original 14 12 *** hash 500K 14 0.84 104 105 106 105 106 10' 105 106 Non-zero parameters in 1st layer Non-zero parameters in 1st layer Non-zero parameters in 1st layer\nFigure 3: Performance vs. number of nonzero parameters in 1st layer for Reuters (left), AG News. (center), and type tagging (right). Each color corresponds to a different sketch size and markers in. dicate the number of subsketches t. We evaluate each setting for three values of the l1 regularizatior parameter X1.\nEntity Type Tagging. Entity type tagging is the task of assigning one or more labels (such as. person, location, organization, event) to mentions of entities in text. We perform type tagging on a. corpus of new documents containing 110K mentions annotated with 88 labels (on average, 1.7 labels. per mention). Features for each mention include surrounding words, syntactic and lexical patterns leading to a very large dictionary. Similarly to previous work, we map each string feature to a 32 bit. integer, and then further reduce dimensionality using hashing or sketches. See Gillick et al.(2014. for more details on features and labels for this task..\nReuters-news Topic Classification. The Reuters RCV1 data set consists of a collection of ap-. proximately 800,o00 text articles, each of which is assigned multiple labels. There are 4 high-level categories: Economics, Commerce, Medical, and Government (ECAT, CCAT, MCAT, GCAT), and multiple more specific categories. We focus on training binary classifiers for each of the four major categories. The input features we use are binary unigram features. Post word-stemming, we get data of approximately 113,000 dimensions. The feature vectors are very sparse, however, and most. examples have fewer than 120 non-zero features\nAG-news Topic Classification. We perform topic classification on 680K articles from AG news corpus, labeled with one of 8 news categories: Business, Entertainment, Health, Sci/Tech, Sports Europe, U.S., World. For each document, we extract binary word indicator features from the title and description; in total, there are 210K unique features, and on average, 23 non-zero features per document.\nExperimental Setup. In all experiments, we use two-layer feed-forward networks with ReLU ac- tivations and 100 hidden units in each layer. We use a softmax output for multiclass classification and multiple binary logistic outputs for multilabel tasks. We experimented with input sizes of 1000. 2000, 5000, and 10,000 and reduced the dimensionality of the original features using sketches with t E {1,2, 4, 6,8,10,12, 14} blocks. In addition, we experimented with networks trained on the original features. We encouraged parameter sparsity in the first layer using l1-norm regularization and learn parameters using the proximal stochastic gradient method. 
As before, we trained on 90% of the examples and evaluated on the remaining 10%. We report accuracy values for multiclass classification, and F1 score for multilabel tasks, with true positive, false positive, and false negative counts accumulated across all labels.\nResults. Since one motivation for our work is reducing the number of parameters in neural networ models, we plot the performance metrics versus the number of non-zero parameters in the firs layer of the network. The results are shown in Figure 3 for different sketching configurations an settings of the l1-norm regularization parameters (1). On the entity type tagging task, we compare sketches to a single hash function of size 500,o00 as the number of the original features is to large. In this case, sketching allows us to both improve performance and reduce the number o parameters. On the Reuters task, sketches achieve similar performance to the original features witl fewer parameters. On AG news, sketching results in more compact models at a modest drop ii accuracy. In almost all cases, multiple hash functions yield higher accuracy than a single hasl function for similar model size\nWe have presented a simple sketching algorithm for sparse boolean inputs, which succeeds in sig. nificantly reducing the dimensionality of inputs. A single-layer neural network on the sketch can. provably model any sparse linear or polynomial function of the original input. For k-sparse vec- tors in {0, 1}d, our sketch of size O(k log s/) allows computing any s-sparse linear or polynomial. function on a 1- & fraction of the inputs. The hidden constants are small, and our sketch is sparsity. preserving. Previous work required sketches of size at least N(s) in the linear case and size at least kP for preserving degree-p polynomials. Our results can be viewed as showing a compressed sens-. ing scheme for 0-1 vectors, where the decoding algorithm is a depth-1 neural network. Our scheme. requires O(k log d) measurements, and we leave open the question of whether this can be improved to O(k log d) in a stable way. We demonstrated empirically that our sketches work well for both. linear and polynomial regression, and that using a neural network does improve over a direct linear regression. We show that on real datasets, our methods lead to smaller models with similar or better. accuracy for multiclass and multilabel classification problems. In addition, the compact sketches lead to fewer trainable parameters and faster training."}, {"section_index": "3", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We would like to thank Amir Globerson for numerous fruitful discussion and help with an early version of the manuscript."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. J Comput. Syst. Sci., 66(4):671-687. 2003.\nRosa I. Arriaga and Santosh Vempala. An algorithmic theory of learning: Robust concepts and random projec tion. Machine Learning, 63(2):161-182, 2006\nsparse Johnson-Lindenstrauss transform. CoRR, abs/1011.2590, 2010 E.J. Candes and T. Tao. Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406-5425, 2006. M. Charikar, K. Chen, and M. Farach. Finding frequent items in data streams. Theor. Comp. Sci., 312(1), 2004 Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing convolu-. tional neural networks. 
CoRR, abs/1506.04449, 2015. URLhttp: //arxiv.0rg/abs/1506. 04449 Y. Cheng, F. Yu, r. Feris, S. Kumar, A. Choudhary, and S-F. Chang. An exploration of parameter redundancy. in deep networks with circulant projections. In CVPR, pp. 2857-2865, 2015. K. Clarkson and D. Woodruff. Low rank approximation and regression in input sparsity time. In STOC, 2013.. M.D. Collins and P. Kohli. Memory bounded deep convolutional networks. CoRR, abs/1412.1442, 2014.. G. Cormode and S. Muthukrishnan. An improved data stream summary: the count-min sketch and its applica- tions. J. Algorithms, 55(1):58-75, 2005a. G. Cormode and S. Muthukrishnan. Summarizing and mining skewed data streams. In SDM, pp. 44-55, 2005b. Anirban Dasgupta, Ravi Kumar, and Tamas Sarlos. A sparse Johnson-Lindenstrauss transform. In STOC, pp.. 341-350. ACM, 2010.\nM. Charikar, K. Chen, and M. Farach. Finding frequent items in data streams. Theor. Comp. Sci., 312(1), 2004\nVladimir Braverman, Rafail Ostrovsky, and Yuval Rabani. Rademacher chaos, random Eulerian graphs and the sparse Johnson-Lindenstrauss transform. CoRR, abs/1011.2590, 2010.\nE.J. Candes and T. Tao. Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406-5425, 2006.\nJeffrey Dean. Greg Corrado. Rajat Monga. Kai Chen. Matthieu Devin. Ouoc V Le. MarcAurelio Ranzato. Marl Mao, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In. Advances in Neural Information Processing Systems, pp. 1223-1231. 2012\nDavid L Donoho. Compressed sensing. Information Theory, IEEE Transactions on, 52(4):1289-1306, 2006\nG. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012. G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. CoRR, 1503.02531, 2015. W.B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math,\n. Matousek. On variants of the Johnson-Lindenstrauss lemma. Rand. Struct. Algs., 33(2):142-156, 2008\nAli Mousavi, Ankit B. Patel, and Richard G. Baraniuk. A deep learning approach to structured signal recovery arXiv:1508.04065, 2015.\nNinh Pham and Rasmus Pagh. Fast and scalable polynomial kernels via explicit feature maps. In KDD, pr 239-247. ACM, 2013.\nA. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pp. 1177-1184, 2007\nleatures Tor Targe-scale kernel dCICS -O Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv:1312.6229, 2013. Q. Shi, J. Petterson, G. Dror, J. Langford, A. J. Smola, A. Strehl, and V. Vishwanathan. Hash kernels. In Artificial Intelligence and Statistics AISTATS'09, Florida, April 2009. P.P. Talukdar and W.W. Cohen. Scaling graph-based semi supervised learning to large number of labels using count-min sketch. In AISTATS, volume 33 of JMLR Proceedings, pp. 940-947. JMLR.org, 2014. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. CoRR, abs/1411.4555, 2014. URLhttp: //arxiv.0rg/abs/1411.4555 K. Weinberger, A. Dasgupta, J. Attenberg, J. Langford, and A. J. Smola. Feature hashing for large scale multitask learning. 
In International Conference on Machine Learning, 2009\nheadk(x) = arg min x-y y:yo< k\nIisha Denil, Babak Shakibi, Laurent Dinh, MarcAurelio Ranzato, and Nando de Freitas. Predicting parameters\nrepresent the set\n1 :\nRd.k.c ={x E Rd:tailk(x)|i < c}\nbinary case.\nSn.+(x) = [y1,...,Yt where yl = x i:h(i)=l min(yh h1:t jE[t]\nn1:1\nPr[DMIN(Sn(x),i)>xj+Ec]<1/e\nBy definition of tail(), the expectation of the second term is c/m. Using Markov's inequality, we get that\n1 Pr[ Xi Ec< Em i'ES(tailk(x))nE(i)\nk Pr Xi#0 < Pr 10 E 1 m i'ES(headk(x))nE(i) i'ES(headk(x))nE(i) i'ES(headk(x))nE()\nWjDMIN(Sh1t(x),i) -w~x| Ec|w|1 Pr <s\nThe proof in the case that x may not be non-negative is only slightly more complicated. Let us define the following additional decoding procedure,\nTheorem A.3. Let x E Rd,k,c, and let h1,..., ht be drawn uniformly and independently from a pairwise independent distribution, for m = 4e?(k + 2/e). Then for any i,\nPr D MEL (Sh1t(x),i) [xi-EC,xi+Ec] <e-t h1:t\nDMIN(Sn(x),i)-xi = Xi + Xi! i'ES(headk(x))nE(i) i'ES(tailk(x))nE(i)\nMedian Yn,(i),j h1:t jE[t]\nProof. As before, fix a vector x E Rd,k . and a specific i and h. We once again write\nDMED(Sn(x),i)-xi = Xi! + Xi i'ES(headk(x))nE(i) i'ES(tailk(x))nE(i)\n1 1 Pr Xj> EC and Pr Xi -EC L Em Em i'ES(tailk(x))nE(i) i'ES(tailk(x))nE(i)\n1 4e2\nIXi -EC,Xi + EC < Pr ) X; >t/2 1/2 1 XT 2 1 exp n 2 2 < exp(-t)\nPr <s\nTo implement DMIN() using hidden units in a neural network, we need to use a slightly non. conventional non-linearity. For a weight vector z and input vector x, a min gate implements mini:z, o Zx . Then, using Corollary|A.2 we get the following theorem..\nPrh1:t N(Sn1t(x))-w'xEc|w1< 6\nas long as m = e(k + ) and t = log(s/). Moreover, the weights coming into each node in the hidden layer are binary with t non-zeros..\nFor real vectors x, the non-linearity needs to implement a median. Nonetheless, an analogous resu still holds.\nIn this section we describe and analyze a simple decoding algorithm for Gaussian projections\nTheorem B.1. Let x E Rd, and let G be a random Gaussian matrix in Rd' d. Then for any i, ther exists a linear function f, such that\nx-xei EG|(fi(Gx)-xi)2< d'\nProof. Recall that G E Rd' d is a random Gaussian matrix where each entry is chosen i.i.d. from. N(0, 1/d'). For any i, conditioned on G,r = gji, we have that the random variable Yj|Gj; = gji is distributed according to the following distribution,.\n(Yj[Gji = gji) ~ gjixi+ ) Gji xi i'fi x:eall?/d 9a;x; + N(0 Ix.\nx-xe] j(Qji/gji)2 d\nMinimizing the variance of x; w.r.t a's gives us o q?.. Indeed, the partial derivatives are. X\nC 2Qji daji\n|x-x;e;||2 1 E[(x-x)21 d j 9ji\nwhich in expectation, now taken over the choices of gji's, is at most ||x -- x;e;||2/d'. Thus, the claim follows.\nFor comparison, if x E Bd.k, then the expected error in estimating x is. 1, so that taking d' = (k - 1) log /e2 suffices to get a error e estimate of any fixed bit with probablity 1. Setting. implies Theorem|B.2 However, note that the decoding layer is now densely connected to the input. layer. Moreover, for a k-sparse vectors x that are is necessarily binary, the error grows with the. 2-norm of the vector x, and can be arbitrarily larger than that for the sparse sketch. Note that G x still contains sufficient information to recover x with error depending only on x - head (x)|[2. To. our knowledge, all the known decoders are adaptive algorithms, and we leave open the question of. 
First we show that if we allow large scalars in the sketches, we can construct a deterministic (2k+1)-dimensional sketch from which a shallow network can reconstruct any monomial. We will also show a lower bound of k on the required dimensionality.

For every x ∈ B_{d,k} define a degree-2k univariate real polynomial by

p_x(z) = 1 - (k+1) prod_{i : x_i = 1} (z - i)^2.

Claim C.1. Suppose that x ∈ B_{d,k}, and let p_x(·) be defined as above. If x_j = 1, then p_x(j) = 1. If x_j = 0, then p_x(j) <= -k.

The deterministic sketch is the coefficient vector of p_x,

DSk_{d,k}(x) = (a_0, ..., a_{2k}) where p_x(z) = sum_{i=0}^{2k} a_i z^i,

and for a sketch y ∈ R^{2k+1} and a set A ⊆ [d] the decoding is

DecPoly_{d,k}(y, A) = ( sum_{j ∈ A} sum_{i=0}^{2k} y_i j^i ) / |A|.

Theorem C.2. For every x ∈ B_{d,k} and every non-empty A ⊆ [d],

prod_{j ∈ A} x_j = Relu(DecPoly_{d,k}(DSk_{d,k}(x), A)).

Proof. By definition,

DecPoly_{d,k}(DSk_{d,k}(x), A) = ( sum_{j ∈ A} sum_{i=0}^{2k} a_i j^i ) / |A| = ( sum_{j ∈ A} p_x(j) ) / |A|.

In words, the decoding is the average value of p_x(j) over the indices j ∈ A. Now first suppose that prod_{j ∈ A} x_j = 1. Then for each j ∈ A we have x_j = 1, so that by Claim C.1, p_x(j) = 1. Thus the average DecPoly_{d,k}(DSk_{d,k}(x), A) = 1.

On the other hand, if prod_{j ∈ A} x_j = 0, then for some j ∈ A, say j*, x_{j*} = 0. In this case, Claim C.1 implies that p_x(j*) <= -k. For every other j, p_x(j) <= 1, and p_x(j) is non-negative only when x_j = 1, which happens for at most k indices j. Thus the sum over the non-negative p_x(j) can be no larger than k. Adding p_x(j*) brings the total down to at most zero, and any additional j's can only further reduce it. Thus the average is non-positive and hence the Relu is zero, as claimed.

The last theorem shows that B_{d,k} can be sketched in R^q, where q = 2k + 1, such that arbitrary products of variables can be decoded by applying a linear function followed by a Relu. It is natural to ask what is the smallest dimension q for which such a sketch exists. The following theorem shows that q must be at least k. In fact, this is true even if we only require to decode single variables.

Theorem C.3. Let Sk : B_{d,k} -> R^q be a mapping such that for every i ∈ [d] there is w_i ∈ R^q satisfying x_i = Relu(<w_i, Sk(x)>) for each x ∈ B_{d,k}. Then q is at least k.

Proof. Denote X = {w_1, ..., w_d} and let H ⊆ {0,1}^X be the function class consisting of all functions of the form h_x(w) = sign(<w, Sk(x)>) for x ∈ B_{d,k}. On one hand, H is a sub-class of the class of linear separators over X ⊆ R^q, hence VC(H) <= q. On the other hand, we claim that VC(H) >= k, which establishes the proof. In order to prove the claim it suffices to show that the set A = {w_1, ..., w_k} is shattered. Let B ⊆ A and let x be the indicator vector of B. We claim that the restriction of h_x to A is the indicator function of B. Indeed, we have that

sign(<w_i, Sk(x)>) = sign(Relu(<w_i, Sk(x)>)) = x_i = 1[i ∈ B].

We would like to note that both the encoding and the decoding of deterministic sketches can be computed efficiently, and the dimension of the sketch is smaller than the dimension of a random sketch. We get the following corollaries for sums of monomials

g(x) = sum_{j=1}^{s} w_j prod_{i ∈ A_j} x_i.

For any such g, there exists a set of weights for a network N ∈ N_s(Relu) such that for each x ∈ B_{d,k}, N(DSk_{d,k}(x)) = g(x). In particular:

Corollary C.4. For every w ∈ H_{d,s} there exists a set of weights for a network N ∈ N_s(Relu) such that for each x ∈ B_{d,k}, N(DSk_{d,k}(x)) = w^T x.

Known lower bounds for compressed sensing (Ba et al., 2010) imply that any linear sketch has size at least Ω(k log d) to allow stable recovery. We leave open the question of whether one can get the compactness and decoding properties of our (non-linear) sketch while ensuring stability.
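The deterministic sketch of Appendix C is simple enough to state in a few lines of code. The following Python sketch assumes 1-based coordinate indexing so that the factors (z - i) are non-zero integers at every wrong coordinate; this indexing convention and all names are ours.

```python
import numpy as np

def dsk(x, k):
    """DSk_{d,k}(x): coefficients (a_0, ..., a_{2k}), lowest degree first, of
    p_x(z) = 1 - (k+1) * prod_{i : x_i = 1} (z - i)^2, indices 1-based."""
    support = [i + 1 for i, xi in enumerate(x) if xi == 1]
    assert len(support) <= k
    poly = np.array([1.0])                       # the constant polynomial 1
    for i in support:                            # multiply in (z - i)^2
        poly = np.polynomial.polynomial.polymul(poly, [i * i, -2.0 * i, 1.0])
    coeffs = -(k + 1) * poly
    coeffs[0] += 1.0
    out = np.zeros(2 * k + 1)                    # fixed, data-independent size
    out[: len(coeffs)] = coeffs
    return out

def dec_poly(sk, A):
    """DecPoly: the average of p_x(j) over j in A (same 1-based indexing)."""
    powers = np.arange(len(sk))
    return sum(np.sum(sk * (float(j) ** powers)) for j in A) / len(A)

def decode_monomial(sk, A):
    """Relu(DecPoly(...)) equals prod_{j in A} x_j by Theorem C.2."""
    return max(0.0, dec_poly(sk, A))

# x = (0,1,0,0,1,0) has 1-based support {2, 5}:
sk = dsk([0, 1, 0, 0, 1, 0], k=2)
assert decode_monomial(sk, {2, 5}) == 1.0        # x_2 * x_5 = 1
assert decode_monomial(sk, {2, 3}) == 0.0        # x_2 * x_3 = 0
```

As the theorem's name suggests, the sketch trades dimension for magnitude: the coefficients of p_x grow with the indices in the support, which is exactly the "large scalars" caveat in the text.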
LOWER BOUND FOR PROPER LEARNING

We now show that if one does not expand the hypothesis class, then even in the simplest of settings of linear classifiers over 1-sparse vectors, the required dimensionality of the projection is much larger than the dimension needed for improper learning. As stated earlier, the result is likely folklore and we present a proof for completeness.

Theorem D.1. Suppose that there exists a distribution over maps φ : B_{d,1} -> R^q and ψ : B_{d,s} -> R^q such that for any x ∈ B_{d,1}, w ∈ B_{d,s},

Pr[ sgn(ψ(w)^T φ(x)) = sgn(w^T x - 1/2) ] >= 9/10.

Then q = Ω(s).

Proof. If the error were zero, a lower bound on q would follow from standard VC dimension arguments. Concretely, the hypothesis class consisting of h_w(x) = sgn(w^T x - 1/2) for all w ∈ B_{d,s} shatters the set {e_1, ..., e_s}. If sgn(ψ(w)^T φ(e_i)) = sgn(w^T e_i - 1/2) for each w and e_i, then the points φ(e_i), i ∈ [s], are shattered by the separators h_{ψ(w)}(·) for w ∈ B_{d,s}, which form a subclass of linear separators in R^q. Since linear separators in R^q have VC dimension q, the largest shattered set is no larger, and thus q >= s.

To handle errors, we will use the Sauer-Shelah lemma and show that the set {φ(e_i) : i ∈ [s]} has many partitions. To do so, sample φ, ψ from the distribution promised above and consider the set of points A = {φ(e_1), φ(e_2), ..., φ(e_s)}. Let W = {w_1, w_2, ..., w_K} be a set of K vectors in B_{d,s} such that

(a) S(w_i) ⊆ [s];
(b) the distance property holds: |S(w_i) Δ S(w_j)| >= s/4 for i ≠ j.

Such a collection of vectors, with K = 2^{cs} for a positive constant c, can be shown to exist by a probabilistic argument or by standard constructions in coding theory. Let H = {h_{ψ(w_j)} : w_j ∈ W} be the linear separators defined by the ψ(w_j); for brevity, we denote h_{ψ(w_j)} by h_j. We will argue that H induces many different subsets of A.

Let A_j = {φ(e_i) ∈ A : h_j(φ(e_i)) = 1} = {φ(e_i) ∈ A : sgn(ψ(w_j)^T φ(e_i)) = 1}. Let E_j ⊆ A be the positions where the embeddings φ, ψ fail, that is,

E_j = {φ(e_i) : i ∈ [s], sgn(w_j^T e_i - 1/2) ≠ sgn(ψ(w_j)^T φ(e_i))}.

Thus A_j = S(w_j) Δ E_j (identifying φ(e_i) with the index i). By assumption, E[|E_j|] <= s/10 for each j, where the expectation is taken over the choice of φ, ψ. Thus sum_j E[|E_j|] <= sK/10. Renumber the h_j's in increasing order of |E_j| so that |E_1| <= |E_2| <= ... <= |E_K|. Because the E_j's may be non-empty, not all A_j's are necessarily distinct. Call a j ∈ [K] lost if A_j = A_{j'} for some j' < j.

If A_j = A_{j'}, then the distance property implies that |E_j| + |E_{j'}| >= s/4, and since the E_j's are increasing in size, it follows that for any lost j, |E_j| >= s/8. By Markov's inequality, Pr[|E_j| >= s/8] <= 8 E[|E_j|]/s, so in expectation at most 4/5 of the j's are lost. It follows that there is a choice of φ, ψ in the distribution for which H induces K/5 distinct subsets of A. Since the VC dimension of H is at most q, the Sauer-Shelah lemma says that

2^{cs}/5 <= K/5 <= sum_{t <= q} (s choose t).

This implies that q >= c's for some absolute constant c'.

For this setting, Theorem 4.1 implies that a sketch of size O(log(s/δ)) suffices to correctly classify a 1 - δ fraction of the examples if one allows improper learning, as we do.

In this short section we show that for boolean inputs (irrespective of sparsity), any polynomial with s monomials can be represented by a neural network with one hidden layer of s hidden units. Our result is a simple improvement of Barron's theorem (1993; 1994), for the special case of sparse polynomial functions on 0-1 vectors. In contrast, Barron's theorem, which works for arbitrary inputs, would require a neural network of size d · s · p^{O(p)} to learn an s-sparse degree-p polynomial. The proof of the improvement is elementary and provided for completeness.

Theorem E.1. Let x ∈ {0,1}^d, and let g : {0,1}^d -> R denote the polynomial function

g(x) = sum_{j=1}^{s} w_j prod_{i ∈ A_j} x_i.

Then there exists a set of weights for a network N ∈ N_s(Relu) such that for each x ∈ {0,1}^d, N(x) = g(x). Moreover, the weights coming into each node in the hidden layer are in {0,1}.

Proof. The j-th hidden unit implements h_j = prod_{i ∈ A_j} x_i. As before, for boolean inputs, one can compute h_j as Relu( sum_{i ∈ A_j} x_i - |A_j| + 1 ). The output node computes sum_j w_j h_j, where h_j is the output of the j-th hidden unit.
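The construction in the proof of Theorem E.1 is completely explicit; the following Python sketch builds the network and checks it on a random boolean input (all names are illustrative).

```python
import numpy as np

def build_network(d, monomials):
    """monomials: list of (w_j, A_j) pairs, A_j a set of coordinate indices."""
    s = len(monomials)
    W1 = np.zeros((s, d))          # binary weights into the hidden layer
    b1 = np.zeros(s)
    w2 = np.zeros(s)               # output weights
    for j, (wj, Aj) in enumerate(monomials):
        W1[j, list(Aj)] = 1.0
        b1[j] = -(len(Aj) - 1)     # h_j = Relu(sum_{i in A_j} x_i - |A_j| + 1)
        w2[j] = wj
    return W1, b1, w2

def forward(x, W1, b1, w2):
    h = np.maximum(0.0, W1 @ x + b1)   # h_j = prod_{i in A_j} x_i for boolean x
    return float(w2 @ h)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8).astype(float)
monomials = [(2.0, {0, 3}), (-1.5, {1, 2, 4})]
W1, b1, w2 = build_network(8, monomials)
assert abs(forward(x, W1, b1, w2)
           - (2.0 * x[0] * x[3] - 1.5 * x[1] * x[2] * x[4])) < 1e-9
```

The bias -(|A_j| - 1) is the whole trick: on boolean inputs the pre-activation reaches 1 exactly when every coordinate of A_j is on, and is non-positive otherwise.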
rJJ3YU5ge

IS A PICTURE WORTH A THOUSAND WORDS? A DEEP MULTI-MODAL FUSION ARCHITECTURE FOR PRODUCT CLASSIFICATION IN E-COMMERCE

Tom Zahavy & Shie Mannor
Department of Electrical Engineering, The Technion - Israel Institute of Technology, Haifa 32000, Israel
{tomzahavy@tx,shie@ee}.technion.ac.il

Alessandro Magnani & Abhinandan Krishnan
{AMagnani,AKrishnan}@walmartlabs.com

ABSTRACT

Classifying products into categories precisely and efficiently is a major challenge in modern e-commerce. The high traffic of new products uploaded daily and the dynamic nature of the categories raise the need for machine learning models that can reduce the cost and time of human editors. In this paper, we propose a decision-level fusion approach for multi-modal product classification using text and image inputs. We train input-specific state-of-the-art deep neural networks for each input source, show the potential of forging them together into a multi-modal architecture, and train a novel policy network that learns to choose between them. Finally, we demonstrate that our multi-modal network improves the top-1 accuracy % over both networks on a real-world large-scale product classification dataset that we collected from Walmart.com. While we focus on image-text fusion that characterizes e-commerce domains, our algorithms can be easily applied to other modalities such as audio, video, physical sensors, etc.

1 INTRODUCTION

Product classification is a key issue in e-commerce domains. A product is typically represented by metadata such as its title, image, color, weight and so on, and most of them are assigned manually by the seller. Once a product is uploaded to an e-commerce website, it is typically placed in multiple categories. Categorizing products helps e-commerce websites to provide customers a better shopping experience, for example by efficiently searching the products catalog or by developing recommendation systems. A few examples of categories are internal taxonomies (for business needs), public taxonomies (such as groceries and office equipment) and the product's shelf (a group of products that are presented together on an e-commerce web page). These categories vary with time in order to optimize search efficiency and to account for special events such as holidays and sports events. In order to address these needs, e-commerce websites typically hire editors and use crowdsourcing platforms to classify products. However, due to the high amount of new products uploaded daily and the dynamic nature of the categories, machine learning solutions for product classification are very appealing as means to reduce the time and economic costs. Thus, precisely categorizing items emerges as a significant issue in e-commerce domains.

A shelf is a group of products presented together on an e-commerce website page, and usually contains products with a given theme/category (e.g., women boots, folding tables). Product-to-shelf classification is a challenging problem due to data size, category skewness, and noisy metadata and labels. In particular, it presents three fundamental challenges for machine learning algorithms. First, it is typically a multi-class problem with thousands of classes. Second, a product may belong to multiple shelves, making it a multi-label problem.
And last, a product has both an image and a text input, making it a multi-modal problem.

Product classification is typically addressed as a text classification problem because most metadata of items are represented as textual features (Pyo et al., 2010). Text classification is a classic topic for natural language processing, in which one needs to assign predefined categories to text inputs.

Figure 1: Predicting shelves from product metadata obtained from Walmart.com. Left: products that have both an image and a title that contain useful information for predicting the product's shelf. Center, top: the boots title gives specific information about the boots but does not mention that the product is a boot, making it harder to predict the shelf. Center, bottom: the baby toddler shirt's title only refers to the text on the toddler shirt and does not mention that it is a product for babies. Right, top: the umbrella image contains information about its color but it is hard to understand that the image is referring to an umbrella. Right, bottom: the lips pencil image looks like a regular pencil, making it hard to predict that it belongs to the moisturizers shelf.

Standard methods follow a classical two-stage scheme of extraction of (handcrafted) features, followed by a classification stage. Typical features include bag-of-words or n-grams, and their TF-IDF. On the other hand, Deep Neural Networks use generic priors instead of specific domain knowledge (Bengio et al., 2013) and have been shown to give competitive results on text classification tasks (Zhang et al., 2015). In particular, Convolutional Neural Networks (CNNs) (Kim, 2014; Zhang et al., 2015; Conneau et al., 2016) and Recurrent NNs (Lai et al., 2015; Pyo et al., 2010; Xiao & Cho, 2016) can efficiently capture the sequentiality of the text. These methods are typically applied directly to distributed embeddings of words (Kim, 2014; Lai et al., 2015; Pyo et al., 2010) or characters (Zhang et al., 2015; Conneau et al., 2016; Xiao & Cho, 2016), without any knowledge of the syntactic or semantic structures of a language. However, all of these architectures were only applied to problems with a small number of labels (~ 20), while e-commerce shelf classification problems typically have thousands of labels with multiple labels per product.

In image classification, CNNs are widely considered the best models, and achieve state-of-the-art results on the ImageNet Large-Scale Visual Recognition Challenge (Russakovsky et al., 2015; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015).
However, as good as they are, the classification accuracy of machine learning systems is often limited in problems with many classes of object categories. One remedy is to leverage data from other sources, such as text data. However, the studies on multi-modal deep learning for large-scale item categorization are still rare, to the best of our belief, in particular in a setting where there is a significant difference in discriminative power between the two types of signals.

In this work, we propose a multi-modal deep neural network model for product classification. Our design principle is to leverage the specific prior for each data type by using the current state-of-the-art classifiers from the image and text domains. The final architecture has 3 main components (Figure 2, Right): a text CNN (Kim, 2014), an image CNN (Simonyan & Zisserman, 2014) and a policy network that learns to choose between them. We collected a large-scale dataset of 1.2 million products from the Walmart.com website. Each product has a title and an image and needs to be classified to a shelf (label) with 2890 possible shelves. Examples from this dataset can be seen in Figure 1 and are also available on-line at the Walmart.com website. For most of the products, both the image and the title of each product contain relevant information for customers. However, it is interesting to observe that for some of the products, both input types may not be informative for shelf prediction (Figure 1). This observation motivates our work and raises interesting questions: which input type is more useful for product classification? Is it possible to forge the inputs into a better architecture?

In our experiments, we show that the text CNN outperforms the image one. However, for a relatively large number of products (~ 8%), the image CNN is correct while the text CNN is wrong, indicating a potential gain from using a multi-modal architecture. We also show that the policy is able to choose between the two models and give a performance improvement over both state-of-the-art networks.

To the best of our knowledge, this is the first work that demonstrates a performance improvement on top-1 classification accuracy by using images and text on a large-scale classification problem. In particular, our main contributions are:

- We demonstrate that the text classification CNN (Kim, 2014) outperforms the VGG network (Simonyan & Zisserman, 2014) on a real-world large-scale product-to-shelf classification problem.
- We analyze the errors made by the different networks and show the potential gain of multi-modality.
- We propose a novel decision-level fusion policy that learns to choose between the text and image networks and improves over both.

2 MULTI-MODALITY

Over the years, a large body of research has been devoted to improving classification using ensembles of classifiers (Kittler et al., 1998; Hansen & Salamon, 1990). Inspired by their success, these methods have also been used in multi-modal settings (e.g., Guillaumin et al. (2010); Poria et al. (2016)), where the sources of the signals, or alternatively their modalities, are different. Some examples include audio-visual speech classification (Ngiam et al., 2011), image and text retrieval (Kiros et al.), sentiment analysis and semi-supervised learning (Guillaumin et al., 2010).

Combining classifiers from different input sources presents multiple challenges. First, classifiers vary in their discriminative power; thus, an optimal unification method should be able to adapt itself for specific combinations of classifiers. Second, different data sources have different state-of-the-art architectures, typically deep neural networks, which vary in depth, width, and optimization algorithm, making it non-trivial to merge them. Moreover, a multi-modal architecture potentially has more local minima that may give unsatisfying results. Finally, most of the publicly available real-world big data classification datasets, an essential building block of deep learning systems, typically contain only one data type.

Nevertheless, the potential performance boost of multi-modal architectures has motivated researchers over the years. Frome et al. (2013) combined an image network (Krizhevsky et al., 2012) with a Skip-gram Language Model in order to improve classification results on ImageNet. However, they were not able to improve the top-1 accuracy prediction, possibly because the text input they used (image labels) didn't contain a lot of information. Other works used multi-modality to learn good embeddings but did not present results on classification benchmarks (Lynch et al., 2015; Kiros et al.; Gong et al., 2014). Kannan et al. (2011) suggested to improve text-based product classification by adding an image signal, training an image classifier and learning a decision rule between the two. However, they only experimented with a small dataset and a low number of labels, and it is not clear how to scale their method for the extreme multi-class multi-label applications that characterize real-world problems in e-commerce.

Figure 2: Multi-modal fusion architectures. Left, top: feature-level fusion. Each modality is processed in a different pipe; after a certain depth, the pipes are concatenated, followed by multi-modal layers. Left, bottom: decision-level fusion. Each modality is processed in a different pipe and gives a prediction; a policy network learns to decide which classifier to use. Right: the proposed multi-modal architecture.

Adding modalities can improve the classification of products that have a non-informative input source (e.g., image or text). In e-commerce, for example, classifiers that rely exclusively on text suffer from short and non-informative titles, differences in style between vendors, and overlapping text across categories (i.e., a word that helps to classify a certain class may appear in other classes). Figure 1 presents a few examples of products that have only one informative input type. These examples suggest that a multi-modal architecture can potentially outperform a classifier with a single input type.

Most unification techniques for multi-modal learning are partitioned between feature-level fusion techniques and decision-level fusion techniques (Figure 2, Left).

2.1 FEATURE-LEVEL FUSION

Feature-level fusion is characterized by three phases: (a) learning a representation, (b) supervised training, and (c) testing. The different unification techniques are distinguished by the availability of the data in each phase (Guillaumin et al., 2010). For example, in cross-modality training, the representation is learned from all the modalities, but only one modality is available for supervised training and testing. In other cases, all of the modalities are available at all stages but we may want (or not) to limit their usage given a certain budget.
Another source for the distinction is the order in which phases (a) and (b) are made. For example, one may first learn the representation and then learn a classifier from it, or learn both the representation and the classifier in parallel. In the deep learning context, there are two common approaches. In the first approach, we learn an end-to-end deep NN; the NN has multiple input-specific pipes that include a data source followed by input-specific layers. After a certain depth, the pipes are concatenated, followed by additional layers, such that the NN is trained end-to-end. In the second approach, input-specific deep NNs are learned first, and a multi-modal representation vector is created by concatenating the input-specific feature vectors (e.g., each neural network's last hidden layer). Then, an additional classifier learns to classify from the multi-modal representation vector. While multi-modal methods have shown potential to boost performance on small datasets (Poria et al., 2016), or on top-k accuracy measures (Frome et al., 2013), we are not familiar with works that succeeded in applying them to a large-scale classification problem and received a performance improvement in top-1 accuracy.

2.2 DECISION-LEVEL FUSION

In this approach, an input-specific classifier is learned for each modality, and the goal is to find a decision rule between them. The decision rule is typically a pre-defined rule (Guillaumin et al., 2010) and is not learned from the data. For example, Poria et al. (2016) chose the classifier with the maximal confidence, while Krizhevsky et al. (2012) average classifier predictions. However, in this work we show that learning the decision rule yields significantly better results on our data.

METHODS AND ARCHITECTURES

In this section, we give the details of our multi-modal product classification architecture. The architecture is composed of a text CNN and an image CNN which are forged together by a policy network, as can be seen in Figure 2, Right.

3.1 MULTI-LABEL COST FUNCTION

Our cost function is the weighted sigmoid cross entropy with logits, a common cost function for multi-label problems. Let x be the logits, z be the targets, q be a positive weight coefficient, used as a multiplier for the positive targets, and σ(x) = 1/(1 + e^{-x}). The loss is given by:

(1 - z) · x + (1 + (q - 1) · z) · log(1 + exp(-x))

The positive coefficient q allows one to trade off recall and precision by up- or down-weighting the cost of a positive error relative to a negative error. We found it to have a significant effect in practice.
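For concreteness, a numerically stable NumPy version of this loss is sketched below; this is the same algebraic form implemented by TensorFlow's weighted_cross_entropy_with_logits, with x the logits, z the binary targets and q the positive coefficient. Names and example values are illustrative.

```python
import numpy as np

def weighted_sigmoid_xent(x, z, q):
    """(1 - z) * x + (1 + (q - 1) * z) * log(1 + exp(-x)), rewritten as
    log1p(exp(-|x|)) + max(-x, 0) to avoid overflow for large negative x."""
    l = 1.0 + (q - 1.0) * z
    return (1.0 - z) * x + l * (np.log1p(np.exp(-np.abs(x))) + np.maximum(-x, 0.0))

logits = np.array([2.0, -1.0, 0.5])
targets = np.array([1.0, 0.0, 1.0])      # multi-label: one binary target per shelf
print(weighted_sigmoid_xent(logits, targets, q=30.0).mean())
```

Setting q > 1 makes a missed positive shelf cost q times more than a false positive, which is how the recall/precision trade-off mentioned above is realized.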
3.2 TEXT CLASSIFICATION

For the text signal, we use the text CNN architecture of Kim (2014). The first layer embeds words into low-dimensional vectors using a random embedding (different from the original paper). The next layer performs convolutions over time on the embedded word vectors using multiple filter sizes (3, 4 and 5), where we use 128 filters of each size. Next, we max-pool-over-time the result of each convolution filter and concatenate all the results together. We add a dropout regularization layer (0.5 dropping rate), followed by a fully connected layer, and classify the result using a softmax layer. An illustration of the text CNN can be seen in Figure 2.

3.3 IMAGE CLASSIFICATION

For the image signal, we use the VGG Network (Simonyan & Zisserman, 2014). The input to the network is a fixed-size 224 x 224 RGB image. The image is passed through a stack of convolutional layers with a very small receptive field: 3 x 3. The convolution stride is fixed to 1 pixel; the spatial padding of the convolutional layers is 1 pixel. Spatial pooling is carried out by five max-pooling layers, which follow some of the convolutional layers. Max-pooling is performed over a 2 x 2 pixel window, with stride 2. The stack of convolutional layers is followed by three Fully-Connected (FC) layers: the first two have 4096 channels each, and the third performs 2890-way product classification and thus contains 2890 channels (one for each class). All hidden layers are followed by a ReLU non-linearity. The exact details can be seen in Figure 2.

3.4 MULTI-MODAL ARCHITECTURE

We experimented with four types of multi-modal architectures. (1) Learning decision-level fusion policies from different inputs. (1a) Policies that use the text and image CNNs' class probabilities as input (Figure 2). We experimented with architectures that have one or two fully connected layers (the two-layered policy uses 10 hidden units and a ReLU non-linearity between them). (1b) Policies that use the text and/or image as input. For these policies, the architecture of the policy network was either the text CNN or the VGG network. In order to train policies, labels are collected from the image and text networks' predictions, i.e., the label is 1 if the image network made a correct prediction while the text network made a mistake, and 0 otherwise. On evaluation, we use the policy predictions to select between the models, i.e., if the policy prediction is 1 we use the image network, and use the text network otherwise. (2) Pre-defined policies that average the predictions of the different CNNs or choose the CNN with the highest confidence. (3) End-to-end feature-level fusion: each input type is processed by its specific CNN. We concatenate the last hidden layers of the CNNs and add one or two fully connected layers. All the layers are trained together end-to-end (we also tried to initialize the input-specific weights from pre-trained single-modal networks). (4) Multi-step feature-level fusion. As in (3), we create a shared representation vector by concatenating the last hidden layers. However, we now keep the shared representation fixed and learn a new classifier from it.
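Before turning to the experiments, the following NumPy sketch traces a forward pass of the Section 3.2 text CNN (random embeddings, width-3/4/5 convolutions over time with 128 filters each, max-over-time pooling); dropout, the fully connected layer and the softmax are omitted for brevity, and all names are illustrative.

```python
import numpy as np

def text_cnn_features(token_ids, vocab_size, emb_dim=100, widths=(3, 4, 5),
                      n_filters=128, seed=0):
    rng = np.random.default_rng(seed)
    emb = rng.normal(size=(vocab_size, emb_dim))
    x = emb[token_ids]                                   # (T, emb_dim)
    feats = []
    for w in widths:
        filt = rng.normal(size=(n_filters, w * emb_dim)) # one filter per row
        windows = np.stack([x[t:t + w].ravel()           # all length-w windows
                            for t in range(len(token_ids) - w + 1)])
        conv = np.maximum(0.0, windows @ filt.T)         # (T - w + 1, n_filters)
        feats.append(conv.max(axis=0))                   # max-over-time pooling
    return np.concatenate(feats)                         # 3 * 128 = 384 features

title = [5, 12, 7, 3, 42, 8, 19, 2, 11, 30]              # toy token ids, T = 10
print(text_cnn_features(title, vocab_size=50).shape)     # -> (384,)
```

Max-over-time pooling is what makes the feature vector length-independent, which is why titles of varying lengths (after the 40-word trimming described below) can share one classifier head.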
4.1 SETUP

Our dataset contains 1.2 million products (title, image and shelf) that we collected from Walmart.com (offered online and viewable at the website) and that were deemed the hardest to classify by the current production system. We divide the data into training (1.1 million), validation (50k) and test (50k). We train both the image network and the text network on the training dataset and evaluate them on the test dataset. The policy is trained on the validation dataset and is also evaluated on the test dataset. The objective is to classify the product's shelf, from 2890 possible choices. Each product is typically assigned to more than one shelf (3 on average), and the network is considered accurate if its most probable shelf is one of them.

4.2 TRAINING THE TEXT ARCHITECTURE

Preprocess: we build a dictionary of all the words in the training data and embed each word using a random embedding into a one-hundred-dimensional vector. We trim titles with more than 40 words and pad shorter titles with nulls.

We experimented with different batch sizes, dropout rates, and filter strides, but found that the vanilla architecture (Kim, 2014) works well on our data. This is consistent with Zhang & Wallace (2015), who showed that text CNNs are not very sensitive to hyperparameters. We tuned the cost function's positive coefficient parameter q, and found that the value 30 performed best in practice (we also use this value for the image network). The best CNN that we trained classified 70.1% of the products from the test set correctly (Table 1).

4.3 TRAINING THE IMAGE ARCHITECTURE

Preprocess: we re-size all the images into 224 x 224 pixels and subtract the image mean.

The VGG network that we trained classified 57% of the products from the test set correctly. This is a bit disappointing if we compare it to the performance of the VGG network on ImageNet (~ 75%). There are a few differences between these two datasets that may explain this gap. First, our data has 3 times more classes and contains multiple labels per image, making the classification harder; and second, Figure 1 implies that some of our images are not informative for shelf classification. Some works claim that the features learned by VGG on ImageNet are global feature extractors (Lynch et al., 2015). We therefore decided to use the weights learned by VGG on ImageNet and learn only the last layer. This configuration yielded only 36.7% accuracy. We believe that the reason is that some of the ImageNet classes are irrelevant for e-commerce (e.g., vehicles and animals) while some relevant categories are misrepresented (e.g., electronics and office equipment). It could also be that our images follow a specific pattern of white background, well-lit studio etc., that characterizes e-commerce.

4.4 ERROR ANALYSIS

Is a picture worth a thousand words? Inspecting Figure 3, we can see that the text network outperformed the image network on this dataset, classifying more products correctly. Similar results were reported before (Pyo et al., 2010; Kannan et al., 2011), but to the best of our knowledge, this is the first work that compares state-of-the-art text and image CNNs on a real-world large-scale e-commerce dataset.

What is the potential of multi-modality? We identified that for 7.8% of the products the image network made a correct prediction while the text network was wrong. This observation is encouraging since it implies that there is a relatively big potential to harness via multi-modality. We find this large gap surprising since different neural networks applied to the same problem tend to make the same mistakes (Szegedy et al., 2013).
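The agreement statistics discussed here reduce to a simple four-way count; a minimal helper is sketched below (function and variable names are ours, and correctness is defined as the top-1 prediction hitting any of the product's true shelves, matching the accuracy measure of Section 4.1). The "image_only" bucket is the 7.8% potential gain mentioned above.

```python
def overlap_breakdown(text_pred, image_pred, true_label_sets):
    """Four-way breakdown (in %) of top-1 correctness for two classifiers."""
    counts = {"both": 0, "text_only": 0, "image_only": 0, "neither": 0}
    for t, i, labels in zip(text_pred, image_pred, true_label_sets):
        t_ok, i_ok = t in labels, i in labels   # multi-label: any true shelf counts
        if t_ok and i_ok:
            counts["both"] += 1
        elif t_ok:
            counts["text_only"] += 1
        elif i_ok:
            counts["image_only"] += 1
        else:
            counts["neither"] += 1
    n = len(true_label_sets)
    return {k: round(100.0 * v / n, 1) for k, v in counts.items()}

print(overlap_breakdown([3, 7, 2], [3, 1, 9], [{3, 5}, {7}, {4}]))
# -> {'both': 33.3, 'text_only': 33.3, 'image_only': 0.0, 'neither': 33.3}
```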
Unification techniques for multi-modal problems typically use the last hidden layer of each network as features (Frome et al., 2013; Lynch et al., 2015; Pyo et al., 2010). We therefore decided to visualize the activations of this layer using a tSNE map (Maaten & Hinton, 2008). Figure 3 depicts such a map for the activations of the text model (the image model yielded similar results).

Figure 3: Error analysis using a tSNE map, created from the last hidden layer neural activations of the text model. Legend: both models are correct: 47.9%; title is correct, image is not: 21.9%; image is correct, title is not: 7.8%; both models are wrong: 22.4%.

In particular, we were looking for regions in the tSNE map where the image predictions are correct and the text is wrong (Figure 3, green). Finding such a region would imply that a policy network can learn good decision boundaries. However, we can see that there are no well-defined regions in the tSNE maps where the image network is correct and the title is wrong (green), thus implying that it might be hard to identify these products using the activations of the last layers.

4.5 MULTI-MODAL UNIFICATION TECHNIQUES

Our error analysis experiment highlights the potential of merging image and text. Still, we found it hard to achieve the upper bound provided by the error analysis in practice. We now describe the policies that managed to achieve a performance boost in top-1 accuracy % over the text and image networks, and then discuss other approaches that we tried but that didn't work.

Decision-level fusion: We trained policies from different data sources (e.g., title, image, and each CNN's class probabilities), using different architectures and different hyperparameters. Looking at Table 1, we can see that the best policies were trained using the class probabilities (the softmax probabilities) of the image and text CNNs as inputs. The number of class probabilities that were used (top-1, top-3 or all) did not have a significant effect on the results, indicating that the top-1 probability contains enough information to learn good policies. This result makes sense since the top-1 probability measures the confidence of the network in making a prediction. Still, the top-3 probabilities performed slightly better, indicating that the differences between the top probabilities may also matter. We can also see that the 2-layer architecture outperformed the 1-layer one, indicating that a linear policy is too simple and deeper models can yield better results. Last, the cost function's positive coefficient q had a big impact on the results. We can see that for q = 1, the policy network is more accurate in its prediction; however, it achieves worse results on shelf classification. For q = 5 we get the best results, while higher values of q (e.g., 7 or 10) resulted in inaccurate policies that did not perform well in practice.

| Policy input | # layers | q | Text | Image | Policy | Oracle | Policy accuracy |
|---|---|---|---|---|---|---|---|
| CP-1 | 1 | 5 | 70.1 | 56.7 | 71.4 (+1.3) | 77.5 (+7.8) | 86.4 |
| CP-1 | 2 | 5 | 70.1 | 56.6 | 71.5 (+1.4) | 77.6 (+7.5) | 84.2 |
| CP-all | 2 | 5 | 70.1 | 56.6 | 71.4 (+1.3) | 77.6 (+7.5) | 84.6 |
| CP-3 | 2 | 5 | 70.2 | 56.7 | 71.8 (+1.6) | 77.7 (+7.5) | 84.2 |
| CP-3 | 2 | 1 | 70.2 | 56.7 | 70.2 (+0) | 77.7 (+7.5) | 92.5 |
| CP-3 | 2 | 7 | 70.0 | 56.6 | 71.0 (+1.0) | 77.5 (+7.5) | 79.1 |
| CP-3 | 2 | 10 | 70.1 | 56.6 | 70.7 (+0.6) | 77.6 (+7.5) | 75.0 |
| Image | - | 5 | 70.1 | 56.6 | 68.5 (-1.6) | 77.6 (+7.5) | 80.3 |
| Text | - | 5 | 70.1 | 56.6 | 69.0 (-1.1) | 77.6 (+7.5) | 83.7 |
| Both | - | 5 | 70.1 | 56.6 | 66.1 (-4) | 77.6 (+7.5) | 73.7 |
| Fixed-Mean | - | - | 70.1 | 56.7 | 65.4 | 77.6 (+7.5) | - |
| Fixed-Max | - | - | 70.1 | 56.7 | 60.1 (-10) | 77.7 (+7.6) | 38.2 |

Table 1: Decision-level fusion results. Each row presents a different policy configuration (defined by the policy input, the number of layers and the value of q), followed by the accuracy % of the image, text, policy and oracle (optimal policy) classifiers on the test dataset. The policy accuracy column presents the accuracy % of the policy in making correct predictions, i.e., choosing the image network when it made a correct prediction while the text network didn't. Numbers in parentheses refer to the performance gain over the text CNN. Class Probabilities (CP) refer to the number of class probabilities used as input.

While it may not seem surprising that combining text and image will improve accuracy, in practice we found it extremely hard to leverage this potential. To the best of our knowledge, this is the first work that demonstrates a direct performance improvement on top-1 classification accuracy from using images and text on a large-scale classification problem.

We also experimented with pre-defined policies that do not learn from the data. Specifically, we tried to average the logits, following (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), and to choose the network with the maximal confidence, following (Poria et al., 2016). Both of these experiments yielded significantly worse results, probably since the text network is much more accurate than the image one (Table 1). We also tried to learn policies from the text and/or the image input, using a policy network which is either a text CNN, a VGG network or a combination. However, all of these experiments resulted in policies that overfit the data and performed worse than the title model on the test data (Table 1). We also experimented with early stopping criteria, various regularization methods (dropout, l1, l2) and reduced model sizes, but none could make the policy network generalize.

Feature-level fusion: Training a CNN end-to-end can be tricky. For example, each input source has its own specific architecture, with a specific learning rate and optimization algorithm. We experimented with training the network end-to-end, but also with first training each part separately and then learning the concatenated parts. We tried different unification approaches such as gating functions (Srivastava et al., 2015), cross products and a different number of fully connected layers after the concatenation. These experiments resulted in models that were inferior to the text model. While this may seem surprising, the only successful feature-level fusion that we are aware of (Frome et al., 2013) was not able to gain an accuracy improvement on top-1 accuracy.
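A compact sketch of the best-performing configuration (CP-3 class-probability inputs with a 2-layer policy) is given below. We use scikit-learn's MLPClassifier for brevity and emulate the positive coefficient q by duplicating positive examples; the paper trains its own network with the weighted loss of Section 3.1, so this is only an approximation, and all names are ours.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def policy_features(text_probs, image_probs):
    """CP-3 input: the top-3 softmax probabilities of each CNN, concatenated."""
    top3 = lambda p: np.sort(p, axis=1)[:, -3:]
    return np.hstack([top3(text_probs), top3(image_probs)])

def policy_labels(text_correct, image_correct):
    """Label 1 exactly when the image net is right and the text net is wrong."""
    return (image_correct & ~text_correct).astype(int)

def train_policy(X, y, q=5):
    # Emulate the positive coefficient q by repeating positives q - 1 extra times.
    pos = np.flatnonzero(y == 1)
    reps = np.concatenate([np.arange(len(y))] + [pos] * (q - 1))
    clf = MLPClassifier(hidden_layer_sizes=(10,), activation="relu",
                        max_iter=500, random_state=0)
    return clf.fit(X[reps], y[reps])

def route(clf, X, text_top1, image_top1):
    """At test time, label 1 routes a product to the image network."""
    use_image = clf.predict(X).astype(bool)
    return np.where(use_image, image_top1, text_top1)
```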
5 CONCLUSIONS

In this work, we investigated a multi-modal multi-class multi-label product classification problem and presented results on a challenging real-world dataset that we collected from Walmart.com. We discovered that the text network outperforms the image network on our dataset, and observed a big potential from fusing text and image inputs. Finally, we suggested a multi-modal decision-level fusion approach that leverages state-of-the-art results from image and text classification and forges them into a multi-modal architecture that outperforms both.

State-of-the-art image CNNs are much larger than text CNNs, and take more time to train and to run. Thus, extracting image features at run time, or getting the image network's predictions, may be prohibitively expensive. In this context, an interesting observation is that feature-level fusion methods require using the image signal for each product, while decision-level fusion methods require using the image network only selectively, making them more appealing. Moreover, our experiments suggest that decision-level fusion performs better than feature-level fusion in practice.

Finally, we were only able to realize a fraction of the potential of multi-modality. In the future, we plan to investigate deeper policy networks and more sophisticated measures of confidence. We also plan to investigate ensembles of image networks (Krizhevsky et al., 2012) and text networks (Pyo et al., 2010). We believe that the insights from training policy networks will eventually lead us to train end-to-end differentiable multi-modal networks.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 2013.

Alexis Conneau, Holger Schwenk, Loic Barrault, and Yann LeCun. Very deep convolutional networks for natural language processing. arXiv preprint arXiv:1606.01781, 2016.

Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, 2013.

Yunchao Gong, Liwei Wang, Micah Hodosh, Julia Hockenmaier, and Svetlana Lazebnik. Improving image-sentence embeddings using large weakly annotated photo collections. In European Conference on Computer Vision, pp. 529-545. Springer, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Multimodal neural language models. In International Conference on Machine Learning, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. Recurrent convolutional neural networks for text classification. In AAAI, 2015.

Hyuna Pyo, Jung-Woo Ha, and Jeonghee Kim. Large-scale item categorization in e-commerce using multiple recurrent neural networks. 2010.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Yijun Xiao and Kyunghyun Cho. Efficient character-level document classification by combining convolution and recurrent layers. arXiv preprint arXiv:1602.00367, 2016.
HJDdiT9gl

Abstract

Sequence-to-sequence models have been

1 Introduction

Building computer systems capable of general conversation is an important step toward machines that understand language, and for eventually passing the Turing test. The sequence-to-sequence (seq2seq) model has proven very popular as a purely data-driven approach in domains that can be cast as learning to map to and from variable-length sequences, with state-of-the-art results in many domains, including machine translation (Cho et al., 2014; Sutskever et al., 2014; Wu et al., 2016). Neural conversation models are the latest development in the domain of conversation modeling, with the promise of training computers to converse in an end-to-end fashion (Vinyals and Le, 2015; Shang et al., 2015; Sordoni et al., 2015; Wen et al., 2016). Despite promising results, there are still many challenges with this approach. In particular, these models produce short, generic responses that lack diversity (Sordoni et al., 2015; Li et al., 2015). Even when longer responses are explicitly encouraged (e.g. via length normalization), they tend to be incoherent ("The sun is in the center of the sun."), redundant ("i like cake and cake"), or contradictory ("I don't own a gun, but I do own a gun.").

In this paper, we provide two methods to address these issues with minimal modifications to the standard seq2seq model. First, we present a glimpse model that only trains on fixed-length segments of the target-side at a time, allowing us to scale up training to larger data sets. Second, we introduce a segment-based stochastic decoding technique which injects diversity earlier in the generated responses. Together, we find that these two methods lead to both longer responses and higher ratings, compared to a baseline seq2seq model with explicit length and diversity-promoting heuristics integrated into the generation procedure (see Table 1 for examples generated using our model).

In Section 2, we present a high-level overview of these two techniques. We then discuss each technique in more detail in the following sections.

Overview and Motivation

A major difference between translation and responding to conversations is that, in the former, the high-level semantic content to generate in the target sequence y is completely given by the source sequence, i.e., given the source x, there is low conditional entropy in the target distribution P(y|x). In the seq2seq approach, the decoder network therefore only has to keep track of where it is in the output, and the content to generate can be transformed from the relevant parts of the source via the attention mechanism (Bahdanau et al., 2014). In contrast, in conversation response generation, the prompt turn may be short and general (e.g., "what do you have planned tonight"), while an appropriate response may be long and informative.

The standard seq2seq model struggles with generating long responses, since the decoder has to keep track of everything output so far in its fixed-length hidden state vector, which leads to incoherent or even contradictory outputs. To combat this, we propose to integrate target-side attention into the decoder network, so it can keep track of what has been output so far. This frees up capacity in the hidden state for modeling the higher-level semantics required during the generation of coherent longer responses. We were able to achieve small perplexity gains using this idea on the small OpenSubtitles 2009 data set (Tiedemann, 2009). However, we found it to be too memory-intensive when scaling up to larger data sets.
As a trade-off, we propose a technique (called the "glimpse model") which interpolates between source-side-only attention on the encoder, and source and target-side attention on the encoder and decoder, respectively. Our solution simply trains the decoder on fixed-length glimpses from the target side, while having both the source sequence and the part of the target sequence before the glimpse on the encoder, thereby sharing the attention mechanism on the encoder. This can be implemented as a simple data-preprocessing technique with an unmodified standard seq2seq implementation, and allows us to scale training to very large data sets without running into any memory issues. See Figure 1 for a graphical overview, where we illustrate this idea with a glimpse model of length 3.

Given such a trained model, the next challenge is how to generate long, coherent, and diverse responses with the model. As observed in the previous section and in other work, standard maximum a posteriori (MAP) decoding using beam search often yields short, uninformative, and high-frequency responses. One approach to produce longer outputs is to employ length-promoting heuristics (such as length normalization (Wu et al., 2016)) during decoding. We find this increases the length of the outputs, however often at the expense of coherence. Another approach to explicitly create variation in the generated responses is to rerank the N-best MAP-decoded list of responses from the model using diversity-promoting heuristics (Li et al., 2015) or a backward RNN (Wen et al., 2015). We find this works for shorter responses, but not for long responses, primarily for two reasons: First, the method relies on the MAP-decoding to produce the N-best list, and as mentioned above MAP-decoding prefers short, generic responses. Second, it is too late to delay reranking in the beam search until the whole sequence has been generated, since beam-search decoding tends to yield beams with low diversity per given prompt, even when the number of beams is high. Instead, our solution is to break up the reranking over shorter segments, and to rerank segment-by-segment, thereby injecting diversity earlier during the decoding process, where it has the most impact on the resulting diversity of the generated beams.

To further improve variation in the generated responses, we replace the deterministic MAP-decoding of the beam search procedure with sampling. If a model successfully captures the distribution of responses given targets, one can expect simple greedy sampling to produce reasonable responses. However, due to model underfitting, the learned distributions are often not sharp enough, causing step-by-step sampling to accumulate errors along the way, manifesting as incoherent outputs. We find that integrating sampling into the beam-search procedure yields responses that are more coherent and with more variation overall.
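Since the glimpse model reduces to a data-preprocessing step over (source, target) pairs, it can be sketched in a few lines of Python; the whitespace tokenization and the name make_glimpse_examples below are illustrative, not from the paper.

```python
def make_glimpse_examples(source_tokens, target_tokens, K):
    """Expand one (source, target) pair into one training example per
    fixed-length target glimpse: the encoder sees the source plus the target
    prefix generated so far, and the decoder predicts the next K tokens."""
    for start in range(0, len(target_tokens), K):
        prefix = target_tokens[:start]            # target seen so far
        glimpse = target_tokens[start:start + K]  # next K tokens to predict
        yield source_tokens + prefix, glimpse

src = "what do you have planned tonight".split()
tgt = "i was planning on doing a few of these".split()
for enc, dec in make_glimpse_examples(src, tgt, K=3):
    print(enc, "->", dec)
```

Because each decoder example has length at most K, the memory cost of the decoder side is bounded regardless of the true response length, which is the property that makes large-scale training feasible.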
In summary, the contributions of this work are the following:

1. We propose to integrate target-side attention in neural conversation models, and provide a scalable approximation to it (the glimpse model), which scales well and is easy to implement on top of the standard sequence-to-sequence model.
2. We introduce a stochastic beam-search procedure with segment-by-segment reranking which improves the diversity of the generated responses.
3. We present large-scale experiments with human evaluations showing the proposed techniques improve over strong baselines.
4. We release our collection of context-free conversation prompts used in our evaluations as a benchmark for future open-domain conversation response research.

Prompt: what do you have planned tonight
A: I have no idea what I want to do with my life. I have no idea what I want to (Bad, Mediocre, Bad)
B: I was planning on doing a few of these, but I'm not sure what to choose. I need to find a place to camp, and I don't have a lot of time to make a plan. (Excellent, Good, Mediocre)

Prompt: I'd like to read a good science fiction novel. Haven't read scifi in a while.
A: If you like sci fi you should check out The Name of the Wind by Patrick Rothfuss (Excellent, Excellent, Excellent)
B: If you want to know what sci-fi is, read the first book by Neil Gaiman. It's a sci-fi sci-fi sci-fi novel. It is really good and I recommend it to all of you. It has a great premise. (Acceptable, Excellent, Acceptable)

Prompt: Got any big weekend plans?
A: I have plans for the weekend but I don't know if I will be able to make it to the (Bad, Mediocre, Bad)
B: I don't really have much to do on Saturday. But I plan on going to the movies at the mall and going to the show. (Good, Good, Bad)

Prompt: Sometimes flying messes my ears, because of the air pressure. Especially if I have a cold or something, flying gets me messed up.
A: I don't know why, but when I was a kid I used to do this all the time when I was younger. (Good, Bad, Bad)
B: I have no idea why, but I feel exactly the same way. I don't know why, but my brain is so sensitive to air that it makes my nose twitch. (Excellent, Mediocre, Good)

Prompt: Yeah. I have this friend, and we get bored just hanging out here sometimes, so we take little trips to places. We went to Seattle a couple of months ago.
A: What kind of things do you like to do? (Excellent, Excellent, Excellent)
B: That sounds like a great time. I have a lot of friends that go to Seattle, so I was wondering if you could share any of the locations? (Excellent, Excellent, Excellent)

Prompt: have you seen Suicide Squad?
A: I have not, but I will check it out. (Good, Good, Good)
B: I saw it in a theater once and it was great. It was very... I don't know, a little dark. I like the psychological effects and the way it portrays the characters in the movie. (Excellent, Excellent, Excellent)

Prompt: Actually I like the snow too, except for today. It's all slushy and it's all over the sidewalks and it's gross.
A: I hate the snow. I hate the snow. I hate the snow. I hate the snow. I hate the snow. I hate the snow. I hate the snow. I hate the snow. (Acceptable, Good, Good)
B: I'm in the same boat as you. I hate it when it's snowy and I don't like to shovel it all day. :( So this is a perfect excuse to not
get out of bed and go somewhere else. (Excellent, Good, Acceptable)

Prompt: What do you hope to see in the future?
A: I want to be able to go back in time to when I was born. I want to be (Mediocre, Bad, Mediocre)
B: I want to see an actual movie where the characters have the ability to hide the fact that they are actually in the movie. (Good, Acceptable, Acceptable)

Prompt: what dangers could happen with the singularity?
A: I don't think it would be that big of a deal. (Acceptable, Mediocre, Acceptable)
B: If you look at it from the perspective of an alien, you can't say that there aren't any alternative ways to deal with it. (Acceptable, Mediocre, Acceptable)

Prompt: I think you're a little crazy.
A: I don't think you know what that word means. (Acceptable, Good, Good)
B: I am, actually. I don't think I'm crazy though. I certainly am. (Acceptable, Mediocre, Acceptable)

Table 1: Some example responses generated from the baseline (A) compared to our model (B), with human ratings in parentheses. These were selected for quality. See the appendix for an unbiased sample.

3 Seq2Seq Model with Attention on Target

In the seq2seq framework, the encoder reads the source sequence x and the decoder models the conditional distribution of the target y given x, one token at a time:

P(y_i | y_{0:i-1}; x) = DecoderRNN(y_{i-1}, h_{i-1}, Attention(h_{i-1}, x))

P(y | x) = prod_{i=1}^{N} P(y_i | y_{0:i-1}; x)

Figure 1: (a) The vanilla sequence-to-sequence model. (b) The length-3 target-glimpse model.

The OpenSubtitles 2009 data set is small, and the majority of its response sequences are shorter than 10 tokens. This may prevent us from seeing bigger gains, since our method is designed to help with longer outputs. In order to train on the much larger Reddit data set, we implemented this method on top of the GNMT model (Wu et al., 2016). Unfortunately, we met with frequent out-of-memory issues, as the 8-layer GNMT model is already very memory-intensive, and adding target-side attention made it even more so. Ideally, we would like to retain the model's capacity in order to train a rich response model, and therefore a more efficient approach is necessary.

To this end, we propose the target-glimpse model, which trains the decoder on length-K glimpses of the target while the source and the preceding target prefix are placed on the encoder. For example, suppose the target sequence y is y_0, y_1, y_2, ..., y_10, and K = 3. The first glimpse on the decoder side is then y_0, y_1, y_2, the second is y_3, y_4, y_5, and so on, while the encoder input is the source concatenated with all target tokens preceding the current glimpse.

While decoding each glimpse, the decoder therefore attends to both the source sequence and the part of the target sequence that precedes the glimpse, thereby benefiting from the GNMT encoder's bidirectional RNN. Through generalization, the decoder should learn to decode a glimpse of length K at any arbitrary position of the target sequence (which we will exploit in our decoding technique discussed in Section 4). One drawback of this model, however, is that the context inputs to the attention mechanism only include the words that have been generated so far in this glimpse, rather than the words from the full target side. The workaround that we use is to simply connect the last hidden state of the GNMT encoder to the initial hidden state of the decoder¹, thereby giving the decoder access to all previous symbols regardless of the starting position of the glimpse.

¹This is the default in standard seq2seq models, but not in the GNMT model.

4 Stochastic Decoding with Segment-by-Segment Reranking

We now turn our attention from training to inference (decoding). Our strategy is to perform reranking with a normalized score at the segment level, where we generate the candidate segments using a trained glimpse model and a stochastic beam search procedure, which we discuss next. The full decoding algorithm proceeds segment by segment.

The standard beam search algorithm generates symbols step-by-step by keeping a set of the B highest-scoring beams generated so far at each step². The algorithm adds all possible single-token extensions to every existing beam, and then selects the top B beams. In our stochastic beam search algorithm, we replace this deterministic top-B selection by a stochastic sampling operation in order to encourage variation. Further, to discourage a single beam from dominating the search and decreasing the final response diversity, we perform a two-step sampling procedure: 1) For each single-token extension of an individual beam we don't enumerate all possibilities, but instead sample a fixed number of D candidate tokens to be added to the beam. This yields a total of B · D beams, each with one additional symbol. 2) We then compute the accumulated conditional log-probabilities for each beam (normalized across all B · D beams), and treat these as the logits for sub-sampling B beams for the next step. We repeat this procedure until we reach the desired segment length H, or until a segment ends with the end-of-sequence token.

²Beams are also called "hypotheses", and B is referred to as the "beam width".
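A runnable toy version of this two-step sampling procedure is sketched below; next_token_probs stands in for the real decoder, whether the sub-sampling is done with or without replacement is our assumption, and all names are illustrative.

```python
import numpy as np

def stochastic_beam_step(beams, scores, next_token_probs, B, D, rng):
    """One step: sample D extensions per beam, then sub-sample B of the
    B*D candidates using their accumulated log-probabilities as logits."""
    cand_beams, cand_scores = [], []
    for beam, score in zip(beams, scores):
        p = next_token_probs(beam)                       # (vocab,) distribution
        toks = rng.choice(len(p), size=D, replace=False, p=p)
        for t in toks:
            cand_beams.append(beam + [int(t)])
            cand_scores.append(score + np.log(p[t]))     # accumulated log-prob
    logits = np.array(cand_scores)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                 # normalize across B*D
    keep = rng.choice(len(cand_beams), size=min(B, len(cand_beams)),
                      replace=False, p=probs)
    return [cand_beams[i] for i in keep], [cand_scores[i] for i in keep]

rng = np.random.default_rng(0)
toy_model = lambda beam: np.full(20, 1.0 / 20)           # stand-in decoder
beams, scores = [[0]], [0.0]
for _ in range(3):                                       # segment length H = 3
    beams, scores = stochastic_beam_step(beams, scores, toy_model, B=4, D=4, rng=rng)
print(beams)
```

When the model's next-token distribution is peaked, the sampled candidates coincide with the top tokens and the procedure degenerates to ordinary beam search, matching the observation in the text below.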
For a given source sequence, we can use this stochastic beam search algorithm to generate B candidate H-length segments as the beginning of the target sequence. We then perform a reranking step (described below), and keep one of these. The concatenation of the source and the first target segment is then used as the input for generating the next B candidate segments. The algorithm continues until the selected segment ends with an end-of-sequence token.

This algorithm behaves similarly to standard beam search when the categorical distribution used during the process is sharp ("peaked"), since the samples are likely to be the top categories (words). However, when the distribution is smooth, many of the choices are likely. In conversation response generation we are dealing with a conditional probability model with high entropy, so this is what often happens in practice.

To rerank the B candidate segments, we use the following normalized score:

score(y_k) = P(y_k | x, y_{1:k-1}) / sum_{x' ∈ X} P(y_k | x', y_{1:k-1}),

where y_k denotes the k-th segment, y_{1:k-1} the segments selected so far, and X a set of random prompts x'; these are kept separate from the context-free evaluation set introduced in Section 5.1.

It is worth noting that when X is an unbiased sample from P(x), the summation in the denominator is a Monte-Carlo approximation of P(y_k | y_{1:k-1}). In the case of reranking whole target sequences y, this becomes the marginal P(y), which corresponds to the same diversity-promoting objective used in (Li et al., 2015). However, we found that our approximation works better in terms of N-choose-1 accuracy (see Section 5.2), which suggests that its value may be closer to the true conditional probability.
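The segment-reranking score can be sketched directly from the formula above; seg_prob stands in for the model's P(y_k | x, y_{1:k-1}) and all names are illustrative.

```python
import numpy as np

def rerank_segments(prompt, prefix, segments, random_prompts, seg_prob):
    """Pick the candidate segment whose probability under the true prompt is
    largest relative to its summed probability under random prompts."""
    scores = []
    for seg in segments:
        denom = sum(seg_prob(x_rand, prefix, seg) for x_rand in random_prompts)
        scores.append(seg_prob(prompt, prefix, seg) / max(denom, 1e-30))
    return segments[int(np.argmax(scores))]
```

The denominator penalizes segments that are likely under almost any prompt, i.e., the generic responses that plain MAP decoding tends to favor.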
A child node is a message that replies to its parent. This may not necessarily be true, as people may be replying to other messages that are also visually close. However, for our current single-turn experiments, we treat these as a single exchange.

³Download links are at https://redd.it/3bxlg7

In this setting, the GNMT model trained on prompt-to-response pairs works surprisingly well without modification when generating short responses with beam search. Similar to previous work on neural conversation models, we find that the generated responses are almost always grammatical, and sometimes even interesting. They are also usually on topic. In addition, we found that even greedy sampling from the 8-layer GNMT model produces grammatical responses most of the time, although these responses are more likely to be semantically broken than responses generated using standard beam search. We would like to leverage the benefits of greedy sampling, because the induced variation generates more surprises and may potentially help improve user engagement, and we found that our proposed segment-based beam sampling procedure accomplishes this to some extent.

"}, {"section_index": "5", "section_name": "5.1 Evaluation Metric", "section_text": "It is difficult to come up with an objective evaluation metric for open-domain conversation responses, so we rely on human evaluation. In the 5-scale human evaluation, we use a collection of 200 context-free prompts⁴. These prompts are collected from the following sources, and filtered to prompts that are context-free (i.e. do not depend on previous turns in the conversation) and general enough, and by eliminating near duplicates:

1. The questions and statements that users asked an internal testing bot.
2. The Fisher corpus (David et al., 2004).
3. User inputs to the Jabberwacky chatbot⁵.

⁴This list will be released to the community.
⁵http://www.jabberwacky.com/

These can be either generic or specific. Some example prompts from this collection are shown in Table 1. These prompts are open-domain (not about any specific topic), and include a wide range of topics. Many require some creativity for answering, such as "Tell me a story about a bear." Our evaluation set is therefore not from the same distribution as our training set. However, since our goal is to produce good general conversation responses, we found it to be a good general-purpose evaluation set.

The evaluation itself is done by human raters. They are well trained for the purpose of ensuring rating quality, and they are native English speakers. A 5-scale rating is produced for each prompt-response pair: Excellent, Good, Acceptable, Mediocre, and Bad. For example, the instruction for rating Excellent is "On topic, interesting, shows understanding, moves the conversation forward. It answers the question." The instruction for Acceptable is "On topic but with flaws that make it seem like it didn't come from a human. It implies an answer." The instruction for Bad is "A completely off-topic statement or question, nonsensical, or grammatically broken. It does not provide an answer."

In our experiments, we perform the evaluations side-by-side, each time using responses generated from two methods. Every prompt-response pair is rated by three raters. We rate 200 pairs in total for every method, garnering 600 ratings overall.
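For concreteness, the aggregation of these ratings can be summarized as in the following sketch (ours, purely illustrative); it computes the rating histogram and the major-agreement statistic that is reported in Section 5.3.

```python
from collections import Counter

def summarize(ratings_per_pair):
    """ratings_per_pair: one (r1, r2, r3) label triple per prompt-response
    pair, labels from the 5-scale Bad..Excellent. Returns the rating
    histogram and the fraction of pairs with 'major agreement', i.e. at
    least two of the three ratings identical."""
    histogram = Counter(r for triple in ratings_per_pair for r in triple)
    major = sum(1 for t in ratings_per_pair
                if Counter(t).most_common(1)[0][1] >= 2)
    return histogram, major / len(ratings_per_pair)

hist, agreement = summarize([("Good", "Good", "Acceptable"),
                             ("Bad", "Mediocre", "Excellent")])
print(hist, agreement)  # agreement = 0.5 for this toy input
```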
After the evaluation, we report aggregated results for each method individually.

"}, {"section_index": "6", "section_name": "5.2 Motivating Experiments", "section_text": "To see whether generating long responses is indeed a challenging problem, we trained the plain seq2seq setup with the GNMT model, where the encoder holds the source sequence and the decoder holds the target sequence. We experimented with the standard beam search and the beam search with length normalization α = 0.8, similar to (Wu et al., 2016). With this length normalization the generated responses are indeed longer. However, they are more often semantically incoherent. It produces "I have no idea what you are talking about." more often, as similarly observed in (Li et al., 2016). The human evaluation results are summarized in Figure 2(b). Methods that generate longer responses have more Bad and fewer Excellent / Good ratings.

We also performed the N-choose-1 evaluation on the baseline model using different normalization schemes. The results are shown in Table 2(a). No Normalization means that we use P(y|x) for scoring, Normalize by Marginal uses P(y|x)/P(y), as suggested in (Li et al., 2015), and Normalize by Random Prompts is our scoring objective described in Section 4. The significant boost when using both normalization schemes indicates that the conditional log-probability predicted by the model may be biased towards the language model probability P(y). After adding the normalization, the score may be closer to the true conditional log-probability.

Overall, this reranking evaluation indicates that our heuristic is preferred to scoring using the marginal. However, it is unfortunately hard to directly make use of this score during beam search decoding (i.e., generation), since the resulting sequences are usually ungrammatical, as also observed by (Li et al., 2015). This is the motivation for using a segment-by-segment reranking procedure, as described in Section 4.

"}, {"section_index": "7", "section_name": "5.3 Large-Scale Experiments", "section_text": "For our large-scale experiments, we train our target-glimpse model on the full combined data set. Figure 2(d) shows the training progress curve. In this figure, we also include the curve for K = 1, that is, the glimpse model with decoder-length 1. It is clear enough that this model progresses much slower, so we terminated it early. However, it is surprising that the glimpse model with K = 10 progresses faster than the baseline model with only source-side attention, because the model is trained on examples with decoder-length fixed at 10, while the average response length is 38 in our data set. This means it takes on average 3.8 training steps for the glimpse model to train on the same number of raw training pairs as the baseline model. Despite this, the faster progress indicates that target-side attention indeed helps the model generalize better.

The human evaluation results shown in Figure 2 compare our proposed method with the baseline seq2seq model. For this, we trained a length-10 target-glimpse model and decoded with stochastic beam search using segment-by-segment reranking. In our experiments, we were unable to generate better long, coherent responses using the whole-sequence level reranking method from (Li et al., 2015) compared to using standard beam search with length normalization⁶.
We therefore choose the latter as our baseline, because it is the only method which generates responses that are long enough that we can compare to.

⁶This is because the method reranks the responses in the N-best list resulting from the beam search, which tend to be short with not much variation to begin with.

Figure 2 shows that our proposed method generates more long responses overall. One third of all responses are longer than 100 characters, while the baseline model produces only a negligible fraction. Although we do not employ any length-promoting objectives in our method, length normalization is used for the baseline. For responses generated by our method, the proportion of Acceptable and Excellent responses remains constant or even increases as the responses grow longer. Conversely, human ratings decline sharply with length for the baseline model.

The percentage of test cases with major agreement is high for both methods. We consider a test to have major agreement if two ratings out of the three are the same. For the baseline method, 80% of the responses have major agreements, and for our method it is 70%.

However, shorter responses have a much smaller search space, and we find that standard beam search tends to generate better ("safer") short responses. To maximize cumulative response quality, we therefore implemented a back-off strategy that combines the strengths of the two methods. We choose to fall back to the baseline model without length normalization when the latter produces a response shorter than 40 characters; otherwise we use the response from our method. This corresponds to the white histogram in Figure 2(b). Compared to the other methods in the figure, the combined strategy results in more ratings of Excellent, Good, Acceptable, and Mediocre, and fewer Bad ratings. With this strategy, among the responses generated for the same 200 prompts, 133 were from the standard beam search and 67 were from our model. Out of the 67 long responses, two thirds were longer than 60 characters and half were longer than 75 characters. To compare the combined model's performance with the baseline, we generated responses from both models using the same 200 prompts. For 20 of the response pairs, human raters had no preference, but for the remaining 180, human raters preferred the combined model's response in 103 cases and the baseline's in only 77, indicating a significant win.

"}, {"section_index": "8", "section_name": "6 Conclusion", "section_text": "The research of building end-to-end systems that can engage in general-purpose conversation is still in its infancy. More significant progress is expected to be made with more advanced neural architectures. However, our results reported in this paper show that minimal modeling change and a slightly more advanced decoding technique, combined with training over very large data sets, can still lead to noticeable improvements in the quality of responses generated using neural conversation models. Overall, we found that using fixed lengths in the decoder makes it easier to train on large data sets, and allows us to improve the diversity and coherence of the generated responses earlier during generation, when it has most impact. While the focus of this work has been on conversation modeling, we expect some of these results to carry over to other sequence-to-sequence settings: machine translation, image-captioning, and
such"}, {"section_index": "9", "section_name": "Acknowledgments", "section_text": "We would like to thank Quoc Le, Oriol Vinyals and Jakob Uszkoreit for many helpful discussions. and Scott Benson, Fuchun Peng for collecting the context-free prompt set, and Amin Ahmad for setting up the human evaluation, and Rami Eid. Daniel Cer for collecting training data sets, and Yonghui Wu, Zhifeng Chen, Mike Schuster for help on training the GNMT model.\ncontext-free prompt set, and Amin Ahmad for"}, {"section_index": "10", "section_name": "References", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Cor- rado, Andy Davis, Jeffrey Dean, Matthieu Devin\nSanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore. Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Tal- war, Paul Tucker, Vincent Vanhoucke, Vijay Va- sudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous sys- tems. Software available from tensorflow.org. http://tensorflow.org/.\nChristopher Cieri David, David Miller, and Kevin Walker. 2004. The fisher corpus: a resource for the next generations of speech-to-text. In in Proceed ings 4th International Conference on Language Re- sources and Evaluation. pages 69-71.\nLifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. arXiv preprint arXiv:1503.02364 .\nTsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-hao Su, David Vandyke, and Steve J.. Young. 2015. Stochastic language generation in di-. alogue using recurrent neural networks with convo-. lutional sentence reranking. CoRR abs/1508.01755\nYonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey,. Maxim Krikun, Yuan Cao, Qin Gao, Klaus. Macherey, Jeff Klingner, Apurva Shah, Melvin. Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan. Gouws,Yoshikiyo Kato, Taku Kudo, Hideto. Kazawa, Keith Stevens, George Kurian, Nishant. Patil. Wei Wang. Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado,. Macduff Hughes, and Jeffrey Dean. 2016. Google's. neural machine translation system: Bridging the gap. between human and machine translation .."}, {"section_index": "11", "section_name": "A The Context-Free Prompt List and Evaluation Results", "section_text": "selected purely at random, and not cherry-picked\n1. Excellent\n(a) Prompt: what does your handwriting look like? Response: I have a pen on the left hand side. (b) Prompt: have you seen Suicide Squad? Response: I saw it in a theater once and it was great. It wa very... I don't know, a little dark. I like the psychological effeci and the way it portrays the characters in the movie.\n(d) Prompt: what are you watching right now? Response: watch the new season of rick and morty (e) Prompt: Actually I like the snow too, except for today. It's all slushy and it's all over the sidewalks and it's gross. Response: I'm in the same boat as you. I hate it when it's snowy and I don't like to shovel it all day. :( So this is a perfect excuse to not get out of bed and go somewhere else Good (a) Prompt: I went on a bike ride today Response: That is awesome. (b) Prompt: how was your weekend? what'd you do? 
Response: I had a good weekend and I had a good time. I jusi wanted to get some rest. (C) Prompt: get anything good for xmas? Response: I got a bunch of stuff to give away, and I don't know how long it will take to send. I know I have a lot of stuff that I want to give to people and I don't know if I can give it away to someone else. (d) Prompt: What do you want to be? Response: I want to be a psychologist (e) Prompt: Nothing makes me happy. :(. Response: I can't even imagine how awful this feels to be\n3. Acceptable\n(a) Prompt: Are you good at math? Response: I have to admit, I am a terrible coder... (b) Prompt: I didn't used to like Indian food, but I think at this point I like just about anything. Response: Indian indian food is so good. (c) Prompt: i was in bed all day yesterday Response: how do you feel? (d) Prompt: It sounds like you're a sports fan. Response: I am, I watch a lot of baseball and I don't want to be a sports fan. (e) Prompt: I've never learned how to fly Response: You have no idea how to fly a plane. It's not that hard.\nMediocre (a) Prompt: I live alone, and if I eat at home I eat whatever I can put in the microwave that night. A lot of times it's just that, or I'll trade with friends; I'll cook and they'll cook and then we'll. swap leftovers. Response: I live alone, and if I eat at home, I eat whatever I can put in the microwave that night. I do this all the time. I eat the same thing over and over again. (b) Prompt: I don't eat seafood. I don't eat anything that used to swim. I know that's kind of rigid, but I had a bad experience when I was a kid. Just doesn't do it for me. Response: I don't eat anything that used to swim. That's not the point. The point is that you can eat fish that will eat whatever you want. That's just not the same thing as fish. (c) Prompt: I find I like Japanese food a lot. I can make a number of the types of rolls and various types of sushi. Of course I end up doing a lot of things with noodle soup just because it's relatively cheap and fast. Response: I don't understand, how would you go about getting a sushi roll with Japanese food? (d) Prompt: Let's talk about cooking. Do you own a steamer? Response: I own a steamer but my user agent is a hunter, so he owns a steamer, but I don't own one, he uses it a lot but I used. to own a steamer. (e) Prompt: Have you heard of Turing? Response: I think I have. I like it. .Bad (a) Prompt: I've had a cold myself this week. Right now I'm trying to figure out how to get rid of it. Response: I have a cold and I don't have any colds because I have a cold. (b) Prompt: How much time would you say you spend attached to a computer? 2 hours? 10 hours? Response: I have a gaming computer that I'd rather not have to spend hours on, so I spend all day playing games on it on a\nC Prompt: How significant is the potential for nuclear energy? Response: I think there is a very good chance it will be a very strong nuclear weapon, but it wouldn't make a huge impact. The only thing that has a chance of making it into nuclear power is a nuclear disaster, which will be very very very un- likely if it ever does.. (d) Prompt: I really like the trees of san francisco. Response: I like the new one too. (e) Prompt: I don't eat seafood. I don't eat anything that used to swim. I know that's kind of rigid, but I had a bad experience when I was a kid. Just doesn't do it for me.. Response: I don't eat anything that used to swim. That's not the point. The point is that you can eat fish that will eat whatever you want. 
That's just not the same thing as fish.."}, {"section_index": "12", "section_name": "A.2 Baseline Method", "section_text": "1. Excellent\n(a) Prompt: What do you think of Asimov's novels? Response: I read them when I was in high school and I loved. them. (b) Prompt: Finally we're getting a winter with some snow.. Response: Where is this? I want to go to there. (c)Prompt: Are you a bot? Response: I am a bot, and this action was performed automat. ically. (d) Prompt: Are you at work now? Response: Yes, I am. (e) Prompt: Actually, we just got a snow storm yesterday. I live in Maryland. Where do you live, on the east coast? Response: I live on the east coast of the US 2.Good (a) Prompt: Do you read the bible? Response: I don't read the bible. I read the bible. I read the. bible. I read the bible (b) Prompt: have you seen Suicide Squad? Response: I have not, but I will check it out. (C) Prompt: Are you at work now? Response: Yes, I am. (d) Prompt: I run alot. That's my jam. I run a marathon a year.. Response: Do you run on a treadmill or a treadmill? (e) Prompt: how was your dentist appt? Response: It was a lot better than I thought it would be.. 3.Acceptable (a) Prompt: What do you think of Asimov's novels? Response: I read them when I was in high school and I loved. them. (b) Prompt: is chicken healthier than beef? Response: Yes, it is. (c) Prompt: It's a beautiful day Response: I wish I could upvote this more than once.. (d) Prompt: Let's talk about cooking. Do you own a steamer?. Response: I have a steamer and a steamer (e) Prompt: what r u doing this summer?\n(a) Prompt: What do you think about universal healthcare? Response: I don't like the idea of universal healthcare. I don't like the idea of universal healthcare. I think (b) Prompt: It will be very cool when we succeed in sending astro- nauts toMars. Response: I think it would be a good idea to send astronauts to Mars. C) Prompt: I've a software engineer at Google, it's a good life Response: I have no idea what you are talking about. (d) Prompt: I taught in Kansas. I taught French and Spanish at high school and college level. Response: Did you learn it in high school? (e) Prompt: What makes you happy? Response: The fact that I don't have to worry about my Bad (a) Prompt: i need to start eating healthier Response: Your submission has been automatically removed pending manual approval because your account is either too\n(b) Prompt: where did you grow up?. Response: where are you from? (C) Prompt: i hate being the center of attention. Response: I hate it when people do this.. (d) Prompt: You can't eat pasta every day. You can't live on only Italian food. Response: I don't eat Italian food, but I do eat a lot of Italian food. (e) Prompt: You are a program, and I am human. Response: I am a bot, and this action was performed automat- ically. Please contact the moderators of this subreddit if you have\nN for computing. 50 10 2 \"N-choose-1\" Overall Human Eval. 300 Beam search on plain seq2seq Beam search with length normalization 0.8. 250 Segment Beam Sample with Glimpse Model. 0.047 0.15 0.56 No Normalization. With fallback to beam search. Raetn %150 Normalize by Juqwnrr 0.44 0.65 0.91 Marginal 100 Normalize by 50 Random Prompts 0.61 0.78 0.97 (our heuristics). Bad MediocreAcceptable. Good Excellent (a) (b) Number of responses above length threshold. Eval Log Perplexity w.r.t. Time 7.0 200 Baseline Length-1 Glimpse Our method 6.5 Length-10 Glimpse Plain Seq2Seq 6.0 Length-10 Glimpse with Enc-to-Dec Connection. 
[Plot data for Figure 2 omitted. Panel (a) tabulates N-choose-1 accuracies for the schemes No Normalization, Normalize by Marginal, and Normalize by Random Prompts (our heuristics); panels (c), (e) and (f) plot response counts and rating proportions against response-length thresholds for the baseline and our method.]

Figure 2: (a) N-choose-1 evaluation on the baseline model. (d) Training progress of different models on the full combined data set. Length-1 and Length-10 are the target-glimpse models we propose, and Plain Seq2Seq is the baseline model we described. (b)(c)(e)(f): Human evaluation results on the conversation data. (b) The histogram of 5 ratings per method. (c) The length thresholds (horizontal axis) and the number of responses generated that are above the length threshold (vertical axis); (e) The proportion of responses above the length threshold that are judged at least Acceptable; (f) The proportion of responses above the length threshold that are judged as Excellent. The length thresholds are all measured in number of characters.
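To make the decoding procedure of Section 4 concrete, the following self-contained sketch implements the two-step stochastic beam sampling and the normalized segment score. It is our illustration under stated simplifications (token sampling with replacement, and a toy next-token distribution standing in for the trained glimpse model), not the authors' code.

```python
import math
import random

def logsumexp(vals):
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def stochastic_beam_step(beams, step_logprobs, B=4, D=4):
    """One step of the two-step sampling: for each beam, sample D candidate
    tokens (with replacement, for simplicity), then sub-sample B of the
    B*D extended beams using their accumulated log-probabilities,
    normalized across all candidates, as logits."""
    candidates = []
    for prefix, score in beams:
        dist = step_logprobs(prefix)  # dict: token -> log P(token | prefix)
        toks = random.choices(list(dist), k=D,
                              weights=[math.exp(v) for v in dist.values()])
        candidates += [(prefix + [t], score + dist[t]) for t in toks]
    z = logsumexp([s for _, s in candidates])
    weights = [math.exp(s - z) for _, s in candidates]
    return random.choices(candidates, weights=weights, k=B)

def segment_score(seg_logprob, random_prompt_logprobs):
    """Normalized reranking score for one candidate segment: its conditional
    log-probability minus a Monte-Carlo estimate of its marginal, computed
    from the same model conditioned on random prompts."""
    return seg_logprob - logsumexp(random_prompt_logprobs)

# Toy next-token distribution, just to make the sketch executable.
toy = lambda prefix: {"yes": math.log(0.5), "no": math.log(0.3),
                      "</s>": math.log(0.2)}
beams = [([], 0.0)]
for _ in range(3):  # one segment of length H = 3
    beams = stochastic_beam_step(beams, toy)
```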
ry8u21rtl | [{"section_index": "0", "section_name": "Abstract", "section_text": "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Tempora1 Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%."}, {"section_index": "1", "section_name": "1 Introduction", "section_text": "Deep learning has seen tremendous success in areas such as image and speech recognition. In orde to learn useful abstractions, deep learning models require a large number of parameters, thus making them prone to over-fitting (Figure 1). Moreover, adding high-quality labels to training data manually. is often expensive. Therefore, it is desirable to use regularization methods that exploit unlabeled data. effectively to reduce over-fitting in semi-supervised learning..\nWhen a percept is changed slightly, a human typically still considers it to be the same object. Corre spondingly, a classification model should favor functions that give consistent output for similar data points. One approach for achieving this is to add noise to the input of the model. To enable the model to learn more abstract invariances, the noise may be added to intermediate representations, an insight that has motivated many regularization techniques, such as Dropout [28]. Rather than minimizing the classification cost at the zero-dimensional data points of the input space, the regularized model minimizes the cost on a manifold around each data point, thus pushing decision boundaries away from the labeled data points (Figure|1b).\nSince the classification cost is undefined for unlabeled examples, the noise regularization by itseli does not aid in semi-supervised learning. To overcome this, the T model [21] evaluates each data point with and without noise, and then applies a consistency cost between the two predictions. In this case, the model assumes a dual role as a teacher and a student. As a student, it learns as before; as a teacher, it generates targets, which are then used by itself as a student for learning. Since the mode itself generates targets, they may very well be incorrect. If too much weight is given to the generated targets, the cost of inconsistency outweighs that of misclassification, preventing the learning of nev"}, {"section_index": "2", "section_name": "Mean teachers are better role models:. 
Weight-averaged consistency targets improve semi-supervised deep learning results", "section_text": "The Curious AI Company\nharri@cai.fi\nFigure 1: A sketch of a binary classification task with two labeled examples (large blue dots) anc one unlabeled example, demonstrating how the choice of the unlabeled target (black circle) affects the fitted function (gray curve). (a) A model with no regularization is free to fit any function that predicts the labeled training examples well. (b) A model trained with noisy labeled data (small dots learns to give consistent predictions around labeled data points. (c) Consistency to noise around unlabeled examples provides additional smoothing. For the clarity of illustration, the teacher model (gray curve) is first fitted to the labeled examples, and then left unchanged during the training of the student model. Also for clarity, we will omit the small dots in figures d and e. (d) Noise on the teache model reduces the bias of the targets without additional training. The expected direction of stochastic gradient descent is towards the mean (large blue circle) of individual noisy targets (small blue circles) (e) An ensemble of models gives an even better expected target. Both Temporal Ensembling and the Mean Teacher method use this approach.\nThere are at least two ways to improve the target quality. One approach is to choose the perturbation of the representations carefully instead of barely applying additive or multiplicative noise. Another approach is to choose the teacher model carefully instead of barely replicating the student model Concurrently to our research, Miyato et al. [16] have taken the first approach and shown that Virtual Adversarial Training can yield impressive results. We take the second approach and will show that it too provides significant benefits. To our understanding, these two approaches are compatible, and their combination may produce even better outcomes. However, the analysis of their combined effects is outside the scope of this paper.\nOur goal, then, is to form a better teacher model from the student model without additional training As the first step, consider that the softmax output of a model does not usually provide accurate predictions outside training data. This can be partly alleviated by adding noise to the model a inference time [4], and consequently a noisy teacher can yield more accurate targets (Figure|1d). Thi. approach was used in Pseudo-Ensemble Agreement [2] and has lately been shown to work well or semi-supervised image classification [13] 23]. Laine & Aila [13] named the method the I model; we will use this name for it and their version of it as the basis of our experiments.\nThe I model can be further improved by Temporal Ensembling [13], which maintains an exponentia moving average (EMA) prediction for each of the training examples. At each training step, al the EMA predictions of the examples in that minibatch are updated based on the new prediction Consequently, the EMA prediction of each example is formed by an ensemble of the model's currer version and those earlier versions that evaluated the same example. This ensembling improves th quality of the predictions, and using them as the teacher predictions improves results. However, sinc each target is updated only once per epoch, the learned information is incorporated into the trainin process at a slow pace. 
The larger the dataset, the longer the span of the updates, and in the case of on-line learning, it is unclear how Temporal Ensembling can be used at all. (One could evaluate all the targets periodically more than once per epoch, but keeping the evaluation span constant would require O(n²) evaluations per epoch, where n is the number of training examples.)"}, {"section_index": "3", "section_name": "2 Mean Teacher", "section_text": "To overcome the limitations of Temporal Ensembling, we propose averaging model weights instead of predictions. Since the teacher model is an average of consecutive student models, we call this the Mean Teacher method (Figure 2). Averaging model weights over training steps tends to produce a more accurate model than using the final weights directly [19]. We can take advantage of this during training to construct better targets. Instead of sharing the weights with the student model, the teacher model uses the EMA weights of the student model. Now it can aggregate information after every step instead of every epoch. In addition, since the weight averages improve all layer outputs, not just the top output, the target model has better intermediate representations. These aspects lead to two practical advantages over Temporal Ensembling: First, the more accurate target labels lead to a faster feedback loop between the student and the teacher models, resulting in better test accuracy. Second, the approach scales to large datasets and on-line learning.

[Figure 2 diagram omitted: the student model (input with noise η, softmax prediction compared to the label via the classification cost) and the teacher model (input with noise η′, prediction compared to the student prediction via the consistency cost), with the teacher weights formed as an exponential moving average of the student weights.]

Figure 2: The Mean Teacher method. The figure depicts a training batch with a single labeled example. Both the student and the teacher model evaluate the input applying noise (η, η′) within their computation. The softmax output of the student model is compared with the one-hot label using classification cost and with the teacher output using consistency cost. After the weights of the student model have been updated with gradient descent, the teacher model weights are updated as an exponential moving average of the student weights. Both model outputs can be used for prediction, but at the end of the training the teacher prediction is more likely to be correct. A training step with an unlabeled example would be similar, except no classification cost would be applied.

More formally, we define the consistency cost J as the expected distance between the prediction of the student model (with weights θ and noise η) and the prediction of the teacher model (with weights θ′ and noise η′):

J(θ) = E_{x,η′,η} [ || f(x, θ′, η′) − f(x, θ, η) ||² ]

We can approximate the consistency cost function J by sampling noise η, η′ at each training step with stochastic gradient descent. Following Laine & Aila [13], we use mean squared error (MSE) as the consistency cost in most of our experiments.

The difference between the Π model, Temporal Ensembling, and Mean Teacher is how the teacher predictions are generated. Whereas the Π model uses θ′ = θ, and Temporal Ensembling approximates f(x, θ′, η′) with a weighted average of successive predictions, we define θ′_t at training step t as the EMA of successive θ weights:

θ′_t = α θ′_{t−1} + (1 − α) θ_t

where α is a smoothing coefficient hyperparameter. An additional difference between the three algorithms is that the Π model applies training to θ′, whereas Temporal Ensembling and Mean Teacher treat it as a constant with regards to optimization.
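The following minimal sketch restates the two formulas above in code: one sampled evaluation of the consistency cost and one EMA update of the teacher weights. It is an illustration only; the toy model f, the noise scale, and the weight layout are our assumptions, and in the real method gradients flow only through the student branch.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Noisy forward pass: `theta` is a dict of weights; input noise stands in
# for the paper's augmentation/dropout noise (eta). Illustrative only.
def f(x, theta, rng):
    return softmax((x + rng.normal(0.0, 0.1, size=x.shape)) @ theta["W"])

def consistency_cost(x, theta_student, theta_teacher, rng):
    # One-sample approximation of J(theta): MSE between the noisy student
    # prediction (noise eta) and the noisy teacher prediction (noise eta').
    student = f(x, theta_student, rng)
    teacher = f(x, theta_teacher, rng)   # independent noise draw
    return np.mean((student - teacher) ** 2)

def ema_update(theta_teacher, theta_student, alpha=0.999):
    # theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t
    return {k: alpha * theta_teacher[k] + (1 - alpha) * theta_student[k]
            for k in theta_student}

rng = np.random.default_rng(0)
theta_s = {"W": rng.normal(size=(3, 2))}
theta_t = {k: v.copy() for k, v in theta_s.items()}
x = rng.normal(size=3)
loss = consistency_cost(x, theta_s, theta_t, rng)  # backprop through student
theta_t = ema_update(theta_t, theta_s)             # after the gradient step
```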
Table 1: Error rate percentage on SVHN over 10 runs (4 runs when using all labels). We use exponential moving average weights in the evaluation of all our models. All the methods use a similar 13-layer ConvNet architecture. See Table 5 in the Appendix for results without input augmentation.

Method | 250 labels | 500 labels | 1000 labels | 73257 labels
GAN [25] | | 18.44 ± 4.8 | 8.11 ± 1.3 |
Π model [13] | | 6.65 ± 0.53 | 4.82 ± 0.17 | 2.54 ± 0.04
Temporal Ensembling [13] | | 5.12 ± 0.13 | 4.42 ± 0.16 | 2.74 ± 0.06
VAT+EntMin [16] | | | 3.86 |
Supervised-only (ours) | 27.77 ± 3.18 | 16.88 ± 1.30 | 12.32 ± 0.95 | 2.75 ± 0.10
Π model (ours) | 9.69 ± 0.92 | 6.83 ± 0.66 | 4.95 ± 0.26 | 2.50 ± 0.07
Mean Teacher (ours) | 4.35 ± 0.50 | 4.18 ± 0.27 | 3.95 ± 0.19 | 2.50 ± 0.05
(All columns use the full set of 73257 training images.)

Table 2: Error rate percentage on CIFAR-10 over 10 runs (4 runs when using all labels)

Method | 1000 labels | 2000 labels | 4000 labels | 50000 labels
GAN [25] | | | 18.63 ± 2.32 |
Π model [13] | | | 12.36 ± 0.31 | 5.56 ± 0.10
Temporal Ensembling [13] | | | 12.16 ± 0.31 | 5.60 ± 0.10
VAT+EntMin [16] | | | 10.55 |
Supervised-only (ours) | 46.43 ± 1.21 | 33.94 ± 0.73 | 20.66 ± 0.57 | 5.82 ± 0.15
Π model (ours) | 27.36 ± 1.20 | 18.02 ± 0.60 | 13.20 ± 0.27 | 6.06 ± 0.11
Mean Teacher (ours) | 21.55 ± 1.48 | 15.73 ± 0.31 | 12.31 ± 0.28 | 5.94 ± 0.15
(All columns use the full set of 50000 training images.)

"}, {"section_index": "4", "section_name": "3 Experiments", "section_text": "To test our hypotheses, we first replicated the Π model [13] in TensorFlow [1] as our baseline. We then modified the baseline model to use weight-averaged consistency targets. The model architecture is a 13-layer convolutional neural network (ConvNet) with three types of noise: random translations and horizontal flips of the input images, Gaussian noise on the input layer, and dropout applied within the network. We use mean squared error as the consistency cost and ramp up its weight from 0 to its final value during the first 80 epochs. The details of the model and the training procedure are described in Appendix B.1.

"}, {"section_index": "5", "section_name": "3.1 Comparison to other methods on SVHN and CIFAR-10", "section_text": "The recently published version of Virtual Adversarial Training by Miyato et al. [16] performs even better than Mean Teacher on the 1000-label SVHN and the 4000-label CIFAR-10. As discussed in the introduction, VAT and Mean Teacher are complementary approaches. Their combination may yield better accuracy than either of them alone, but that investigation is beyond the scope of this paper.

We ran experiments using the Street View House Numbers (SVHN) and CIFAR-10 benchmarks [17]. Both datasets contain 32 × 32 pixel RGB images belonging to ten different classes. In SVHN, each example is a close-up of a house number, and the class represents the identity of the digit at the center of the image. In CIFAR-10, each example is a natural image belonging to a class such as horses, cats, cars and airplanes. SVHN consists of 73257 training samples and 26032 test samples. CIFAR-10 consists of 50000 training samples and 10000 test samples.

Tables 1 and 2 compare the results against recent state-of-the-art methods. All the methods in the comparison use a similar 13-layer ConvNet architecture. Mean Teacher improves test accuracy over the Π model and Temporal Ensembling on semi-supervised SVHN tasks.
Mean Teacher also improves results on CIFAR-10 over our baseline II model.\nTable 3: Error percentage over 10 runs on SVHN with extra unlabeled training data\n500 labels 500 labels 500 labels 73257 images 173257 images 573257 images H model (ours) 6.83 0.66 4.49 0.27 3.26 0.14 Mean Teacher 4.18 0.27 3.02 0.16 2.46 0.06 73257 images and labels. 73257 images and 500 Iabels 573257 images and 500 labels 101 eaas eassassst 100 10-1 model (test set) Mean teacher (student, test set) 10-2 model (training) Mean teacher (student, training). 10-3 100% model 50% model (EMA) error Mean teacher (student) eaassasonn 20% Mean teacher (teacher) 10% 5% 2%\nFigure 3: Smoothened classification cost (top) and classification error (bottom) of Mean Teacher anc our baseline I model on SVHN over the first 10ooo0 training steps. In the upper row, the training. classification costs are measured using only labeled data.."}, {"section_index": "6", "section_name": "3.2 SVHN with extra unlabeled data", "section_text": "Above, we suggested that Mean Teacher scales well to large datasets and on-line learning. In addition. the SVHN and CIFAR-1O results indicate that it uses unlabeled examples efficiently. Therefore, we wanted to test whether we have reached the limits of our approach..\nBesides the primary training data, SVHN includes also an extra dataset of 531131 examples. We picked 500 samples from the primary training as our labeled training examples. We used the rest of the primary training set together with the extra training set as unlabeled examples. We ran experiments with Mean Teacher and our baseline I model, and used either 0, 100o00 or 500000 extra examples Table3 shows the results."}, {"section_index": "7", "section_name": "3.3 Analysis of the training curves", "section_text": "Using the EMA-weighted model as the teacher improves results in the semi-supervised settings There appears to be a virtuous feedback cycle of the teacher (blue curve) improving the student (orange) via the consistency cost, and the student improving the teacher via exponential moving averaging. If this feedback cycle is detached, the learning is slower, and the model starts to overfit earlier (dark gray and light gray).\nMean Teacher helps when labels are scarce. When using 5oo labels (middle column) Mean Teache learns faster, and continues training after the I model stops improving. On the other hand, in the. all-labeled case (left column), Mean Teacher and the I model behave virtually identically..\nThe training curves on Figure[3Jhelp us understand the effects of using Mean Teacher. As expected, the EMA-weighted models (blue and dark gray curves in the bottom row) give more accurate predictions than the bare student models (orange and light gray) after an initial period..\n15% (b) 15% 15% (a) (c) ugmentation with nout 10% with 10% 10% augmentation . 5% 5% 5% no input dropout both. 0.0 0.25 0.5 0.75 0 0.10.3 1 3 10 noise noise teacher dropout consistency cost weight 15% (d) (e) consistency 15% 15% + (f) ramp-up on 10% 10% 10% off 5% 5% 5% 9 66 666 L666 O s-0t t-0I e-0T z-01 siinae ondunf MSE 9 9 8 A!p-ly % EMA decay dual output diff. cost. cons. cost function t.\nFigure 4: Validation error on 250-label SVHN over four runs per hyperparameter setting and their means. In each experiment, we varied one hyperparameter, and used the evaluation run hyperparameters of Table[1|for the rest. The hyperparameter settings used in the evaluation runs are marked with the bolded font weight. 
See the text for details.\nMean Teacher uses unlabeled training data more efficiently than the I model, as seen in the middle column. On the other hand, with 5o0k extra unlabeled examples (right column), I model keeps improving for longer. Mean Teacher learns faster, and eventually converges to a better result, but the. sheer amount of data appears to offset II model's worse predictions.."}, {"section_index": "8", "section_name": "3.4 Ablation experiments", "section_text": "To assess the importance of various aspects of the model, we ran experiments on SVHN with 25( labels, varying one or a few hyperparameters at a time while keeping the others fixed.\nRemoval of noise (Figures 4(a) and4(b)). In the introduction and Figure [1] we presented the hypothesis that the I model produces better predictions by adding noise to the model on both sides But after the addition of Mean Teacher, is noise still needed? Yes. We can see that either inpu augmentation or dropout is necessary for passable performance. On the other hand, input noise does not help when augmentation is in use. Dropout on the teacher side provides only a marginal benefit. over just having it on the student side, at least when input augmentation is in use..\nSensitivity to EMA decay and consistency weight (Figures4(c) and4(d)). The essential hyperpa rameters of the Mean Teacher algorithm are the consistency cost weight and the EMA decay a. Hov sensitive is the algorithm to their values? We can see that in each case the good values span roughly an order of magnitude and outside these ranges the performance degrades quickly. Note that EMA decay a = 0 makes the model a variation of the II model, although somewhat inefficient one because the gradients are propagated through only the student path. Note also that in the evaluation runs we. used EMA decay = 0.99 during the ramp-up phase, and a = 0.999 for the rest of the training. We chose this strategy because the student improves quickly early in the training, and thus the teache. should forget the old, inaccurate, student weights quickly. Later the student improvement slows, and. the teacher benefits from a longer memory.\nDecoupling classification and consistency (Figure4[e)). The consistency to teacher predictions may not necessarily be a good proxy for the classification task, especially early in the training. Sc far our model has strongly coupled these two tasks by using the same output for both. How would decoupling the tasks change the performance of the algorithm? To investigate, we changed the mode to have two top layers and produce two outputs. We then trained one of the outputs for classificatior and the other for consistency. We also added a mean squared error cost between the output logits, anc then varied the weight of this cost, allowing us to control the strength of the coupling. Looking at the results (reported using the EMA version of the classification output), we can see that the strongly coupled version performs well and the too loosely coupled versions do not. On the other hand, a moderate decoupling seems to have the benefit of making the consistency ramp-up redundant.\nTable 4: Error rate percentage of ResNet Mean Teacher compared to the state of the art. We report. the test results from 10 runs on CIFAR-10 and validation results from 2 runs on ImageNet\nCIFAR-10 ImageNet 2012 4000 labels 10% of the labels State of the art. 10.55 [16 35.24 0.90 [20] ConvNet Mean Teacher. 12.31 0.28 ResNet Mean Teacher. 
6.28 ± 0.15 | 9.11 ± 0.12
State of the art using all labels | 2.86 [5] | 3.79 [10]

Changing from MSE to KL-divergence (Figure 4(f)). Following Laine & Aila [13], we use mean squared error (MSE) as our consistency cost function, but KL-divergence would seem a more natural choice. Which one works better? We ran experiments with instances of a cost function family ranging from MSE (τ = 0 in the figure) to KL-divergence (τ = 1), and found out that in this setting MSE performs better than the other cost functions. See Appendix C for the details of the cost function family and for our intuition about why MSE performs so well.

"}, {"section_index": "9", "section_name": "3.5 Mean Teacher with residual networks on CIFAR-10 and ImageNet", "section_text": "In the experiments above, we used a traditional 13-layer convolutional architecture (ConvNet), which has the benefit of making comparisons to earlier work easy. In order to explore the effect of the model architecture, we ran experiments using a 12-block (26-layer) Residual Network [8] (ResNet) with Shake-Shake regularization [5] on CIFAR-10. The details of the model and the training procedure are described in Appendix B.2. As shown in Table 4, the results improve remarkably with the better network architecture.

To test whether the method scales to more natural images, we ran experiments on the ImageNet 2012 dataset [22] using 10% of the labels. We used a 50-block (152-layer) ResNeXt architecture [33] and saw a clear improvement over the state of the art. As the test set is not publicly available, we measured the results using the validation set.

"}, {"section_index": "10", "section_name": "4 Related work", "section_text": "Noise regularization of neural networks was proposed by Sietsma & Dow [26]. More recently, several types of perturbations have been shown to regularize intermediate representations effectively in deep learning. Adversarial Training [6] changes the input slightly to give predictions that are as different as possible from the original predictions. Dropout [28] zeroes random dimensions of layer outputs. Dropconnect [31] generalizes Dropout by zeroing individual weights instead of activations. Stochastic Depth [11] drops entire layers of residual networks, and Swapout [27] generalizes Dropout and Stochastic Depth. Shake-Shake regularization [5] duplicates residual paths and samples a linear combination of their outputs independently during forward and backward passes.

Several semi-supervised methods are based on training the model predictions to be consistent to perturbation. The Denoising Source Separation framework (DSS) [29] uses denoising of latent variables to learn their likelihood estimate. The Γ variant of Ladder Network [21] implements DSS with a deep learning model for classification tasks. It produces noisy student predictions and clean teacher predictions, and applies a denoising layer to predict teacher predictions from the student predictions. The Π model [13] improves the Γ model by removing the explicit denoising layer and applying noise also to the teacher predictions. Similar methods had been proposed already earlier for linear models [30] and deep learning [2].
Virtual Adversarial Training [16] is similar to the I mode but uses adversarial perturbation instead of independent noise.\nThe idea of a teacher model training a student is related to model compression [3] and distillation [9] The knowledge of a complicated model can be transferred to a simpler model by training the simpler model with the softmax outputs of the complicated model. The softmax outputs contain. more information about the task than the one-hot outputs, and the requirement of representing this\nknowledge regularizes the simpler model. Besides its use in model compression, distillation can be used to harden trained models against adversarial attacks [18]. The difference between distillatior and consistency regularization is that distillation is performed after training whereas consistency regularization is performed on training time.\nConsistency regularization can be seen as a form of label propagation [34]. Training samples tha resemble each other are more likely to belong to the same class. Label propagation takes advantage. of this assumption by pushing label information from each example to examples that are near it. according to some metric. Label propagation can also be applied to deep learning models [32]. However, ordinary label propagation requires a predefined distance metric in the input space. In. contrast, consistency targets employ a learned distance metric implied by the abstract representations. of the model. As the model learns new features, the distance metric changes to accommodate these. features. Therefore, consistency targets guide learning in two ways. On the one hand they spread the. labels according to the current distance metric, and on the other hand, they aid the network learn a better distance metric."}, {"section_index": "11", "section_name": "5 Conclusion", "section_text": "Temporal Ensembling, Virtual Adversarial Training and other forms of consistency regularization have recently shown their strength in semi-supervised learning. In this paper, we propose Mean Teacher, a method that averages model weights to form a target-generating teacher model. Unlike Temporal Ensembling, Mean Teacher works with large datasets and on-line learning. Our experiments suggest that it improves the speed of learning and the classification accuracy of the trained network. In addition, it scales well to state-of-the-art architectures and large image sizes.\nThe success of consistency regularization depends on the quality of teacher-generated targets. If the targets can be improved, they should be. Mean Teacher and Virtual Adversarial Training represent two ways of exploiting this principle. Their combination may yield even better targets. 
There are probably additional methods to be uncovered that improve targets and trained models even further."}, {"section_index": "12", "section_name": "Acknowledgements", "section_text": "We thank Samuli Laine and Timo Aila for fruitful discussions about their work, Phil Bachman, Colin Raffel, and Thomas Robert for noticing errors in the previous versions of this paper and everyone at The Curious AI Company for their help, encouragement, and ideas.."}, {"section_index": "13", "section_name": "References", "section_text": "References [1] Abadi, Martin, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mane, Dan, Monga, Rajat, Moore, Sherry Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viegas, Fernanda, Vinyals Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. 2015. [2] Bachman, Philip, Alsharif, Ouais, and Precup, Doina. Learning with Pseudo-Ensembles arXiv:1412.4864 [cs, stat], December 2014. arXiv: 1412.4864. [3] Bucilua, Cristian, Caruana, Rich, and Niculescu-Mizil, Alexandru. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535-541. ACM, 2006. [4] Gal, Yarin and Ghahramani, Zoubin. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1050-1059, 2016. [5] Gastaldi, Xavier. Shake-Shake regularization. arXiv:1705.07485 [cs], May 2017. arXiv 1705.07485.\n[6] Goodfellow, Ian J., Shlens, Jonathon, and Szegedy, Christian. Explaining and Harnessing. Adversarial Examples. December 2014. arXiv: 1412.6572. [7] Guo, Chuan, Pleiss, Geoff, Sun, Yu, and Weinberger, Kilian Q. On Calibration of Modern Neural Networks. arXiv:1706.04599 [cs], June 2017. arXiv: 1706.04599. [8] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for. Image Recognition. arXiv:1512.03385 [cs], December 2015. arXiv: 1512.03385. [9] Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the Knowledge in a Neural Network arXiv:1503.02531 [cs, stat], March 2015. arXiv: 1503.02531. [10] Hu, Jie, Shen, Li, and Sun, Gang. Squeeze-and-Excitation Networks. arXiv:1709.01507 [cs], September 2017. arXiv: 1709.01507. [11] Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep Networks with Stochastic Depth. arXiv:1603.09382 Jcs], March 2016. arXiv: 1603.09382. [12] Kingma,Diederik and Ba, Jimmy. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], December 2014. arXiv: 1412.6980. [13] Laine, Samuli and Aila, Timo. Temporal Ensembling for Semi-Supervised Learning arXiv:1610.02242 [cs], October 2016. arXiv: 1610.02242. [14] Loshchilov, Ilya and Hutter, Frank. SGDR: Stochastic Gradient Descent with Warm Restarts arXiv:1608.03983 [cs, math], August 2016. arXiv: 1608.03983. [15] Maas, Andrew L., Hannun, Awni Y., and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013..\nAdversarial Examples. December 2014. arXiv: 1412.6572. 
[7] Guo, Chuan, Pleiss, Geoff, Sun, Yu, and Weinberger, Kilian Q. On Calibration of Modern Neural Networks. arXiv:1706.04599 [cs], June 2017. arXiv: 1706.04599.
[8] He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs], December 2015. arXiv: 1512.03385.
[9] Hinton, Geoffrey, Vinyals, Oriol, and Dean, Jeff. Distilling the Knowledge in a Neural Network. arXiv:1503.02531 [cs, stat], March 2015. arXiv: 1503.02531.
[10] Hu, Jie, Shen, Li, and Sun, Gang. Squeeze-and-Excitation Networks. arXiv:1709.01507 [cs], September 2017. arXiv: 1709.01507.
[11] Huang, Gao, Sun, Yu, Liu, Zhuang, Sedra, Daniel, and Weinberger, Kilian. Deep Networks with Stochastic Depth. arXiv:1603.09382 [cs], March 2016. arXiv: 1603.09382.
[12] Kingma, Diederik and Ba, Jimmy. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], December 2014. arXiv: 1412.6980.
[13] Laine, Samuli and Aila, Timo. Temporal Ensembling for Semi-Supervised Learning. arXiv:1610.02242 [cs], October 2016. arXiv: 1610.02242.
[14] Loshchilov, Ilya and Hutter, Frank. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv:1608.03983 [cs, math], August 2016. arXiv: 1608.03983.
[15] Maas, Andrew L., Hannun, Awni Y., and Ng, Andrew Y. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.
[16] Miyato, Takeru, Maeda, Shin-ichi, Koyama, Masanori, and Ishii, Shin. Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning. arXiv:1704.03976 [cs, stat], April 2017. arXiv: 1704.03976.
[17] Netzer, Yuval, Wang, Tao, Coates, Adam, Bissacco, Alessandro, Wu, Bo, and Ng, Andrew Y. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[18] Papernot, Nicolas, McDaniel, Patrick, Wu, Xi, Jha, Somesh, and Swami, Ananthram. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. arXiv:1511.04508 [cs, stat], November 2015. arXiv: 1511.04508.
[19] Polyak, B. T. and Juditsky, A. B. Acceleration of Stochastic Approximation by Averaging. SIAM J.
Control Optim., 30(4):838-855, July 1992. ISSN 0363-0129. doi: 10.1137/0330046. [20] Pu, Yunchen, Gan, Zhe, Henao, Ricardo, Yuan, Xin, Li, Chunyuan, Stevens, Andrew, and Carin, Lawrence. Variational Autoencoder for Deep Learning of Images, Labels and Captions. arXiv:1609.08976 [cs, stat], September 2016. arXiv: 1609.08976. [21] Rasmus, Antti, Berglund, Mathias, Honkala, Mikko, Valpola, Harri, and Raiko, Tapani. Semi- supervised Learning with Ladder Networks. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 28, pp. 3546-3554. Curran Associates, Inc., 2015. [22] Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, Berg, Alexander C., and Fei- Fei, Li. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575 [cs], September 2014. arXiv: 1409.0575. [23] Sajjadi, Mehdi, Javanmardi, Mehran, and Tasdizen, Tolga. Regularization With Stochastic Trans- formations and Perturbations for Deep Semi-Supervised Learning. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 1163-1171. Curran Associates, Inc., 2016.\n[24] Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing. Systems, pp. 901-901, 2016. [25] Salimans, Tim, Goodfellow, Ian, Zaremba, Wojciech, Cheung, Vicki, Radford, Alec, and Chen. Xi. Improved techniques for training gans. In Advances in Neural Information Processing. Systems, pp. 2226-2234, 2016. [26] Sietsma, Jocelyn and Dow, Robert JF. Creating artificial neural networks that generalize. Neural networks, 4(1):67-79, 1991. [27] Singh, Saurabh, Hoiem, Derek, and Forsyth, David. Swapout: Learning an ensemble of deep. architectures. arXiv:1605.06465 [cs], May 2016. arXiv: 1605.06465. [28] Srivastava, Nitish, Hinton, Geoffrey, Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn.. Res., 15(1):1929-1958, January 2014. ISSN 1532-4435. [29] Sarela, Jaakko and Valpola, Harri. Denoising Source Separation. Journal of Machine Learning. Research, 6(Mar):233-272, 2005. ISSN ISSN 1533-7928 [30] Wager, Stefan, Wang, Sida, and Liang, Percy. Dropout Training as Adaptive Regularization.. arXiv:1307.1493 [cs, stat], July 2013. arXiv: 1307.1493. [31] Wan, Li, Zeiler, Matthew, Zhang, Sixin, Le Cun, Yann, and Fergus, Rob. Regularization of Neural Networks using DropConnect. pp. 1058-1066, 2013. [32] Weston, Jason, Ratle, Frederic, Mobahi, Hossein, and Collobert, Ronan. Deep learning via. semi-supervised embedding. In Neural Networks: Tricks of the Trade, pp. 639-655. Springer,. 2012. [33] Xie, Saining, Girshick, Ross, Dollar, Piotr, Tu, Zhuowen, and He, Kaiming. Aggregated. Residual Transformations for Deep Neural Networks. arXiv:1611.05431 [cs], November 2016 arXiv: 1611.05431. [34] Zhu, Xiaojin and Ghahramani, Zoubin. Learning from labeled and unlabeled data with label. propagation. 2002."}, {"section_index": "14", "section_name": "A Results without input augmentation", "section_text": "See table5|for the results without input augmentation\nTable 5: Error rate percentage on SVHN and CIFAR-10 over 10 runs, including the results without input augmentation. 
We use exponential moving average weights in the evaluation of all our models. All the comparison methods use a 13-layer ConvNet architecture similar to ours and augmentation similar to ours, expect GAN, which does not use augmentation.\nSVHN 250 labels 500 labels 1000 labels all labelsa GANb 18.44 4.8 8.11 1.3 H modelc 6.65 0.53 4.82 0.17 2.54 0.04 Temporal Ensembling 5.12 0.13 4.42 0.16 2.74 0.06 VAT+EntMind 3.86 Ours Supervised-only 27.77 3.18 16.88 1.30 12.32 0.95 2.75 0.10 H model 9.69 0.92 6.83 0.66 4.95 0.26 2.50 0.07 Mean Teacher 4.35 0.50 4.18 0.27 3.95 0.19 2.50 0.05 Without augmentation. Supervised-onlye 36.26 3.83 19.68 1.03 14.15 0.87 3.04 0.04 H model 10.36 0.94 7.01 0.29 5.73 0.16 2.75 0.08 Mean Teacher 5.85 0.62 5.45 0.14 5.21 0.21 2.77 0.09 CIFAR-10 1000 labels 2000 labels 4000 labels all labelsa GANb 18.63 2.32 H modelc 12.36 0.31 5.56 0.10 Temporal Ensembling. 12.16 0.31 5.60 0.10 VAT+EntMind 10.55 Ours Supervised-only 46.43 1.21 33.94 0.73 20.66 0.57 5.82 0.15 H model 27.36 1.20 18.02 0.60 13.20 0.27 6.06 0.11 Mean Teacher 21.55 1.48 15.73 0.31 12.31 0.28 5.94 0.15 Mean Teacher ResNet 10.08 0.41 6.28 0.15 Without augmentation. Supervised-only 48.38 1.07 36.07 0.90 24.47 0.50 7.43 0.06 H model 32.18 1.33 23.92 1.07 17.08 0.32 7.00 0.20 Mean Teacher 30.62 1.13 23.14 0.46 17.74 0.30 7.21 0.24 b Salimans et al. [25] d Miyato et al. [16] a 4 runs c Laine & Aila [13]\na 4 runs b Salimans et al. [25] c Laine & Aila [13 d Miyato et al. 1 e Only labeled examples and only classification cost"}, {"section_index": "15", "section_name": "B Experimental setup", "section_text": "Source code for the experiments is available at https://github.com/Curious mean-teacher"}, {"section_index": "16", "section_name": "B.1 Convolutional network models", "section_text": "We replicated the I model of Laine & Aila [13] in TensorFlow [1], and added support for Mean Teacher training. We modified the model slightly to match the requirements of the experiments, as described in subsections[B.1.1|and[B.1.2] The difference between the original 1 model described by Laine & Aila [13] and our baseline I model thus depends on the experiment. The difference between\nTable 6: The convolutional network architecture we used in the experiments.\na Not applied on SVHN experiments\nWe used cross-entropy between the student softmax output and the one-hot label as the classification. cost, and the mean square error between the student and teacher softmax outputs as the consistency. cost. The total cost was the weighted sum of these costs, where the weight of classification cost was the expected number of labeled examples per minibatch, subject to the ramp-ups described below.\nWe used different training settings in different experiments. In the CIFAR-1O experiment, we matched the settings of Laine & Aila [13] as closely as possible. In the SVHN experiments, we diverged from Laine & Aila [13] to accommodate for the sparsity of labeled data. Table[7 summarizes the differences between our experiments.."}, {"section_index": "17", "section_name": "B.1.1 ConvNet on CIFAR-10", "section_text": "We normalized the input imag es with ZCA based on training set statistics\nLayerd Hyperparameters Input 32 x 32 RGB image Translation Randomly {x,y} ~ -2, 2 Horizontal flip. Randomly p = 0.5 Gaussian noise. = 0.15 Convolutional. 128 filters, 3 3, same padding. Convolutional 128 filters, 3 3, same padding. Convolutional 128 filters, 3 3, same padding. Pooling Maxpool 2 2 Dropout p = 0.5 Convolutional. 
256 filters, 3 3, same padding Convolutional 256 filters, 3 3, same padding. Convolutional 256 filters, 3 3, same padding. Pooling Maxpool 2 x 2 Dropout p = 0.5 Convolutional. 512 filters, 3 3, valid padding Convolutional 256 filters, 1 1, same padding. Convolutional 128 filters, 1 1, same padding. Pooling Average pool (6 6 -> 11 pixels) Softmax Fully connected 128 -> 10\nour baseline I model and our Mean Teacher model is whether the teacher weights are identical to the student weights or an EMA of the student weights. In addition, the 1I models (both the original and ours) backpropagate gradients to both sides of the model whereas Mean Teacher applies them only to the student side.\nTable 6 describes the architecture of the convolutional network. We applied mean-only batch normalization and weight normalization [24] on convolutional and softmax layers. We used Leaky ReLu [15] with a = 0.1 as the nonlinearity on each of the convolutional layers.\nWe trained the network with minibatches of size 100. We used Adam Optimizer [12] for training with learning rate 0.003 and parameters 1 = 0.9, 32 = 0.999, and e = 10-8. In our baseline I model we applied gradients through both teacher and student sides of the network. In Mean teacher model the teacher model parameters were updated after each training step using an EMA with a = 0.999. These hyperparameters were subject to the ramp-ups and ramp-downs described below.\nWe applied a ramp-up period of 40000 training steps at the beginning of training. The consistency cost coefficient and the learning rate were ramped up from 0 to their maximum values, using a\nFor sampling minibatches, the labeled and unlabeled examples were treated equally, and thus the number of labeled examples varied from minibatch to minibatch..\nWe applied a ramp-down for the last 25o00 training steps. The learning rate coefficient was ramped. down to 0 from its maximum value. Adam 1 was ramped down to 0.5 from its maximum value. The ramp-downs did not improve the results, but were used to stay as close as possible to the settings of. Laine & Aila [13]."}, {"section_index": "18", "section_name": "B.1.2 ConyNet on SVHN", "section_text": "We normalized the input images to have zero mean and unit variance\nWhen doing semi-supervised training, we used 1 labeled example and 99 unlabeled examples in each mini-batch. This was important to speed up training when using extra unlabeled data. After all labeled examples had been used, they were shuffled and reused. Similarly, after all unlabeled examples had been used, they were shuffled and reused.\nWe applied different values for Adam 2 and EMA decay rate during the ramp-up period and the rest of the training. Both of the values were 0.99 during the first 40000 steps, and 0.999 afterwards. This helped the 250-label case converge reliably.\nWe trained the network for 180000 steps when not using extra unlabeled examples, for 400o00 steps. when using 100k extra unlabeled examples, and for 600000 steps when using 500k extra unlabeled ex amples."}, {"section_index": "19", "section_name": "B.1.3 The baseline ConyNet models", "section_text": "We trained the supervised-only model on CIFAR-10 for 7500 steps when using 1000 images, for 15000 steps when using 2000 images, for 30000 steps when using 4000 images and for 150000 steps when using all images. We trained it on SVHN for 40000 steps when using 250, 500 or 1000 labels. 
and for 180000 steps when using all labels.\nWe trained the I model on CIFAR-10 for 60000 steps when using 1000 labels, for 100000 steps. when using 2000 labels, and for 180000 steps when using 4000 labels or all labels. We trained it on SVHN for 100000 steps when using 250 labels, and for 180000 steps when using 500, 1000, or all. labels."}, {"section_index": "20", "section_name": "B.2.1 ResNet on CIFAR-10", "section_text": "For CIFAR-10, we replicated the 26-2x96d Shake-Shake regularized architecture described in [5 and consisting of 4+4+4 residual blocks.\nWe trained the network on 4 GPUs using minibatches of 512 images, 124 of which were labeled. We sampled the images in the same way as described in the SVHN experiments above. We augmented the input images with 4x4 random translations (reflecting the pixels at borders when necessary) and random horizontal flips. (Note that following 5] we used a larger translation size than on our earliei experiments.) We normalized the images to have channel-wise zero mean and unit variance over training data.\nWe trained the network using stochastic gradient descent with initial learning rate 0.2 and Nesterov. momentum 0.9. We trained for 180 epochs (when training with 1000 labels) or 300 epochs (wher training with 4000 labels), decaying the learning rate with cosine annealing [14] so that it would.\n' https://github.com/pytorch/pytorch\nFor training the supervised-only and H model baselines we used the same hyperparameters as for training the Mean Teacher, except we stopped training earlier to prevent over-fitting. For supervised- only runs we did not include any unlabeled examples and did not apply the consistency cost.\nTable 7: Differences in training settings between the ConvNet experiments\nhave reached zero after 210 epochs (when 1000 labels) or 350 epochs (when 4000 labels). We define epoch as one pass through all the unlabeled examples - each labeled example was included many times in one such epoch."}, {"section_index": "21", "section_name": "B.2.2 ResNet on ImageNet", "section_text": "On our ImageNet evaluation runs, we used a 152-layer ResNeXt architecture [33] consisting o 3+8+36+3 residual blocks, with 32 groups of 4 channels on the first block..\nWe trained the network on 10 GPUs using minibatches of 400 images, 200 of which were labelec We sampled the images in the same way as described in the SVHN experiments above. Following [10], we randomly augmented images using a 10 degree rotation, a crop with aspect ratio betweer 3/4 and 4/3 resized to 224x224 pixels, a random horizontal flip and a color jitter. We then normalizec images to have channel-wise zero mean and unit variance over training data.\nWe trained the network using stochastic gradient descent with maximum learning rate 0.25 and Nesterov momentum O.9. We ramped up the learning rate linearly during the first two epochs from 0.1 to 0.25. We trained for 60 epochs, decaying the learning rate with cosine annealing so that it would have reached zero after 75 epochs\nWe used a total cost function consisting of classification cost and three other costs: We used the dual output trick described in subsection |3.4 and Figure|4(e) with MSE cost between logits with coefficient O.01. We used a KL-divergence consistency cost with coefficient ramping up from O to 10.0 during the first 5 epochs, using the same sigmoid ramp-up shape as in the experiments above We also used an L2 weight decay with coefficient 5e-5. 
We used EMA decay value 0.9997.\nsemi-supervised supervised semi-supervised Aspect SVHN SVHN CIFAR-10 zero mean, zero mean, image pre-processing unit variance unit variance ZCA translation + image augmentation translation translation horizontal flip number of labeled examples per minibatch 1 100 varying training steps 180000-600000 180000 150000 Adam 2 during and after ramp-up 0.99, 0.999 0.99, 0.999 0.999, 0.999 EMA decay rate during and after ramp-up 0.99, 0.999 0.99, 0.999 0.999, 0.999 Ramp-downs No No Yes\nWe used a total cost function consisting of classification cost and three other costs: We used the dual output trick described in subsection|3.4 and Figure|4(e) with MSE cost between logits with coefficient O.01. This simplified other hyperparameter choices and improved the results. We used MSE consistency cost with coefficient ramping up from O to 100.0 during the first 5 epochs, using the same sigmoid ramp-up shape as in the experiments above. We also used an L2 weight decay with coefficient 2e-4. We used EMA decay value 0.97 (when 1000 labels) or 0.99 (when 4000 labels).\n15% (f) 10% 5% 1 3SI 1! 9 9 A!p-7y cons. cost function t\nFigure 5: Copy of Figure|4(f) in the main text. Validation error on 250-label SVHN over four runs and their mean, when varying the consistency cost shape hyperparameter t between mean squared error (t = 0) and KL-divergence (t = 1)."}, {"section_index": "22", "section_name": "B.3 Use of training. validation and test data", "section_text": "In the development phase of our work with CIFAR-10 and SVHN datasets, we separated 10% of training data into a validation set. We removed randomly most of the labels from the remaining training data, retaining an equal number of labels from each class. We used a different set of labels for each of the evaluation runs. We retained labels in the validation set to enable exploration of the results. In the final evaluation phase we used the entire training set, including the validation set but with labels removed.\nIn the ImageNet experiments we removed randomly most of the labels from the training set, retaining. an equal number of labels from each class. For validation we used the given validation set without modifications. We used a different set of training labels for each of the evaluation runs and evaluated the results against the validation set.."}, {"section_index": "23", "section_name": "Varying between mean squared error and KL-divergence", "section_text": "As mentioned in subsection[3.4] we ran an experiment varying the consistency cost function betweer MSE and KL-divergence (reproduced in Figure|5). The exact consistency function we used was\n2 T C,(p,q) = Z,DkL(p+||q), where Z. P+ = Tp+ qr = Tq + N2-2' N N\n1 DkL(pi|qi) = -qi)+O( N2-3 2\nwhere the zeroth- and first-order terms vanish. Consequently\n1 ) Pi N 2 DkL(p|q) N2\nThe results in Figure5|show that MSE performs better than KL-divergence or C, with any t. We also tried other consistency cost weights with KL-divergence and did not reach the accuracy of MSE\nOn a real-world use case we would not possess a large fully-labeled validation set. However, this setup is useful in a research setting, since it enables a more thorough analysis of the results. To the best of our knowledge, this is the common practice when carrying out research on semi-supervised learning. By retaining the hyperparameters from previous work where possible we decreased the chance of over-fitting our results to validation labels.\n1 when t -> 0 2 C-(p, DkL(p|q) when t = 1. 
N2\nThe exact reason why MSE performs better than KL-divergence remains unclear, but the form of C. may help explain it. Modern neural network architectures tend to produce accurate but overly confident predictions [7]. We can assume that the true labels are accurate, but we should discount the confidence of the teacher predictions. We can do that by having t = 1 for the classification cost and. 7 < 1 for the consistency cost. Then p- and q- discount the confidence of the approximations while. Z. keeps gradients large enough to provide a useful training signal. However, we did not perform. experiments to validate this explanation."}] |
B1lpelBYl | [{"section_index": "0", "section_name": "ACCELERATING SGD FOR DISTRIBUTED DEEP LEARNING USING APPROXIMTED HESSIAN MATRIX", "section_text": "Sebastien Arnold\nUniversity of Southern California. Los Angeles, CA-90007, USA arnolds0uic edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The Stochastic Gradient Descent (SGD) method has been shown to be well-suited for distributec. deep-learning applications (Mitliagkas et al.2016] Gupta et al.]2015} Dean et al.2012] Amode et al.2015). Typically a set of Worker nodes evaluate the gradient of the cost functional on one. or a batch of data point and perform updates to the parameters acquired (Read) from a Paramete. Server in parallel either synchronously or asynchronously. Periodically, the Workers return (Write. updated parameters to the Server. In most of cases, the average of the returned parameter values is. evaluated by the Server and is made available to Workers for the subsequent acquisitions. In some. implementations, the Workers are not required to return the gradient vectors of the cost functiona. to the server. In our method, the Workers supply the gradient evaluated at the updated parameters. to the Server. Using these gradient vectors the Server uses an approximated Hessian matrix to. produce a quasi-Newton update of the parameter which is made available to Workers. Our numerica. experimental results show that the new approach leads to accelerated convergence of the parameter. as well as, reduction of the cost functional. In some cases, the new algorithm exhibits quadrati convergence of the parameter which is characteristically associated with the Newton's method.."}, {"section_index": "2", "section_name": "2 METHOD", "section_text": "1 m 1 m 0 = 0k g = VJ(0k) m m k=1 k=1\nEquation (1) leads to G = H1O which represents key characteristics of the Hessian matrix. Note that both matrices G and O are not square matrices and not invertible in general. Therefore the above\nChunming Wang\nUniversity of Southern California Los Angeles, CA-90007, USA"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "We introduce a novel method to compute a rank m approximation of the inverse f the Hessian matrix in the distributed regime. By leveraging the differences n gradients and parameters of multiple Workers, we are able to efficiently im olement a distributed approximation of the Newton-Raphson method. We alsc oresent preliminary results which underline advantages and challenges of second rder methods for large stochastic optimization problems. In particular, our worl uggests that novel strategies for combining gradients provide further informatior on the loss surface.\nFollowing each Write operation by a Worker or all Workers in either an asynchronous or syn- chronous implementation of distributed SGD algorithm, the Server receives the updated parameters Ok E Rn and the estimated gradient J(0k) from Workers k = 1, . .. , m. An approximation of the. Hessian matrix H.1 can be obtained by requiring the following equality:.\nVJ(0k)-J(0)=Hj(0k-0),Vk,j=1,,m.\nequality does not uniquely define the matrix HJ. Our objective is to find a rank p approximation H-' of the inverse of the Hessian matrix and to generate an update to the parameter\nOnew=0-THF'g\nz E span{uk,k = 1,...,j} z E span{uk,k = 1,...,j}\nQk=zHuk,k=1,.,J Z = QkUk k=1\nWe underline that while this presentation follows the parameter server (Li et al.f2014) semantics. 
our technique is easily adapted to the tree-reduction framework (Iandola et al.|2015). In fact, it car be seen as a more sophisticated reduction of the gradients across Workers, as opposed to a simple. averaging. Our MPI-based implementation consists of a large All-to-All broadcast of the parameter and gradients, followed by the computations presented above. With n parameters and m replicas. this method has a space complexity on the order of O(mn) and its time complexity is O(m3 + m. n)\nwhere the integer j is satisfies ; Xo1, j+1 < Xo1 for a selected value 0 < X < 1. In particular for any z E Rn such that\n-1 z = z k=1\nNote that when j = 0, equation (2) is equivalent to the standard SGD method. The algorithm does require computation of the eigenvalues and eigenvectors of the m m matrix GHG. For small enough number of Workers, this represents a relative minor computational effort.\nMNIST CIFAR-10 2.5 DistNewton-2 DistNewton-2 DistNewton-4 2.2 DistNewton-4 DistNewton-6 DistNewton-6 2.0 DistNewton-8 DistNewton-8 SGD SGD 2.0 SGD-relu 1.5 TIN 1.8 1.0 0.5 1.6 0.0 1.4 10 20 30 40 50 10 20 30 40 50 Epochs Epochs\nFigure 1: Convergence curves on MNIST and CIFAR-10. DistNewton-m denotes the use of m Workers. In both experiments we notice an improvement in convergence as m increases. (ie, more gradients are used to compute H-1). On the CIFAR-10 experiment we also plot the SGD ReLU performance, which outperforms tanh activations and diverged using our method.\nagainst SGD. Furthermore, we restrict our study to the synchronous case. Since we are only inter ested in the optimization performance, we keep most of our hyper-parameters constant, including learning rates (0.0003 and 0.01), activation functions (ReLU (Nair and Hinton] 2010) and tanh), and a global batch size of 256. Our model is a 5 layer convnet with about 16'000 parameters, which we train each time for 50 epochs. We report the negative log-likelihood on the train dataset at every epoch, and for up to 8 Workers. Note that since the global batch size is fixed, the SGD convergence curves are identical and thus only reported once.\nOur results clearly demonstrate convergence improvements as we scale to a larger number of Work-. ers, and consistently outperforms stochastic gradient descent in the most distributed case. Inter- estingly the latter is true even with a relatively small number of Workers, as we observe a much faster convergence with m = 4 in both experiments. However, when the number of Workers is not sufficient to properly estimate the most influential singular values the method converges to poor. minimas and is slower than distributed SGD. This effect underlines the importance of the number of. eigenvalues considered which is defined through the parameter j..\nAdditionally, our method suffers of the limitations of Newton's method. For example, several exper iments diverged when using too large a learning rate, whereas this was beneficial to the convergence rate of SGD. Another downside is related to the use of ReLU activations; a good enough estimate o1 the Hessian results in numerical errors as the second derivative becomes 0. However, ReLUs have been widely successful in the computer vision domain (Krizhevsky et al.2012a] He et al.]2016) and usually outperform other non-linear activations. This is demonstrated by the SGD-relu curve in the CIFAR-10 experiment. Finally, as pointed byDauphin et al.(2014), even when including second-order information. 
iterative methods such as SGD or Newton method can be slowed dowr by saddle-points surrounded by plateaus\nWe note that previous work has suggested approaches to tackle some of those limitations. In par-. ticular, LeCun et al. derived an optimal formulation of the learning rate, assuming knowledge of the largest singular value Omax: Topt Since we directly approximate Omax we can trivially. adapt the learning rate to be upper-bounded by this approximate optimal at every update. In addition to underlining the saddle-point issue,Dauphin et al.(2014) also proposed a counter-measure: con-. sidering the absolute value of the Hessian's singular values. This approach comes as an artifact of our suggested method, since we approximate the inverse of the Hessian using only positive singular. values.\nIn this work, we introduced a Quasi-Newton method specifically designed for the distributed regime On preliminary small-scale experiments, our method largely outperforms stochastic gradient descent when the number of Workers allow for a good approximation of the inverse of the Hessian. Our results suggest that our method is effectively taking advantage of the second-order information of the optimization problem.\nMore importantly, this work suggests that alternative strategies for combining Workers' gradients. will provide superior convergence rates than a simple averaging. Intuitively this results from the ob. servation that each Worker is exploring a different region of the loss surface and thus the aggregated. information will provide a better understanding than averaged local statistics..\nFinally, we would like to indicate that our work is preliminary and more comprehensive investigation into the potential of this approach is required. In particular, we want to further define criteria to. identify cases for which this approach offers maximum benefits.\nSponsorship of the Living With a Star Targeted Research and Technology NASA/NSF Partnership for Collaborative Space Weather Modeling is gratefully acknowledged."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior,. Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in neural information processing systems, pages 1223-1231, 2012.\nYann LeCun, Patrice Y Simard, and Barak Pearlmutter. Automatic learning rate maximization b on-line estimation of the hessians eigenvectors..\nMu Li, David G Andersen, and Jun Woo Park. Scaling distributed machine learning with the param eter server. 2014.\nIoannis Mitliagkas, Ce Zhang, Stefan Hadjis, and Christopher Re. Asynchrony begets momentum with an application to deep learning. arXiv preprint arXiv:1605.09774. 2016.\nSuyog Gupta, Wei Zhang, and Josh Milthrope. Model accuracy and runtime tradeoff in distributed deep learning. arXiv preprint arXiv:1509.04210, 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog. nition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition pages 770-778, 2016."}] |
SyuncaEKx | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "INTRODUCTTON Supervised deep learning architectures have been successfully used in a wide range of object classifi. cation tasks showing impressive results while discriminating between a lot different visual concepts. However, as powerful as such models are, one issue is that they are optimized to predict from a. restricted set of categories. So that, if a new data item belonging to an unknown category comes. at the prediction phase, the model systematically predicts a known label with a possibly high con. fidence. In a practical application, we rather would like to detect that the item does not belong tc. any of the known classes so as to inform the user about this mismatch. This fundamental novelty. detection problem has received a lot of attention in the machine learning community (a comprehen. sive review can for instance be found in (Pimentel et al.[2014)). Novelty detection methods can. be roughly classified into probabilistic approaches (parametric or nonparametric), distance-based approaches (e.g. nearest neighbors-based or clustering-based), reconstruction-oriented approaches. (e.g. neural network-based or subspace-based) and domain-based approaches (e.g. one-class sup port vector machines). In this paper, we chose to investigate how unsupervised and semi-supervisec deep learning methods could help solving the novelty detection problem. Basically, such methods. Hinton et al.]2006} Bengio & LeCun]2007] Vincent et al.]2010] Rifai et al.]2011} Goodfellow et al.2014] Makhzani et al.2015) aim at disentangling and capturing the explanatory factor of variation of the data. This can be seen as modeling the data generating distribution/process. By. doing so, we expect the system to learn the manifold on which the training data lies and to gener. alize on it. By extension, we expect that new data items belonging to unknown classes won't be. well captured and that the generative model will fail to reconstruct them accurately. In this paper. we focus in particular on Adversarial Autoencoders (AAE) (Makhzani et al.]2015) that have the. advantage to explicitly control the distribution of the known data in the feature space, so that it is. possible to quantify the likelihood that an image belongs to the manifold of the known training data. We explore the use of both unsupervised and supervised prior distributions and we introduce a new. variant that explicitly models a rejection class in the latent space. Experiments on MNIST dataset. show that this variant provides better novelty detection performance than classical autoencoders and. adversarial autoencoders.\nBaseline: reconstruction-based novelty detection through autoencoders: Using the reconstruc. tion error of a generative model is a well known novelty detection method (Pimentel et al.|2014 Thompson et al.l 2002). The higher the reconstruction error of an item is, the farther from the man-. ifold of the known training data it is expected to be. As a baseline novelty detection method, we. thus suggest to use the reconstruction error of a (deep) autoencoder. 
The autoencoder we used in our."}, {"section_index": "1", "section_name": "ADVERSARIAL AUTOENCODERS FOR NOVELTY DE- TECTION", "section_text": "Alexis Joly\nCXsJOn INRIA Zenith ajoly @inria.fr"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "experiment for evaluating this baseline is the one described in (Makhzani et al.]2015) with 3 fully connected layers for both the encoder and decoder. The ReLU activation function is used for all layers except for the last layer of the encoder that is linear and the last layer of the decoder that uses a sigmoid activation function. The reconstruction error is the Euclidean distance between a sample of the input space and its reconstruction and is directly used as the novelty detection criterion:\nAdversarial Autoencoders for Novelty Detection: An adversarial autoencoder (AAE) is a proba bilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to per form variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution (Makhzani et al.||2015). The decoder of the adver sarial autoencoder learns a deep generative model that maps the imposed prior to the data distribu tion. In this paper, we explore in what way AAEs might be useful for the novelty detection problem Therefore, we define a new novelty detection criterion based on the likelihood of a candidate sampl. according to the imposed prior:\nwhere p(z) is the imposed prior distribution, i.e. the higher p(f(x)) and the more likely x belong. to the training data distribution. In our experiments, we focus on two prior distributions:.\nAdversarial Autoencoder with an explicit rejection class: It is important to notice that the like lihood p(f(x)) of an item x does not model the real probability that it belongs to the manifold of the known data. Considering this likelihood as a probability is in fact a case of prosecutors fallacy since p(f(x)) should be rather noticed p(f(x)[y(x) = 1) where y(x) is a binary function indicating whether x belongs to one of the known classes or not. Then. what we would like to estimate is rather\np(f(x)|y(x) = 0).p(y = 0) p3(x)=p(y(x) =0|f(x)) = p(f(x)|y(x) =1).p(y=1)+p(f(x)|y(x)=0).p(y=0)\nBut since we don't know anything about the conditional likelihood of the unknown data p(f(x)(y(x) = 0) (and about the novelty rate p(y = O)), we can not estimate that probability. T try overcoming this issue, we propose to explicitly add a novelty class to the prior distributior of the AAE. More precisely, we model the unknown data by a normal distribution at the centei of the latent space (i.e. p(f(x)[y(x = 0) = N(0, I)) and we still model the known data by a mixture of non-centered Gaussian functions p(f(x)|y(x = 1) = , p(z|C). Then, we enforce the autoencoder to map the unknown data space onto p(f(x)[y(x = 0) = N(0, I)) by adding to the training set some random images and by passing the known/unknown label y(x) E 0, 1 as input tc the discriminator as any other class label."}, {"section_index": "3", "section_name": "3 EXPERIMENTS", "section_text": "Protocol and settings: We evaluated the novelty detection methods described above on MNIST handwritten digit dataset (LeCun et al.]1998) (10 classes, each with approximately 600 images fo training and validation and 100 images for testing). To fit with our novelty detection scenario, we removed from the training set 3 of the 10 classes (the '2', '5' and '7' digits). 
We then computec\n01(x)=x-g(fx))]\n02x)=1-p(f(x))\nNormal distribution: p(z) = N(0,I). This is the default distribution used in Makhzani. et al.[(2015) to ensure that generating from any part of the prior space results in meaningful samples. Gaussian mixture: p(z) = , p(z|Ct) with p(z|C) = N(i, i). This is the prior dis-. tribution suggested in to handle supervision or semi-supervision. To ensure the mapping. between the labels of the training data items and the classes of the Gaussian mixture, it is. required to pass as input of the adversarial discriminator a one-hot vector coding the label of z in addition to z itself. Complementary to this Supervised Gaussian mixture prior, in our experiments, we also evaluated the case of an Unsupervised Gaussian mixture by removing. the label's one-hot vector from the input of the adversarial discriminator. This allows us to evaluate separately the benefit of the Gaussian mixture (over the normal distribution), and. the benefit of the supervision.\nour novelty detection criteria (p1, P2 and p3) on the entire test set and we measured the performance. through a Mean Average Precision (mAP). A mAP that is equal to 1 means that all the test images. of digits '2', '5' and '7' (unknown classes) have a higher p value than the images of the known digits. All autoencoders have been trained through back-propagation using the Nesterov momemtum. solver (Sutskever et al.2013) using a learning rate and a momentum parameter respectively set tc 0.1 and 0.9. We iterated over 2000 epochs (without validation) using mini-batches of 128 images For the AAE, each epoch includes 3 steps: (i) reconstruction optimization phase, (ii) discriminator. optimization phase and (iii) generator optimization phase..\nResults: Table [1 and Figures 1 and 2] provide the results of our experiments when using a dimensional latent space for the autoencoders. This is of course not an optimal feature dimension terms of performance but this allows visualizing how the known and unknown test samples are di tributed in the latent space. The results first show that fully unsupervised Adversarial Autoencode do not perform better than baseline autoencoder (using the reconstruction error criterion p1). Loo ng at Fig1 (b) and Fig1(c), the main reason is that the unknown samples are mapped according the same prior distribution than the known samples. As stated in section|2] it is actually not becau the known data items are enforced to lie in the dense regions of the prior, but that any data item such regions is belongs to the manifold of known data. The second major conclusion is that the AAE using a supervised GMM prior clearly outperforr the baseline autoencoder (contrary to the AAE using the unsupervised GMM prior). As shown Fig1 (d), the addition of the supervision enforces the encoder to map the unknown data items awa from the known classes, and, by default, at the center of the feature space. Actually, the center the latent space seems to act as an attractor of the default open space. This might be related to tl fact that, whatever the used prior distribution, randomly generated images are distributed accordir to a normal distribution at the center of the feature space (because of the central limit theorem). The third conclusion of this preliminary study is that the likelihood-based and posterior-based no elty detection criteria are less effective than the reconstruction error criteria. But this has to mitigated for several reasons. 
First, this might be specific to the MNIST dataset for which tl original image space is already very well shaped so that the L2-distance between an image and reconstruction is semantically meaningful. But this might not be the case for more complex da hat would require to capture more invariance and spatial structures (e.g. using ConvNets). We c expect that the likelihood-based and posterior-based criteria would be less sensitive to such high complexity than the reconstruction error. Another advantage is to enable a normalized and we nterpretable novelty score, in particular the posterior-based criterion that is a real probability.\nResults: Table [1and Figures [1|and 2provide the results of our experiments when using a 2 dimensional latent space for the autoencoders. This is of course not an optimal feature dimension i terms of performance but this allows visualizing how the known and unknown test samples are di tributed in the latent space. The results first show that fully unsupervised Adversarial Autoencodei do not perform better than baseline autoencoder (using the reconstruction error criterion p1). Look ing at Fig1 (b) and Fig1(c), the main reason is that the unknown samples are mapped according t the same prior distribution than the known samples. As stated in section2] it is actually not becaus the known data items are enforced to lie in the dense regions of the prior, but that any data item i such regions is belongs to the manifold of known data. The second major conclusion is that the AAE using a supervised GMM prior clearly outperforr the baseline autoencoder (contrary to the AAE using the unsupervised GMM prior). As shown i\nNovelty detection criterion Representation Learning Methods reconstruction likelihood (p2) Figures error (p1) or posterior (p3) (Appendix) autoencoder 0.71 1(a) and 2(a) Normal distribution. 0.68 0.35 (p2) Fig 1 (b) Adversarial Unsupervised GMM 0.64 0.41 (p2) Fig 1 (c) autoencoder Supervised GMM 0.82 0.82 (p2) Fig 1 (d) Adversarial autoencoder Supervised GMM 0.89 0.83 (p3) Fig 1 (e) + rejection\nSupervised GMM"}, {"section_index": "4", "section_name": "4 CONCLUSION", "section_text": "In this preliminary study, we investigated the use of Adversarial autoencoders for the hard problen of novelty detection. We did show that imposing a supervised prior distribution can help mapping the unknown items away from the known classes but that it is still theoretically not possible tc control their distribution in the feature space. Overall, we believe this remains an open question that requires to first understand whether novelty should be conceptualized as unusual recombination oi elements of prior knowledge or not.\n0.83 (p3)\nTable 1: mAP of the different novelty detection methods"}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. In Large Scale Kernel Machines MIT Press, 2007.\nGeoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neura Computation, 18:1527-1554, 2006\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to documen recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.\nMarco AF Pimentel, David A Clifton, Lei Clifton, and Lionel Tarassenko. A review of novelty detection Signal Processing, 99:215-249. 2014\nSalah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. 
In Proceedings of the 28th international conference on machine learning (1CML-11), pp. 833-840, 2011.\nIlya Sutskever, James Martens, George E Dahl, and Geoffrey E Hinton. On the importance of initialization an momentum in deep learning. 1CML (3), 28:1139-1147, 2013.\nBenjamin B Thompson, Robert J Marks, Jai J Choi, Mohamed A El-Sharkawi, Ming-Yuh Huang, and Carl Bunje. Implicit learning in autoencoder novelty assessment. In Neural Networks, 2002. IJCNN'02. Pro- ceedings of the 2002 International Joint Conference on, volume 3, pp. 2878-2883. IEEE, 2002\nPascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371-3408, 2010.\nAlireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoen coders. arXiv preprint arXiv:1511.05644, 2015.\nprojected onto latent space of autoencoder projected onto latent space of autoencoder. projected onto latent space of autoencoder 120 15 100 23) 10 80 60 4O 20 40 15 60 40 20 20 40 6O 10 10 (a) Autoencoder (b) AAE Normal distribution. (c) AAE unsupervised GMM projected onto latent space of autoencoder projected onto latent space of autoencoder 15 15 0 1 1 2 2 10 10 3 3 4 5 5 5 5 6 8 9X 9 0 0 -5 5 -10 10 15 15 10 -5 0 5 10 15 10 -5 0 5 10 15 (d) AAE supervised GMM (e) AAE supervised GMM+reject\n120 projected onto latent space of autoencoder 15 projected onto latent space of autoencode 100 LO 80 60 10 20 10 20 4 15 60 40 20 0 20 40 3 1 10 -5 O 10 0 15\nFigure 1: Visualization of the test samples in the latent space (unknown digits are '2' (dark blue) 5' (yellow) & '7' (green))\n(a) Autoencoder (b) AAE Normal distribution (c) AAE unsupervised GMM autoencode 3 3 M 6 8 9 9 9 8 9 9 8 8 9 6 6 8 8 8 8 8 8 Oo\nFigure 2: Visualization of the reconstructed latent spaces plotted on Figure[1] Images were obtained by uniformly sampling vectors in the latent space and by feeding them to the decoder function. When no supervision is used, the whole test set is learned to be reconstructed as images coming from classes of the training set. We can see in Fig (2e) that learning to reconstruct the noisy images coming from the rejection class while using the supervision to shape the latent space allows us to push the images of the novel classes toward the regions that are reconstructed as noisy images."}] |
ByH2gxrKl | [{"section_index": "0", "section_name": "ACCELERATING EULERIAN FLUID SIMULATION WITH CONVOLUTIONAL NETWORKS", "section_text": "Jonathan Tompson\nGoogle Brain\nEfficient simulation of the Navier-Stokes equations for fluid flow is a long standing. problem in applied mathematics, for which state-of-the-art methods require large. compute resources. In this work, we propose a data-driven approach that leverages the approximation power of deep-learning with the precision of standard solvers to. obtain fast and highly realistic simulations. Our method solves the incompressible Euler equations using the standard operator splitting method, in which a large. linear system with many free-parameters must be solved. We use a Convolutional. Network with a highly tailored architecture, trained using a novel unsupervised learning framework to solve the linear system. We present real-time 2D and 3D simulations that outperform recently proposed data-driven methods; the obtained. results are realistic and show good generalization properties.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The dynamics of a large number of physical phenomenon are governed by the incompressibl Navier-Stokes equations. In this work, we follow the Eulerian viewpoint for simulating these equa tions, which approximates quantities on a regular grid (Foster & Metaxas||1996). Euler methods are able to produce precise results simulating fluids like water or smoke, at the cost of a high computa tional load.\nThe most demanding portion of this method is solving the discrete Poisson equation, which enforce. he incompressibility condition. Exact solutions can be found using the Preconditioned Conjugate Gradient (PCG) algorithm or via stationary iterative methods such as the Jacobi or Gauss-Seide methods. A number of numerical methods have been proposed to mitigate this limitation for offlin applications, notably multi-grid approximations (McAdams et al.] 2010). However, in real-time Jacobi iterations are truncated before reaching convergence, rendering these methods inexact anc he obtained velocity fields divergent. A natural approach is to tackle the problem in a data-driver manner, adapting the solver to the specifics of the data of interest. For instance, by operating on epresentation of the simulation space of significantly lower dimensionality (Treuille et al.|2006 De Witt et al.2012). More recently, approaches have been proposed which train black-box machine learning systems to predict the output produced by an exact solver, e.g. using random regressior forests (Ladicky et al.]2015) or neural networks (Yang et al.]2016) for Lagrangian and Euleriar methods respectively. A major limitation of these methods is that they require a dataset of linea system solutions provided by an exact solver. Hence, targets cannot be computed during training anc models are trained to predict the ground-truth output always starting from an initial frame produce y an exact solver, while at test time this initial frame is actually generated by the model itself. Thi. discrepancy between training and simulation can yield errors that can accumulate quickly along the generated sequence. Additionally, the ConvNet architecture proposed by Yang et al. is not suited t our more general use-case; in particular it cannot accurately simulate long-range phenomena, sucl as gravity or buoyancy. 
While providing encouraging results that offer a significant speedup ove their PCG baseline, their work is limited to data closely matching the training conditions (as we wil discuss in Section3\nThe contributions of this work are as follows: (i) the learning task can be phrased as a completely. unsupervised learning problem; since obtaining ground-truth data is no longer necessary, we car. incorporate loss information from a composition of multiple time-steps and perform various forms oj. non-trivial data-augmentation. (ii) we propose a collection of domain-specific ConvNet architectura. optimizations motivated by the linear system structure itself, which lead to both qualitative anc\nKristofer Schlachter & Pablo Sprechmann & Ken Perlin\nAlgorithm 1: Velocity Update. 1: Advection and Force Update to calculate u+ : 9t 2: (optional) Advect scalar components through ut -1- 3: Self-advect velocity field ut - 1 Advect V.u Ut- conU Add external forces fbody. 4: 5: Add vorticity confinement force fvc. 6: Set normal component of solid-cell velocities. 7: Pressure Projection to calculate ut:. 8: Solve Poisson eqn, V2pt = u, to find pt 9: Apply velocity update ut = ut-1 - 1Vpt P Figure 1: Model architecture. 3x3x3 Conv 3x3x3 Conv 3x3x3 Conv 1x1x1 Conv 1x1x1 Conv 20 Jacobi 34 iterations velocity divergence ReLU ReLU ReLU ReLU ReLU this work: small model. this work: multi-frame. this work: single frame. 12 15 10 pressure 5 Poolir geom 0 10 20 30 40 50 60 Timestep\nquantitative improvements. (iii) the proposed simulator is stable and permits real-time simulatior showing good generalization properties to unseen settings\nAn alternative to our approach is learning an end-to-end mapping that predicts the velocity fielc directly at each time-step. We argue that our hybrid approach restricts the learning task to a stable projection step, relieving the need for modeling the well understood advection and external body forces and enabling the use of enhancing tools such as vorticity confinement. In addition to th above the technical contributions, we contribute a dataset that can be of interest for people working on real-time simulations and as a benchmarking framework for end-to-end approaches."}, {"section_index": "2", "section_name": "2 MODEL", "section_text": "Vhen a fluid has zero viscosity and is incompressible it can be modeled by the Euler equations\ndu -u.Vu subject to, V:u=0 dt\nwhere u is the velocity (a 2D or 3D vector field), t is time, p is the pressure (a scalar field), f is the summation of external forces applied to the fluid body (buoyancy, gravity, etc) and p is fluid density. We numerically calculate all partial derivatives using finite difference (FD) methods on a MAC. grid Harlow & Welch|(1965). Equations[1|can be solved via the standard operator splitting methoc described in Algorithm 1] At a high level, step[1|ignores the pressure term (p of (1) to create. an advected velocity field, ut, which includes unwanted divergence (see see (Selle et al.2008). for details), and then step[7|solves for pressure, p, to satisfy the constraint in (1. This produces a. divergence free velocity field, ut. In addition, we use vorticity confinement (Steinhoff & Underhill 1994) to counteract unwanted numerical dissipation. Step (8|is computationally demanding as i sparse linear system Apt = b, where A is referred to in the literature as the 5 or 7 point Laplaciar. matrix (for 2D and 3D grids respectively). After solving for pressure, the divergence free velocity i. 
calculated by subtracting the FD gradient of pressure, ut = u+ Vp.\nWe propose a learned approximate inference mechanism to find fast and efficient solutions to the. linear system Ap, = b. The key observation is that, while there is no closed form solution, the function mapping input data to the optimum of an optimization problem is deterministic. Therefore. one can attempt to approximate it using a powerful regressor such as a deep neural network. A block diagram of our high-level model architecture is depicted in Figure 1] and shows the computational blocks required to calculate ut for a single time-step. The advect block is a fixed function uni. solving step[1[of Algorithm[1] Then we add the body and vorticity confinement forces and obtair the divergence of the velocity field . u+ which, along with geometry, is fed through a multi-stage. ConvNet to produce pt. We then calculate the pressure divergence, and subtract it from the divergent. velocity to produce ut. Note that the only block with trainable parameters is the ConvNet model\nFigure 3: Test-set E ( . , ) versus time-step\nWe define an objective function and formulate the inference solution as an unsupervised machine learning task where the loss function is given by,.\nfobj=wi{Vut}- W\nWhere u, and pt are the predicted divergence free velocity and pressure fields respectively and w.. is a per-vertex weighting term which emphasizes the divergence of voxels on geometry boundaries Note that the bottle-neck architecture in the ConvNet avoids obtaining trivial solutions..\nThe internal structure of the ConvNet architecture is shown in Figure2] It consists of 5 stages of convolution (spatial or volumetric) and Rectifying Linear layers (ReLU). The convolutional operato. itself mimics the local sparsity structure of our linear system. However a single resolution network would have limited context, which limits the network's ability to model long-range external forces (such as gravity or buoyancy). As such, we add multi-resolution features to enable modeling long range physical phenomenon, processing each resolution in parallel then upsampling the resultant low resolution features before accumulating them.\nTo implement the model of [Yang et al.(2016) for comparison, we rephrase their fully-connecte. architecture as an equivalent, but significantly faster, sliding window model (on a 96x128x96 gric. Yang et al. report 515ms/frame, while our implementation takes 9.4ms/frame). Unfortunately, the. loss function fails to learn an accurate projection on our dataset. This is because our diverger. velocity frames include gravity and buoyancy terms, which result in a high amplitude, low frequenc. gradient in the ground-truth pressure. The small 3x3x3 context of the Yang et al. model cannc. infer such low frequency output, which dominates the loss function and results in over-training. By contrast, our unsupervised objective minimizes divergence after the pressure gradient operato. whose FD calculation acts as a high-pass filter. This is a significant advantage; our objective functio. is \"softer\"' on the divergence contribution for phenomena that the network cannot easily infer. Fc. the remaining experimental results, we will evaluate an improved version of the Yang et al. model a. our \"small model'' (i.e. 
a single resolution with only 3x3x3 context, trained using the loss functiol top level architectural improvements and training procedure of this work)..\nFor fair quantitative comparison of output residual, we choose the number of Jacobi iterations (34) to approximately match the FPROP time of our network. PCG is orders of magnitude slower at all resolutions. The \"small-model'' provides a significant speedup over other methods. The runtime for the PCG, Jacobi, this work, and the \"small model' are 2521ms, 47.6ms, 39.9ms and 16.9ms respectively. See Appendix B|for details, including timing as a function of resolution in Figure[5\nWe simulated a 3D smoke plume using our system and baseline methods (visual results are shown in Appendix C] figures [6|and[7)] Note that this boundary condition is not present in the training set; it is a difficult test of generalization performance. Qualitatively, the PCG and 100-iteration Ja- cobi solvers and our network produce visually similar results. The \"small model\"', cannot accurately simulate the large vortex under the plume, and as a result the plume rises too quickly and exhibits density blurring. Similarly the Jacobi method, when truncated early at 34 iterations, introduces im plausible high frequency noise and has an elongated shape due to inaccurate modeling of buoyancy Both ConvNet based methods lose some smoke density inside the arch model due to residual neg ative divergence at the fluid-geometry boundary. The maximum residual norm was <1e-3, 1.235. 1.966, 0.872 for the PCG, Jacobi, small model and this work respectively.\nAs a test of long-term stability, we record the mean residual norm (E (| : ,D) across all samples in our test-set for each frame after the initial condition, shown in Figure[1] Our model outperforms the small model (Yang et al. sizing), and is competitive with Jacobi. We also present the results of our model when a single time-step loss is used; without the multi-frame loss, single time-step accuracy is degraded, and the divergence increases over time as error is accumulated."}, {"section_index": "3", "section_name": "REFERENCES", "section_text": "Francis H. Harlow and J. Eddie Welch. Numerical calculation of time-dependent viscous incom pressible flow of fluid with free surface. Physics of Fluids, 8(12):2182-2189, 1965.\nTheodore Kim, Nils Thurey, Doug James, and Markus Gross. Wavelet turbulence for fluid sim ulation. ACM Trans. Graph., 27(3):50:1-50:6, August 2008. ISSN 0730-0301. doi: 10.1145 1360612.1360649. URLhttp://doi.acm.0rg/10.1145/1360612.1360649\nPatrick Min. Binvox utility v1.22. 2016\nTobias Pfaff and Nils Thuerey. Mantaflow fluid simulator. http : / /mant af low . com/\nAndrew Selle, Ronald Fedkiw, Byungmoon Kim, Yingjie Liu, and Jarek Rossignac. An uncondi tionally stable maccormack method. J. Sci. Comput., 35(2-3), June 2008.\nJohn Steinhoff and David Underhill. Modification of the euler equations for vorticity confinemen application to the computation of interacting vortex rings. Physics of Fluids (1994-present), 6(8). 2738-2744. 1994.\nCheng Yang, Xubo Yang, and Xiangyun Xiao. Data-driven projection method in fluid simulation Computer Animation and Virtual Worlds, 27(3-4):415-424, 2016.\nJiantao Pu and Karthik Ramani. On visual similarity based 2d drawing retrieval. Comput. Aided Des., 38(3):249-259, March 2006. ISSN 0010-4485. doi: 10.1016/j.cad.2005.10.009. URL h++ 0 09\nAdrien Treuille, Andrew Lewis, and Zoran Popovic. Model reduction for real-time fluids. ACM Trans. Graph., 25(3):826-834, July 2006. 
ISSN 0730-0301. doi: 10.1145/1141911.1141962 URLhttp://doi.acm.0rq/10.1145/1141911.1141962\nFigure 4: Some of the 3D Model used in our dataset"}, {"section_index": "4", "section_name": "A DATASET CREATION AND MODEL TRAINING", "section_text": "We use synthetic data generated using an offline 3D solver, mantaflow Pfaff & Thuerey- an open source research library for solving incompressible fluid flow. We then seed this solver with initia condition states generated via a simple procedure using a combination of i. a pseudo-random tur- bulent field to initialize the velocity ii. a random placement of geometry within this field, and iii. procedurally adding localized input perturbations. We will now describe this procedure in detail.\nNext, we generate an occupancy grid by selecting objects from a database of models and randomly scaling, rotating and translating these objects in the simulation domain. We use a subset of 100 objects from the NTU 3D Model Database Pu & Ramani (2006); 50 models are used only when generating training set initial conditions and 50 models are used when generating test samples. Figure|4|shows a selection of these models. Each model is voxelized using the binvox library|Min (2016). For generating 2D simulation data, we simply take a 2D slice of the 3D voxel grid..\nFinally, we simulate small divergent input perturbations by modeling inflow moving across the ve locity field using a collection of emitter particles. We do this by generating a random set of emitter. (with random time duration, position, velocity and size) and adding the output of these emitters tc the velocity field throughout the simulation.\nWith the above initial conditions defined, we use manta to calculate u* by advecting the velocity field and adding forces. We also step the simulator forward 256 frames (using Manta's PCG-based solver), recording the divergent velocity every 8 frame steps.\nUsing the above procedure, we generate a training set of 320 \"scenes\"' (each with a random initial condition) and a test set of an additiona1 320 scenes. Each \"scene' contains 32 frames, each 0.8 sec- onds apart. We use a disjoint set of geometry for the test and training sets to test generalization per- formance. We will make this dataset public (as well as the code for generating it) for future research use. All materials are located at ht tp : cims.nyu.edu/~schlacht/CNNFluids.htm\nFigure 5] shows the computation time of the Jacobi method, the small-model version (with Yang. et al. sizing) and this work. This runtime includes the pressure projection steps only: including velocity divergence calculation, the linear system solve, and the velocity update. Note that for\nWhile we do not need ground-truth label information to train the ConvNet model of Section[2] we need a collection of ground-truth pressure solutions to evaluate the precision of our model, and ad- ditionally our model does benefit from an efficient sampling of \"realistic\" initial conditions. That is, the space of all divergent velocity fields is unconstrained, and so our network's generalization per formance is improved when using a dataset of natural initial conditions that approximately samples the manifold of real-world fluid simulation states. To this end, we propose a procedural method to\nFirstly, we use the wavelet turbulent noise of Kim et al.[(2008) to initialize a pseudo-random, diver-. gence free velocity field. 
At the beginning of each simulation we randomly sample a set of noise parameters (uniformly sampling the wavelet spatial scale and amplitude) and we generate a random seed, which we then use to generate the velocity field..\njacobi (34 iterations) this work - small model 102 this work. su) ) nwnnnne 10' 100 2^4 2^5 2^6 2^7 2^8 resolution\nFigure 5: Runtime Vs. grid resolution (PCG omitted for clarity)\nfair quantitative comparison of output residual (shown in Section 3 of the paper), we choose the number of Jacobi iterations (34) to approximately match the FPROP time of our network. Since. the asymptotic complexity as a function of resolution is the same for Jacobi and our ConvNet, the FPROP times are equivalent. We use an NVIDIA Titan X GPU with 12GB of ram and an Intel Xeon E5-2690 CPU. PCG is orders of magnitude slower at all resolutions and has been left off for clarity.. The model of Yang et al. provides a significant speedup over other methods. The runtime for the PCG, Jacobi, this work, and Yang et al. at 1283 grid resolution are 2521ms, 47.6ms, 39.9ms and. 16.9ms respectively.\nThis appendix shows rendered frames for the proposed method as well as baseline alternatives\nFigure6 shows a rendered frame of our plume simulation (without geometry) for all methods. Not. hat this boundary condition is not present in the training set and represents an input divergent flo. pproximately 5 times wider than the largest impulse present during training. It is a difficult test c. generalization performance for data-driven methods. Qualitatively, the PCG and Jacobi (with 10. terations) and our network produce visually similar results. The model of Yang et al., trained usin. he loss function of this work, cannot accurately simulate the large vortex under the plume, and as. esult the plume rises too quickly and exhibits density blurring under the plume itself. Similarly th Jacobi method, when truncated early at 34 iterations, introduces implausible high frequency nois. and has an elongated shape due to inaccurate modeling of buoyancy forces..\nWe also repeat the above simulation with solid cells from the \"arch' model held out of our training set. Single frame results for this simulation are shown in Figure[7] Since this scene exhibits lots of turbulent flow, qualitative comparison is less useful. However, the network of Yang et al. has difficulty minimizing divergence around large flat boundaries and results in high-frequency density artifacts as shown. Both ConvNet based methods lose some smoke density inside the arch mode. due to negative divergence at the fluid-geometry boundary (specifically at the large flat ceiling), like a result of this wide plume interaction being outside the scope of the training samples.\nNote that with custom hardware Movidius, Google Inc. separable convolutions and other architec tural enhancements, we believe the runtime of our ConvNet could be reduced significantly. However we leave this to future work.\nFigure 6: Plume simulation (without vorticity confinement). Top left: Jacobi (34 iterations). Top Middle Jacobi (10O iterations). Top Right: PCG. Bottom left: Yang et al. Bottom middle: This work - small model. Bottom Right: This work.\nFigure 7: Plume simulation with \"Arch\"' geometry. Left: PCG. Middle This work - small model Right: This work."}] |
BJyBKyHKg | [{"section_index": "0", "section_name": "PARTICLE VALUE FUNCTIONS", "section_text": "Chris J. Maddison1,2, Dieterich Lawson', George Tucker3 Nicolas Heess?, Arnaud Doucet', Andriy Mnih?, Yee Whye Teh1,2\nThe policy gradients of the expected return objective can react slowly to rare re wards. Yet, in some cases agents may wish to emphasize the low or high returns regardless of their probability. Borrowing from the economics and control liter- ature, we review the risk-sensitive value function that arises from an exponential utility and illustrate its effects on an example. This risk-sensitive value func- tion is not always applicable to reinforcement learning problems, so we introduce. the particle value function defined by a particle filter over the distributions of an agent's experience, which bounds the risk-sensitive one. We illustrate the benefit of the policy gradients of this objective in Cliffworld.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The expected return objective dominates the field of reinforcement learning, but makes it difficult to express a tolerance for unlikely rewards. This kind of risk sensitivity is desirable, e.g., in real-world settings such as financial trading or safety-critical applications where the risk required to achieve a specific return matters greatly. Even if we ultimately care about the expected return, it may be beneficial during training to tolerate high variance in order to discover high reward strategies.\nWe look at a finite horizon Markov Decision Process (MDP) setting where Rt is the instantaneous reward generated by an agent following a non-stationary policy , see Appendix|A] A utility func. tion u : R -> R is an invertible non-decreasing function, which specifies a ranking over possible returns T=o Rt. The expected utility E[u(T=o Rt)|So = s] specifies a ranking over policies (Von Neumann & Morgenstern!1953). For an agent following u, a natural definition of the \"value' of a state is the real number V1 s, u) whose utility is the expected utility:\nT L VF(s,u) =u-1 E Rt So = s u t=1"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper we introduce a risk-sensitive value function based on a system of interacting trajectories. called a particle value function (PVF). This value function is amenable to large-scale reinforcement. learning problems with nonlinear function approximation. The idea is inspired by recent advances in variational inference which bound the log marginal likelihood via importance sampling estimators. (Burda et al.]2016] Mnih & Rezende2016), but takes an orthogonal approach to reward modifi- cations, e.g. (Schmidhuber1991 Ng et al.]1999). In Section2] we review risk sensitivity and a simple decision problem where risk is a consideration. In Section 3] we introduce a particle value function. In Section[4] we highlight its benefits on Cliffworld trained with policy gradients..\nT VF(s,) = B Rt log E exp So = s t=0\nFigure 1: A two state MDP. The plot shows VP(1, ) for distinct , assuming aliased states and a policy parameterized simply by the probability p E [0, 1] of remaining in a state.\nand as -> 0 we recover the expected return. See AppendixB|for details. One way to interpret this. value is through the following thought experiment. If the agent is given a choice between a single. 
interaction with the environment and an immediate deterministic return, then V(s, 3) represents the minimum return that our agent would take in exchange for forgoing an interaction. If < 0 then V (s, ) V (s, 0), meaning that the agent is willing to to take a loss relative to the expected. return in exchange for certainty. This is a risk-avoiding attitude, which emphasizes low returns. If > 0, then V(s, ) V(s, O), and the agent would only forgo an interaction for more than it. can expect to receive. This is risk-seeking behavior, which emphasizes high returns..\nTo illustrate one effect of risk, consider the two state MDP shown in Figure[1 The agent begins in state 1 and acts for two time steps, choosing between leaving or remaining. Suppose that the agent's policy is defined by a single parameter p E 0, 1| that describes the probability of remaining Then the expected return VP(1, 0) = 3p2 2p + 1 has two local maxima at p E {0, 1} and the solution p = 1 is not a global maximum. Any policy gradient trajectory initialized with p > 2/3 wil converge to the suboptimal solution p = 1, but as our risk appetite grows, the basin of attraction tc the global maximum of p = 0 expands to the entire unit interval. This sort of state aliasing happens often in reinforcement learning with non-linear function approximation. In these cases, modifying the risk appetite (either towards risk-avoidance or seeking) may favorably modify the convergence of policy gradient algorithms, even if our ultimate objective is the expected return.\nThe risk-seeking variant may be helpful in deterministic environments, where an agent can exactly reproduce a previously experienced trajectory. Rare rewards are rare only due to our current policy and it may be better to pursue high yield trajectories more aggressively. Note, however, that V (s, ) is non-decreasing in , so in general risk-seeking is not guaranteed to improve the expected return Note also that the literature on KL regularized control (Todorov2006) Kappen]2005] Tishby & Polani] 2011) gives a different perspective on risk sensitive control, which mirrors the relationship. between variational inference and maximum likelihood. See Appendix|C|for related work..\nAlgorithms for optimizing V(s, ) may suffer from numerical issues or high variance, see Ap pendix[B Instead we define a value function that bounds V(s, ) and approaches it in the infinite sample limit. We call it a particle value function, because it assigns a value to a bootstrap particle filter with K particles representing state-action trajectories. This is distinct, but related to Kantas (2009), which investigates particle filter algorithms for infinite horizon risk-sensitive control.\n1.0 Value functions at distinct risk settings beta = 2 0.8 beta = 1 beta = 0 1/4 -1/4 0.6 beta = -1 beta = -2 0.4 0.2 1 0.0 0.0 0.2 0.4 0.6 0.8 1.0 Probability of remaining.\nBriefly, a bootstrap particle filter can be used to estimate normalizing constants in a hidden Markov model (HMM). Let (Xt, Yt) be the states of an HMM with transitions Xt ~ p(|Xt-1) and emis- sions Yt ~ q(|Xt). Given a sample yo ... yT, the probability p({Yt = yt}T-o) can be computed with the forward algorithm. The bootstrap particle filter is a stochastic procedure for the forward al- gorithm that avoids integrating over the state space of the latent variables. It does so by propagating a set of K particles X(t) with the transition model X(t) ~ p(|X(1) and a resampling step in propor- of the desired probability (Del Morall 2004 Pitt et al.]2012). 
The insight is that if we treat the state-action pairs (St, At) as the latents of an HMM with emission potentials exp(3Rt(St, At)) (similar toToussaint & Storkey2006][Rawlik et al.]2010), then a bootstrap particle filter returns an unbiased estimate of E[exp( t=o Rt)|So = s]. Algorithm[1 1 summarizes this approach.\nI:K do 9: s(i) _ s(i) 2: 10: p:S ) # inherit from parer 3: 11: TTT 4: W(i) e 12: 5: end for 13: end for 6Zo=kKW K 14: W(i 7: for t = 1 : T do 15: end for 8: for i = 1 : K do 16: return log Z\nConsider the value if we initialize all particles at s, V.K(s, ) = V(s, ..., s, ). If > 0, then. by Jensen's inequality and the unbiasedness of the estimator we have that V.k(s, ) V(s, ) For < O the bound is in the opposite direction. It is informative to consider the behaviour of. the trajectories for different values of . For > 0 this algorithm greedily prefers trajectories that encounter large rewards, and the aggregate return is a per time step soft-max. For < 0 this. algorithm prefers trajectories that encounter large negative rewards, and the aggregate return is a per time step soft-min. See AppendixD|for the Bellman equation and policy gradient of this PVF.."}, {"section_index": "3", "section_name": "4 EXPERIMENTS", "section_text": "To highlight the benefits of using PVFs we apply them to a variant of the Gridworld task called Cliffworld, see AppendixE for comparison to other methods and more details. We trained time de- pendent tabular policies using policy gradients from distinct PVFs for E {-1, 0.5, 0, 0.5, 1, 2} We tried K E {1, ..., 8} and learning rates e E {1 10-3, 5 10-4, 1 10-4, 5 10-5}. For the = 0 case we ran K independent non-interacting trajectories and averaged over a policy gra dient with estimated baselines. Figure2|shows the density over the final state of the trained MDP under varying treatments but K = 4. Notice that the higher the risk parameter, the broader the pol- icy, with the agent eventually solving the task. No = 0, corresponding to standard REINFORCE runs solved this task, even after increasing the number of agents to 64."}, {"section_index": "4", "section_name": "5 CONCLUSION", "section_text": "MDP end state density O S CCCCCCCCCCG SCCCCCCCCCCG SCCCCCCCCCCG 1 beta = -1 beta = -1/2 beta = 0 2 3 O 1 beta = 1/2 beta = 2 2 beta = 1 3 01234567891011 01234567891011 01234567891011\nFigure 2: Last state distribution under policies trained with PVFs with distinct\ngorithm I An estmalor ol le P vr I ~ P(I = j) x W() # select random parent 1: for i = 1 : K do 9: S(i) = s(i) 2: 10: 3: A(i) ) ~ TT-t(|S() 11: 4: W(i) 12: 5: end for 13: end for Z= K w(t) 14: 7: for t = 1 : T do 15: end for 8: for i = 1 : K do 16: return + t-o log Zt\nTaking an expectation over all of the random variables not conditioned on we define the PVF asso- ciated with the bootstrap particle filter dynamics:\nEF Og t=0\nWe introduced the particle value function, which approximates a risk-sensitive value function for a given MDP. We will seek to address theoretical questions, such as whether the PVF is increasing in and monotonic in the number of particles. Also, the PVF does not have an efficient tabular represen- tation, so understanding the effect of efficient approximations would be valuable. Experimentally we hope to explore these ideas for complex sequential tasks with non-linear function approximators. One obvious example of such tasks is variational inference over a sequential model.\nWe thank Remi Munos, Theophane Weber, David Silver, Marc G. Bellemare, and Danilo J. 
Rezende for helpful discussion and support in this project.."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Kenneth Joseph Arrow. Essays in the theory of risk-bearing. 1974\nBart van den Broek, Wim Wiegerinck, and Hilbert Kappen. Risk sensitive path integral control arXiv preprint arXiv:1203.3523, 2012\nStefano Coraluppi. Optimal control of Markov decision processes for performance and robustness PhD thesis, University of Maryland, 1997.\nPeter Dayan and Geoffrey E Hinton. Using expectation-maximization for reinforcement learning Neural Computation, 9(2):271-278, 1997.\nPierre Del Moral. Feynman-kac formulae. In Feynman-Kac Formulae, pp. 47-93. Springer, 2004\nArnaud Doucet and Adam M Johansen. A tutorial on particle filtering and smoothing: fiteen year. later. 2011.\nRoy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via so. updates. arXiv preprint arXiv:1512.08562, 2015\nMatt Hoffman, Arnaud Doucet, Nando De Freitas, and Ajay Jasra. On solving general state-space sequential decision problems using inference algorithms. Technical report, Technical Report TR- 2007-04, University of British Columbia, Computer Science, 2007.\nRonald A Howard and James E Matheson. Risk-sensitive Markov decision processes. Managemeni science, 18(7):356-369. 1972\nNikolas Kantas. Sequential decision making in general state space models. PhD thesis, Citeseer. 2009.\nNikolas Kantas, Arnaud Doucet, Sumeetpal S Singh, Jan Maciejowski, Nicolas Chopin, et al. On particle methods for parameter estimation in state-space models. Statistical science, 30(3):328- 351, 2015.\nHilbert J Kappen. Linear theory for control of nonlinear stochastic systems. Physical review letters 95(20):200201, 2005.\nHilbert J Kappen, Vicenc Gomez, and Manfred Opper. Optimal control as a graphical model infer ence problem. Machine learning, 87(2):159-182, 2012\nSven Koenig and Reid G Simmons. Risk-sensitive planning with probabilistic decision graphs. I Proceedings of the 4th international conference on principles of knowledge representation an reasoning, pp. 363, 1994.\nAndrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations Theory and application to reward shaping. In ICML, volume 99, pp. 278-287, 1999.\nJohn W Pratt. Risk aversion in the small and in the large. Econometrica, pp. 122-136, 1964\nYun Shen. Michael J Tobia. Tobias Sommer. and Klaus Obermayer. Risk-sensitive reinforcemeni learning. Neural computation, 26(7):1298-1328, 2014\nNaftali Tishby and Daniel Polani. Information theory of decisions and actions. In Perception-action cycle, pp. 601-636. Springer, 2011.\nEmanuel Todorov. Linearly-solvable markov decision problems. In NIPS, pp. 1369-1376, 2006\nMarc Toussaint and Amos Storkey. Probabilistic inference for solving discrete and continuous state markov decision processes. In Proceedings of the 23rd international conference on Machine. learning, pp. 945-952. ACM, 2006.\nJohn Von Neumann and Oskar Morgenstern. Theory of games and economic behavior. Princetor University Press, 1953\nRonald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992\nSteven I Marcus, Emmanual Fernandez-Gaucherand, Daniel Hernandez-Hernandez, Stefano Coraluppi, and Pedram Fard. Risk sensitive markov decision processes. In Systems and con trol in the twenty-first century, pp. 263-279. 
Springer, 1997."}, {"section_index": "6", "section_name": "MARKOV DECISION PROCESSES", "section_text": "We consider decision problems in which an agent selects actions and receives rewards in a stochastic. environment. For the sake of exposition, we consider a finite horizon MDP, which consists of: a finite. state space S, a finite action space A, a stationary environmental transition kernel satisfying the Markov property p([St, At, ..., So, Ao) = p(St, At), and reward functions rT-t : S A -> R. At each time step the agent chooses actions according to a policy T-t([St) given the current. state. T-t(|St) is the action distribution and rT-t the reward function when there are T - t steps. remaining. All together the MDP proceeds stochastically producing a sequence of random variables. (St, At) according to the following dynamics for T E N time steps. Let t E {0, ..., T}..\nThe agent receives a reward Rt = rT-t(St, At) at each time step. We will call a single realization of the MDP a trajectory. The objective in classical reinforcement learning is to discover the policies r = {t(|s)}T-o that maximize the value function,\nT VF(s) =E Rt So = s t=0\nwhere the expectation is taken with respect to all the stochastic elements not conditioned on\nUtility theory gives us a language for describing the relative importance of high or low returns.. A utility function u : R -> R is an invertible non-decreasing function, which specifies a ranking. over policies (Von Neumann & Morgenstern1953). The expected utility does not necessarily have an interpretable scale, because any affine transformation of the utility function results in the same relative ordering of policies or return outcomes. Therefore we define the value associated with a utility u by returning it to the scale of the rewards defined by the MDP. For an agent following u,. the \"value\"' of a state is the real number V(s, u) whose utility is the expected utility:.\nNote that when u is the identity we recover the expected return. Of course for non-decreasing. invertible utilities, the value gives the same ranking over policies. One way to interpret this value is. through the following thought experiment. If the agent is given a choice between a single interaction. with the environment or an immediate deterministic return, then V(s, u) represents the minimum return that our agent would take in exchange for forgoing an interaction. If.\nthen our agent is willing to take a loss relative to the expected return in exchange for certainty. This is a risk-avoiding attitude, which emphasizes the smallest returns, and one can show that this occurs iff u is concave. If\nthen the agent would only forgo an interaction for more than it can expect to receive. This is risk-. seeking behavior, which emphasizes the largest returns, and one can show that this occurs iff u is\nSo = s At ~TT-t([St) St+1 ~ p(|St,At)\nAll of the results we present can be simply extended to the infinite horizon case with discounted or episodic returns as well as more general uncountable state and action spaces.\nT F(s,u)=u-1 E Rt So u t=1\nT VT(s,u)<E r Rt|So = s t=0\nT RtSo =s V(s,u) E t=0\nWe focus on exponential utilities of the form u(x) = sgn() exp(x) where e R. This is a. broadly studied choice that is implied by the assumption that the value function V(s, u) is addi-. tive for deterministic translations of the return (Pratt|1964) Howard & Matheson1972) Coraluppi 1997). This assumption is nice, because it preserves the Markov nature of the decision process: if. 
the agent were given a choice at every time step t between continuing the interaction or terminating and taking its value as a deterministic return, then additivity in the value function means that the same decision is made regardless of the return accumulated so far (Howard & Matheson1972). The value function corresponding to an exponential utility is.\nand as -> 0 we recover the expected return. We list a few of its properties"}, {"section_index": "7", "section_name": "For proofs", "section_text": "From a practical point of view the value function V(s, ) behaves like a soft-max or soft-mi depending on the sign of , emphasizing the avoidance of low returns when < 0 and the pursui of high returns when > 0. As -> oo the value V(s, ) approaches the supremum of the returns over all trajectories with positive probability, a best-case penalty. As -> oo it approaches th infimum, a worst-case value (Coraluppil 1997). Thus for large positive this value is tolerant o high variance if it can lead to high returns. For large negative it is very intolerant of rare lov returns.\nT 1 V(s,)= log E exp Rt So = s t=0\nT T VF(s,) =E Rt Rt var So = s 2 TT t=0 t=0\nT T Rt E exp So = s = E exp Rt 3 Q So = s t=0 t=0 T > E exp Rt a S t=0\nsince xP is convex on x > 0 for p > 1 or p < 0, Jensen's inequality gives us the result. Taking log of both sides gives us the result in that case. In the case that a = 0 or = 0, 4. and Jensen's inequality gives us the result by the concavity of log.\nDespite having attractive properties the risk-sensitive value function is not always applicable tc reinforcement learning tasks (see also Mihatsch & Neuneier(2002)). The value function satisfies the multiplicative Bellman equation\nexp(VF(s,B)) = > nT(a[s)p(ss,a) exp(BrT(s,a) + BV_1(s,) a,s'\nOperating in log-space breaks the ability to exploit this recurrence from Monte Carlo returns gener. ated by a single trajectory, because expectations do not exchange with log. Operating in exp-spac is possible for TD learning algorithms, but we must cap the minimum/maximum possible return sc. that exp(V(s, 3)) does not underflow/overflow. This can be an issue when the rewards represen log probabilities as is often the case in variational inference. The policy gradient of V(s, 3) is.\nVVT(s, ) = E exp(3QT_t(St,At,) - V(St,B))V logT-t(At|St)[So = s t=0\nT T So -E Rt exp exp V lognT-t(At|St)[So = s t=0 t=0\nThere are particle methods that would address the estimation of this score, e.g. (Kantas et al.]2015) but for large T the estimate suffers from high mean squared errors.\nRisk sensitivity originates in the study of utility and choice in economics (Von Neumann & Mor genstern 1953 Pratt 1964, Arrow 1974). It has been extensively studied for the control of MDPs (Howard & Matheson[1972)|Coraluppi]|1997fMarcus et al.11997) Borkar & Meyn2002]|Mihatsch & Neuneier2002Bauerle & Rieder2013). In reinforcement learning, risk sensitivity has beer. studied (Koenig & Simmons1994]Neuneier & Mihatsch]1998] Shen et al.]2014), although none of these consider the direct policy gradient approach considered in this work. Most of the methods. considered are variants of a Q learning approach or policy iteration. As well, the idea of treating. rewards as emissions of an HMM is not a new idea (Toussaint & Storkey. 2006 Rawlik et al. 2010\nworks still optimize the expected reward objective V (s, 0) = lE[> t. t=o RtSo = s] with or without some regularization penalties on the policy. 
The ones that share the closest connection to the risk sensitive objective V (s, ) studied here, are the KL regularized objectives of the form.\nVF(s,)) => nT(a[s)p(s'[s,a) exp(BrT(s,a) + BV_1(s,)) a,s'\nT QT(s,a,) exp Rt log E So = s, Ao = a t=0\nT (At|St) =IE Rt+ log So = s n'(At|St) T t=0\nRuiz & Kappen|(2016). The observation is that in an MDP with fully controllable transition dynam ics, optimizing a policy ', which completely specifies the transition dynamics, achieves the risk\nNote that this has an interesting connection to Bayesian inference. Here, plays the role of the prior, ' the role of the variational posterior, V,t' (s, ) the role of the variational lower bound, and V (s, ) the role of the marginal likelihood. In effect, KL regularized control is like variational inference, where risk sensitive control is like maximum likelihood. Finally, when the environmental dynamics p( [s, a) are stochastic, (16) does not necessarily hold, therefore the risk sensitive value is distinct in this case. Yet, in certain special cases, risk sensitive objectives can also be cast as solutions to path integral control problems (Broek et al.]2012).\nRecalling Algorithm[1and the definition of the MDP in Appendix [A] define\nthe particle value function associated with the bootstrap particle filter dynamics\n1 E log Zt t=0\nBVF(s(1:K),) = IIr(a(1:K)|s(1:K)) log ZT(a(1:K), s(1:K)) + a(1:K) Ir(a(1:K)|s(1:K)Pr(o(1:K)|a(1:K),s(1:K)BVF-1(o(1:K),f a(1:K) o(1:K)\nK IT(a(1:K)|s(1:K)) =T(a i=1 K log ZT(a(1:K),s(1:K) log K i=1 K K ex l:K)|q(1:K) (1:K) 1 k=1 exp(Br i=1 j=1\nT T K log ZtV log T-t =F = s(i)}K t=0t'=ti=]\nIn this sense we can think of log Zt/ as the immediate reward for the whole system of particles and T t=o log Zt/ as the return.\nmax 3)=VF(s,B\nTo our knowledge no work has considered using particle filters for risk sensitive control by treating. the particle filter's estimator of the log partition function as a return whose expectation bounds the risk sensitive value and whose policy gradients are cheap to compute.\n2 rT_\nWe can also think of this value function as the expected return of an agent whose actions space is the product space AK, in an environment with state space SK whose transition kernel includes the resampling dynamic. Let s(1:K) = )). then the PVF satisfies the Bellman equation\nT VT.K 11 log E Zt S K S{i=1 S t=0\nV(s,)\n1. VF.k(s,) V(s,) 2. limK- VF.k(s,) = VF(s, ) 3.limx->0 V So = s= V1(s,) 4. V -(s. B) is continuous in 3\n1. VT.k(s,) VT(s,) 2. limK-o VF.k(s,) = VF(s,) 3. lima->0 VF.k(s,) = E RtSo=s=VT.1s,) 4. V v(s. B) is continuous in 3\nBecause each has a genealogy, which is an MDP trajectory"}, {"section_index": "8", "section_name": "E CLIFFWORLD DETAILS", "section_text": "We considered a finite horizon Cliffworld task, which is a variant on Gridworld. The world is 4 rows by 12 columns, and the agent can occupy any grid location. Each episode begins with the\nThe key point is that the use of interacting trajectories to generate the Monte Carlo return ensures that this particle value function defines a bound on V(s, ). Indeed, consider the particle value. function that corresponds to initializing all K trajectories in state s E S and define, VF.k(s, ) =. VA (s s B) Now for B > 0 we have by Jensen's inequality.\nFor < 0 we get the reverse inequality, VF,k(s, ) V(s, ). As K > oo the particle value function converges to V(s, ) since the estimator is consistent (Del Moral, 2004). 
We list this and some other properties:\nT log Zt lim VF,k(s, ) = E lim {S 3->0 3->0 t=0 T K -E K S[i=] K t=0i=1 T K 1 s}=1 IE K t=0 i=1\nT K 1 E|Rt|So= s K t=0 i=1 T =E Rt So = s t=0\nFigure 3: Cliffworld is a n m gridworld. S denotes the start state, G the goal state, and the agen is currently in state (4,2). The arrows show the actions available to the agent..\nFigure 4: Left plot: probability of solving task with standard deviation, defined as achieving positive. average return. Right plot: Average reward during training with standard deviation. Both VIMCO and PVF trained with = 1.0 and learning rate e = 1 10-3. Averages are for 8 runs. At 3. particles, some PVF runs began solving Cliffworld, while no VIMCO ones did..\nagent in state (0,0) (marked as S in Figure[3) and ends when 24 timesteps have passed. The actions available to the agent are moving north, east, south, and west, but moving off the grid is prohibited. All environmental transitions are deterministic. The 'cliff' occupies all states between the start and. the goal (marked G in Figure[3) along the northern edge of the world. All cliff states are absorbing and when the agent enters any cliff state it initially receives a reward of -100 and receives O reward. each timestep thereafter. The goal state is also absorbing, and the agent receives a +100 reward upor. entering it and O reward after. The agent receives a -1 reward for every action that does not transition. into a cliff or goal state. The optimal policy for Cliffworld is to hug the cliff and proceed from the. start to the goal as speedily as possible, but doing so could incur high variance in reward if the agent falls off the cliff. For a uniform random policy most trajectories result in large negative rewards and. occasionally a high positive reward. This means that initially for independent trajectories venturing. east is high variance and low reward..\nWe trained non-stationary tabular policies parameterized by parameters 0 of size 4 12 4 24\na=0CXP(0[S1, S2, U, The policies were trained using policy gradients from distinct PVFs for E {-1, -0.5, 0, 0.5, 1, 2} We tried K E {1, ..., 8} and learning rates e E {1 10-3, 5 10-4, 1 10-4, 5 10-5}. For the. 3 = 0 case we ran K independent non-interacting trajectories and averaged over a policy gradient with estimated baselines. For = 0, we used instead a REINFORCE (Williams 1992) estimator, that was simply estimated from the Monte Carlo returns. For control variates, we used distinct base- lines depending on whether = 0 or not. For = 0, we used a baseline that was an exponential. moving average with smoothing factor O.8. The baselines were also non-stationary, and with dimen- sionality 4 12 24. For 0 we used no baseline except for VIMCO's control variate (Mnih & Rezende2016) for the immediate reward. The VIMCO control variate is not applicable for the. whole return as future time steps are correlated with the action through the interaction of trajectories.\nwhere R) is a reward sequence generated by an independent Monte Carlo rollout of the original MDP. VIMCO is also a risk sensitive value function, but it does not decompose over time and so\nS G :\nPVF vs.VIMcO varying number of particles PVF vs.VIMCO with 3 particles 1.0 80 PVF PVF yst 60 VIMCO VIMCO 0.8 40 eeh eeettn eeeeeety 0.6 20 O Prqeqole 0.4 20 40 0.2 60 0.0 -80 2 3 4 5 6 7 8 5 10 15 20 Number of particles 10k weight updates\nexp0[s1, S2, a, T t] IT-t(as 3=o exp(0[s1, S2, a, T t])\nK T 1 1 VT,K(s,) =E log exp K i=1 t=0\ndoes not have a temporal Bellman equation. 
In this case, though, VIMCO policy gradients were able to solve Cliffworld under most of the conditions that the policy gradients of PVF were able to solve. For K = 3 and = 1.0, PVF occasionally solved Cliffworld while VIMCO did not. See Figure4 However, once in the regime where VIMCO could solve the task, it did so with more reliability than the PVF variant. Note that in no case did REINFORCE on the expected return solve this variant"}] |
rkmU-pEFl | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Minimally invasive surgery (MIs) became a wide-spread technique to have surgical access to the abdomen of patients without casing major damages in the skin or tissues. Since MIS supporting techniques like laparascopy or endoscopy provide a restricted access to the surgeon, computer-aided visualization systems are developed. One of the major research areas in the 3d reconstruction of stereo endoscope images. See Figure[1|for an example stereo cardiac laparoscopy image pair.\nIn this paper we present an approach to predict the disparity maps of a stereo image pair by creating a. very deep parallel Convolutional Neural Network (CNN). The CNN maps input RGB image patches to disparity map patches, avoiding the procedure of establishing a correspondence between the twc sides which would require prior knowledge about the data..\nThe paper is organized as follows: Section 2|describes the proposed method and the methodology we used. We show our preliminary results in Section3|and we draw conclusions in Section4"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "(a) Left image (b) Right image"}, {"section_index": "2", "section_name": "DISPARITY MAP RECONSTRUCTION USING DEEP CNNS", "section_text": "In this section we describe the approach to predict disparity maps from the stereo images. Eacl stereo image pair is split into 24 24 overlapping parts and we can create a mapping with a very. deep parallel CNN to a its respective disparity map patch.."}, {"section_index": "3", "section_name": "2.1 METHODOLOGY", "section_text": "The components of the proposed parallel CNN architecture can be categorized into two main build ing blocks: a CNN block (Table|1), which consists of several convolutional layers followed by ma. pooling and batch normalization and a Fully Connected block (Table2). Table 3|shows the whole architecture. There are 3 CNN blocks on each side at the beginning. After the convolutional layers the two side are merged and a succession of FC blocks maps the stereo images to a disparity map To avoid overfitting, we used Tikhonov normalization on the weights and initialized them with the Glorot uniform formula Glorot & Bengio(2010). We have used Adamax ? for training and allowec a maximum of 1000 epochs with early stopping. We have used a training set of 20 images and testec our approach on the remaining 2407 frames.\nLayer Shape Activation Convolutional 128 x 5 x 5 ReLU|Nair & Hinton(2010 Convolutional 64 x 3 x 3 ReLU Convolutional 64 x 3 3 ReLU Convolutional 32 x 3 x 3 ReLU Convolutional 32 3 x 3 ReLU Max Pooling 2 x 2 Batch Normalization\nTable 2: The architecture of a Fully connected (FC) block with p parameters"}, {"section_index": "4", "section_name": "4 CONCLUSION", "section_text": "In this paper we have presented a very deep parallel convolutional neural network to predict dis. parity maps from stereo image pairs. We have used a stereo laparoscopic image data set and the\nTable 1: The architecture of a CNN block\nLayer Shape Activation Densea p ReLU Dense p ReLU Dropout (0.5)\nTo evaluate the accuracy of the proposed approach, we have calculated the root mean squared error. (RMSE) of the predicted disparity maps to the ground truth. The proposed approach had a RMSE. of 4.844. 
In comparison, another DNN-based approach|Antal(2016) achieved 5.537 on the same images.\nTable 3: The architecture of a CNN block Left Right CNN Block L1 CNN Block R1 CNN Block L2 CNN Block R2 CNN Block L3 CNN Block R3 Merge Flatten FC Block 1 (4096) FC Block 1 (2048) FC Block 1 (1024) Dense (24 x 24) Total parameters: 38599425\nevaluation showed that it performed better then a previously publish deep neural network-based ap. proach. Since the proposed approach does not require prior knowledge on the image acquisition, it is potentially more generalizable across devices. In the future, we will investigate this hypothesis."}, {"section_index": "5", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was supported in part by the project VKSZ 14-1-2015-0072, SCOPIA: Development of diagnostic tools based on endoscope technology supported by the European Union, co-financed by the European Social Fund. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Balint Antal. Automatic 3d point set reconstruction from stereo laparoscopic images using deep neural networks. In Proceedings of the 6th International Joint Conference on Pervasive and Embedded Computing and Communication Systems (PECCS 2016), pp. 116-121, 2016. ISBN 978-989-758-195-3. doi: 10.5220/0006008001160121.\nJames Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: A cpu and gpu math compiler in python. In Proc. 9th Python in Science Conf, pp. 1-7, 2010..\nSharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catan zaro, and Evan Shelhamer. cudnn: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.\nFranois Chollet. Keras. https://github. com/fchollet/keras. 2015.\nPhilip Pratt, Danail Stoyanov, Marco Visentini-Scarzanella, and Guang-Zhong Yang. Dynamic guid ance for robotic surgery using image-constrained biomechanical models. In Medical Image Com puting and Computer-Assisted Intervention-MICCAI 2010, pp. 77-85. Springer, 2010.\nDanail Stoyanov, Marco Visentini Scarzanella, Philip Pratt, and Guang-Zhong Yang. Real-time. stereo reconstruction in robotically assisted minimally invasive surgery. In Medical Image Com puting and Computer-Assisted Intervention-M1CCAI 2010, pp. 275-282. Springer, 2010.\nTable 3: The architecture of a CNN block\nXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics, pp. 249-256, 2010"}] |
SJGfklStl | [{"section_index": "0", "section_name": "EXPLORING THE ROLE OF DEEP LEARNING FOR PAR TICLE TRACKING IN HIGH ENERGY PHYSICS", "section_text": "Lawrence Berkeley National Laboratory, Redwood Center for Theoretical Neuroscience, Berkeley, California\nDustin Anderson, Jean-Roch Vilmant, Josh Bendavid, Maria Spiropoulou, Stephan Zheng California Institute of Technology, Pasadena, California.\nTracking particles in a collider is a challenging problem due to collisions, imper-. fections in sensors and the nonlinear trajectories of particles in a magnetic field Presently, the algorithms employed to track particles are best suited to capture lin- ear dynamics. We believe that incremental optimization of current LHC (Large. Halidron collider) tracking algorithms has reached the point of diminishing re-. turns. These algorithms will not be able to cope with the 10-100x increase in HL-LHC (high luminosity) data rates anticipated to exceed O(100) GB/s by 2025 without large investments in computing hardware and software development or without severely curtailing the physics reach of HL-LHC experiments. An op. timized particle tracking algorithm that scales linearly with LHC luminosity (or. events detected), rather than quadratically or worse, may lead by itself to an order of magnitude improvement in the track processing throughput without affecting. the track identification performance, hence maintaining the physics performance. intact. Here, we present preliminary results comparing traditional Kalman filter-. ing based methods for tracking versus an LSTM approach. We find that an LSTM. based solution does not outperform a Kalman fiter based solution, arguing for. exploring ways to encode apriori information.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep Learning has played a phenomenal role in making advances in many fields such as computer vision Goodfellow et al.[(2014), speech recognition Hinton et al.[(2012) and robotics Levine et al. (2016) amongst other fields. While there has been some work on applying deep learning techniques to searching for particles in high energy physics (HEP)|Baldi et al.(2014) there has not yet been any concerted effort in applying it to problems of tracking.\nIn this work, we explore the role of deep learning for problems of tracking in high energy physics. experiments. First, we present the complexities of the problem in detecting particles. Next, we present preliminary results on the applications of LSTMs to tracking particles in a detector array. We hope, with this work, to reach out to the broader machine learning community to both present. our findings and seek out methods for solving challenging problems in high energy physics ..\nIn a typical HEP experiment, building-sized underground detectors collect TBs of data per second. coming from high-energy collisions of two particle beams. The detectors are composed of concen-. tric cylindrical layers of electronic sensors surrounding the collision region. Each collision event\nO(100M) tracks/sec\nFigure 1: left Visualization of hits from trajectories from the ATLAS general-purpose LHC. The. figure shows a slice view of the detector ands hits on the various layers of the detectors. right a schematic describing how a single particle generates hits that are used as inputs. The particle travels. from its origin and passes through pixel detectors which help form the seed for track fitting. 
The particle then continues to travel through the various detector layers sometimes resulting in multiple. or missed hits in layers due to various physical interactions.\nThe most demanding pattern recognition task in HEP is to reconstruct the trajectories (\"'tracks') of millions of charged particles per second as they propagate through a tracking system of a detector Given a 3D image I(x, y, z) with triplets of inputs where each pixel has a binary value, with 1 sig nifying a hit on the detector layer, the pattern recognition task is to group together all hits generatec by each particle as seen in Figure[1] This task is made complicated by detector effects (such as noise in the sensors and an imperfect magnetic field) as well by stochastic perturbations to the particle trajectory derived from particle interactions with detector material.\nThe similarities in problems between those explored in computer vision, robotics and the HEP field are obvious. The obvious differences lie in the fact that in the case of HEP-LHC typically we would need to estimate the parameters of millions of tracks in parallel. Further, the required reliability of a model is significantly higher. For example, the existing state of the art methods can detect tracks with an efficiency between 90-99% depending of the particle type and its momentum.."}, {"section_index": "2", "section_name": "3 MODELING", "section_text": "For the tracking problem, one is provided with a seed. A seed is a n-tuple of three points in 3I space, where n is the minimum number of points required to fit a parametric curve to a set of points in 3D.\nWe compare our method against a Kalman filter whose transition matrices are not learnt but manually set with knowledge of the physics Frhwirth (1987). That is, we encode the transition matrices that describe the dynamics in the latent space and their projections back onto the observation space based on the approximated analytical forms that the particles are expected to take as they make their way out of the detectors. Of further importance, these Kalman filters have unique transition matrices for each detector layer to better capture the expressive nature of dynamics.\nstrip detector pixel detector particle origin\nconsists of O(103) particles that traverse the detectors in various directions and different charge energy, and momentum as seen in Figure[1] The topologies of the events offer insight into the nature of the collisions, allowing to probe the properties of elementary particles and the fundamental laws of nature on a statistical basis.\nSeed generation is a pattern recognition problem in of itself. But given the seed, our approach has. been to fit an LSTM to predict the location of the next hit. The loss function in this case is to minimize the predicted hits across an entire sequence of a trajectory..\nInput vectors are fed into LSTM units (at least 5 in number), the output of the LSTM units are then fed into two fully connected layers which produce the prediction (or the next time-step). The weights are then learnt through gradient descent..\nLSTN LSTM LSTM LSTM 0.10\nLST\nFigure 2: For a given track (red), we compare the Kalman Filter solutions (blue) and the LSTM solution (red). The three subplots show the comparision of Ro vs z, z vs R and R vs Ro (left Shows a case where the KF and LSTM solution very closely match the data. 
(middle) Compares the LSTM and KF solution on a track where they differ the most.\nTable 1: Comparing average Euclidean distance between measurements and predictions from both the Kalman Filter and LSTM\nThe limitations with these methods are that they inherently cannot capture non-linear dynamics anc the physics is known only up to an approximation that is further exacerbated by noisy measurements.. We wish to explore the role RNNs like LSTMs could play in modeling these dynamics.."}, {"section_index": "3", "section_name": "4 DISCUSSION", "section_text": "We then fit an LSTM with 10 hidden units and two fully connected units of sizes 20 and 2 (since. R is known apriori for all tracks) to produce the prediction for the next time step. We used Adam. Optimizer Kingma & Ba (2014) to train the weights with an initial learning rate set to 0.001. We also experimented with changing the number of fully connected layers and the types of recurrent. network units (for e.g. GRUs with varying number of hidden units), although we make no claim for. an exhaustive search of these architectures..\nWe find that an LSTM based approach can filter states comparably in some cases to an ideal mode based on the Kalman Filter as seen in Figure[2|and Table[1 Yet, there still remains a large gap ir performance.\nOur ideas moving forward is to look towards combining prior knowledge about the problem with a learning based approach. For example, we hope to train models that have access to information such as the magnetic field (say). Further we hope to explore models which can encode the geometry of the detector to better be able to make predictions between layers?\nTrack fitting is just one step of the puzzle in high energy physics. The goal of our HEP.TrkX project'. is to prototype an end-to-end solution for the HL-LHC track pattern recognition challenge. Current solutions for this have a combinatorial approach that would make the latency larger when the data throughput is higher. The motivation for this submission is to seek advice and inputs from the larger representation learning community on models and methods..\nMeasurement. Euclidean Distance\nHere we present a summary of our preliminary findings. Using the ACTS simulation software. we simulated around 50,Oo0 charged particle tracks. From this, for convenience of analysis , we sampled trajectories of step length 22 resulting in 16,275 samples. We sampled 200 examples to. form a test set. Each sample consists of three dimensions - Ro, z and R. R is the distance of the. detector from the origin determined by the geometry of the detector, is the angle swept across the. detector by the particle and finally z is a shift along the slice swept out by .."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energ. physics with deep learning. Nature communications, 5, 2014.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural infor mation processing systems, pp. 2672-2680. 2014\nGeoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly. Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE. Signal Processing Magazine, 29(6):82-97, 2012.\nDiederik Kingma and Jimmy Ba. 
Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014\nSergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuo motor policies. Journal of Machine Learning Research, 17(39):1-40, 2016."}] |
ByD6xlrFe | [{"section_index": "0", "section_name": "HYBRID NEURAL NETWORKS OVER TIME SERIES FOR TREND FORECASTING", "section_text": "Tao Lin* Tian Guo* & Karl Aberer\ntao.lin, tian.guo,.. karl.aberer}@epfl.cl\nThe trend of time series characterize the intermediate upward and downward pat terns of time series. Learning and forecasting the trend in time series data play an important role in many real applications, ranging from resource allocation in data centers and load schedule in smart grid. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel hybrid neural net- work based learning approach over time series and the associated trend sequence TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of time series and uses a long-short term memory recurrent neural network (LSTM) to capture the sequential dependency in historical trend evolution. Some preliminary experimental results demonstrate the advantage of TreNet over CNN, LSTM, the cascade of CNN and LSTM, Hidden Markov Mode] method and various kernel based baselines on real datasets"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Time series, as a sequential data points ordered by time, is being generated in a wide spectrum of. domains. However, in many applications, users are only interested in understanding and forecasting. the evolving trend in time series, i.e., upward or downward pattern of time series, since the conven-. tional prediction on specific data points could deliver very little information about the semantics and. dynamics of the underlying process generating the time series. For instance, time series in Figure|1 are from the power consumption dataset. Figure[1(a) shows some raw data points of time series. Though point A and B have approximately the same value, the underlying system is likely to be. in two different states when it outputs A and B, because A is in an upward trend while B is in a. downward trend (Wang et al.]2011} Matsubara et al.2014). On the other hand, even when two. points with the similar value are both in the upward trend, e.g., point A and C, the different slopes. and durations of the trends where point A and C locate, could also indicate different states of the underlying process.\nLocal Data 245 245 245 lne 240 Trend 3 240 240 \\Trend 2 Predict the . 235 235 235 trend Trend 1 from here. 230 230 230 0 10 20 30 40 50 60 70 8090 0 50 100 25 50 75 100 125 Time Time Time (a) (b) (c)\nLocal Data 245 245 245 yanue 240 Trend 3 240 240 Trend 2 Predict the 235 235 235 trend Trend 1 from here 230 230 230 0 10 20 30 40 50 60 70 80 90 0 50 100 0 25 50 75 100 125 Time Time Time (a) (b) (c)\nFigure 1: (a) Time series of power consumption. (b) Trend prediction on time series. 
(c) Sequence of historical trends associated with the time series.\n*These two authors contributed equally"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Learning and forecasting trends are quite useful in a wide range of applications, e.g., in the smart energy domain, knowing the predictive trend of power consumption time series enables energy\nproviders to schedule power supply and maximize energy utilization (Zhao & Magoules2012 In this paper, we are particularly interested in the trend of time series, i.e., upward or downwarc pattern of time series that characterized by the slope and duration (Wang et al.]2011)\nSpecifically, given a time series and the associated historical trend evolution, we aim to predict the duration and slope of the subsequent trend. It involves the learning of different aspects of the data On one hand, the trend variation of the time series is a sequence of historical trends describing. the long-term contextual information of time series and thus naturally affect the evolution of the. following trend. On the other hand, the recent raw data points of time series (Wang et al.]2011)|Batal. et al.[2012), which represent the local variation and behaviour of time series, affect the evolving o1. the following trend as well and have particular predictive power for abruptly changing trends (Wang. et al.[2011). Therefore, it is highly desired to develop a systematic way to model such various. hidden and complementary dependencies in time series..\nTo this end, we propose an end-to-end hybrid neural network, referred to as TreNet. It consists of. a LSTM recurrent neural network to capture the sequential dependency in historical trends, a con-. volutional neural network to extract local features from local raw data of time series and a feature fusion layer to learn joint representation to take advantage of both features drawn from CNN and. LSTM. Such joint representation is then used for the trend forecasting. Some preliminary experi-. mental analysis on real datasets demonstrates that TreNet outperforms RNN, CNN, the cascade of. RNN and CNN, and a variety of baselines in term of trend prediction accuracy..\nIn this section, we first provide the formal definition of the trend learning and forecasting problen in this paper, and then present the proposed TreNet."}, {"section_index": "3", "section_name": "Problem Formulation", "section_text": "We define time series as a sequence of data points = {x1, ..., xT}, where each data point xt is real-valued and subscript t represents the time instant. The historical trend sequence of I is a series of historical trends over ', denoted by T = {(lk, sk)}. Each element of T, i.e., (lk, sk), describes a linear function over a certain subsequence (or segment) of I' and corresponds to a trend in A. l and sk respectively represent the duration and slope of trend k. The local data w.r.t. each historical trend in T is defined as a set of data points of size w, denoted by = {(t-w, .:., xt)} and t is the ending time of trend k in T."}, {"section_index": "4", "section_name": "TreNet.", "section_text": "The idea of our TreNet is to combine CNN with LSTM to utilize their representation abilities or different aspects of training data (i.e., I' and T) and then to learn a joint feature for trend prediction.\nTechnically, TreNet is designed to learn a predictive function (l, s) = f(R(T), C(L)). 
R(T) i derived by training the LSTM over sequence T to capture the dependency in the trend evolving while C(L) corresponds to local features extracted by CNN from sets of local data in . The long term and local features captured by LSTM and CNN, i.e., R(T) and C(L), convey complementary information pertaining to the trend varying. Therefore, the feature fusion layer is supposed to take advantages of both features to produce a fused features used for forecasting the subsequent trend Finally, the trend prediction is realized by the function f(.,.), which corresponds to the feature fusion and output layers as is shown in Figure[2\nDuring the training phase, the duration l and slope sk of each trend k in sequence T are fed intc. the LSTM layer of TreNet. Please refer to (Hochreiter & Schmidhuber1997) for more details o LSTM. When the k-th trend in T is fed to LSTM, the corresponding local raw time series data point (Xtk-W,: : , Xt) in is input to the CNN part of TreNet. CNN consists of H stacked layers of 1-c convolutional, activation and pooling operations. Each layer has a specified number of filters of : specified filter size. The output of CNN in TreNet is the concatenation of max-pooling output on the. last layer H (Donahue et al.2015).\nOur ultimate goal is to propose a neural network based approach to learn a function f(T, L) to predict the subsequent trend (l, s). In this paper, we focus on univariate time series, and the method proposed below can be naturally generalized to multivariate time series..\nFigure 2: Illustration of the hybrid architecture of TreNet. (best viewed in colour)\nThe feature fusion layer combines the representations R(T) and C(L), to form a joint feature Particularly, we first map R(T) and C(L) to the same feature space and add them together to obtair the activation of the feature fusion layer (Mao et al.|2014). The output layer is a fully-connect laye following the feature fusion layer. Mathematically, the prediction of TreNet is expressed as:\n(l,s) = f(R(T), C(L)) = Wo.$(Wr.R(T) + Wc.C(L))+b f eature fusion.\nwhere () is element-wise leaky ReLU activation function and + denotes the element-wise addi tion. Wo and bo are the weights and bias of the output layer. TreNet is trained to minimize the squared loss function of durations and slopes\nIn this section, we report some preliminary experiments to demonstrate the prospect of TreNet. For. evaluation, we compare the performance of TreNet with six baselines: CNN, LSTM, the cascade of CNN and LSTM (CLSTM)(Bashivan et al.2015), Support Vector Regression, Pattern-based Hidden Markov Model (Wang et al.2011), and Naive. For space limitation, we only report the comparison. on Power Consumption (PC) dataset. Regarding the details of experimental setup, training procedure. and a complete comparison, please refer to Section|5\nTable 1: RMSE of the prediction of trend duration and slope on each dataset\nTable[1studies the prediction performances of TreNet and baselines on PC data, and more compari son can be found in Section5] We can observe that TreNet consistently outperforms baselines on the duration and slope prediction by achieving around 30% less errors at maximum. It verifies that the. hybrid architecture of TreNet can improve the performance by utilizing the information captured by both CNN and LSTM. Specifically, pHMM method performs worse due to the limited representa tion capability of HMM. On the slope prediction, SVR based approaches can get comparable results. 
as TreNet."}, {"section_index": "5", "section_name": "4 CONCLUSION", "section_text": "Historical Trend Sequence lk, Sk -+ LSTM LSTM LSTM R. > Feature Output Feature Output Feature Output Fusion Layer Fusion Layer Fusion Layer CNN CNN CNN Local RawData\nHistorical Trend Sequence lk,Sk LSTM LSTM LSTM Feature Output Feature Output Feature Output Fusion Layer Fusion Layer Fusion Layer S CNN CNN CNN Local RawData\nIn this paper, we propose TreNet, a novel hybrid neural network to learn and predict the trend behaviour of time series The preliminary experimental results demonstrate that such a hybrid frame- work can enhance the prediction performance."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional network for learning video representations. arXiv preprint arXiv:1511.06432, 2015.\nYukun Bao, Tao Xiong, and Zhongyi Hu. Multi-step-ahead time series prediction using multiple output support vector regression. Neurocomputing, 129:482-493, 2014.\nJunyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. 2014\nJeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venu gopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual. recognition and description. In Proceedings of the IEEE Conference on Computer Vision and. Pattern Recognition, pp. 2625-2634, 2015.\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8) 1735-1780, 1997.\nZachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzell. Learning to diagnose with 1stm recurrent neural networks. arXiv preprint arXiv:1511.03677. 2015.\nPouya Bashivan, Irina Rish, Mohammed Yeasin, and Noel Codella. Learning representations from eeg with deep recurrent-convolutional neural networks. arXiv preprint arXiv:1511.06448, 2015.\nAndrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei. Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. pp. 1725-1732. 2014\nJiajun Liu, Kun Zhao, Brano Kusy, Ji-rong Wen, and Raja Jurdak. Temporal embedding in convolu-. tional neural networks for robust learning of abstract snippets. arXiv preprint arXiv:1502.05113,. 2015. Qi Lyu and Jun Zhu. Revisit long short-term memory: An optimization perspective. In Advances in neural information processing systems workshop on deep Learning and representation Learning.. 2014. Pankaj Malhotra, Lovekesh Vig, Gautam Shroff, and Puneet Agarwal. Long short term memory. networks for anomaly detection in time series. In European Symposium on Artificial Neural. Networks, volume 23, 2015.\nJunhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. Deep captioning wit multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632, 2014.\nIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks In Advances in neural information processing systems, pp. 3104-3112, 2014\nJiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. Cnn-rnn: A unifie. framework for multi-label image classification. arXiv preprint arXiv:1604.04573, 2016\nSouhaib Ben Taieb and Amir F Atiya. 
A bias and variance analysis for multistep-ahead time series\nHai-xiang Zhao and Frederic Magoules. A review on the prediction of building energy consumption Renewable and Sustainable Energy Reviews, 16(6):3586-3592, 2012\nIn this section, we first introduce some related work and then report all omitted experimental details including the information of three datasets, the description of baselines, training procedure and the. complete performance evaluation on three datasets.."}, {"section_index": "7", "section_name": "5.1 RELATED WORK", "section_text": "Traditional learning approaches over trends of time series mainly make use of Hidden Markov Mod. els (HMMs) (Wang et al.]2011} Matsubara et al.]2014).HMMs maintain short-term state de pendences, i.e., the memoryless Markov property and predefined number of states, which require.. significant task specific knowledge. RNNs instead use high dimensional, distributed hidden state. that could take into account long-term dependencies in sequence data. Multi-step ahead predictioi. is another way to realize trend prediction by fitting the predicted values to estimate the trend. How ever, multi-step ahead prediction is a non-trivial problem itself (Chang et al.|[2012) suffers from the. accumulative prediction errors (Taieb & Atiya|2016]Bao et al.]2014). In this paper, we concentrate on training neural networks over time series to learn the trend..\nRNNs have recently shown promising results in a variety of applications, especially when there exist sequential dependencies in data (Lyu & Zhu]2014]Chung et al.]2014] Sutskever et al. 2014). Long short-term memory (LSTM) (Hochreiter & Schmidhuber1997]Lyu & Zhu2014} Chung et al.]2014), a class of recurrent neural networks with sophisticated recurrent hidden and gated units, are particularly successful and popular due to its ability to learn hidden long-term sequential dependencies. (Lipton et al.]2015) uses LSTMs to recognize patterns in multivariate time series especially for multi-label classification of diagnoses. (Chauhan & Vig]2015) Malhotra et al.[|2015 evaluate the ability of LSTMs to detect anomalies in ECG time series. CNN is often used to learn effective representation of local salience from raw data (Vinyals et al.]2015]Donahue et al.]2015 Karpathy et al.[2014).(Hammerla et al.]2016] Yang et al.]2015] Lea et al.[2016) make use of CNNs to extract features from raw time series data for activity/action recognition. (Liu et al.]2015 focuses on the prediction of periodical time series values by using CNN and embedding time series with the potential neighbors in the temporal domain.\nHybrid neural networks, which combines the strengths of various neural networks, are receiving in- creasing interest in the computer vision domain, such as image captioning (Mao et al.||2014]|Vinyals et al.2015, Donahue et al.]2015), image classification (Wang et al.2016), protein structure pre- diction (Li & Yu]2016), action recognition (Ballas et al.[2015] Donahue et al.]2015) and so on But efficient exploitation of such hybrid architectures has not been well studied for time series data. especially the trend forecasting problem. (Li & Yu]2016f|Ballas et al.[2015) utilize CNNs over im- ages in cascade of RNNs in order to capture the temporal features for classification. (Bashivan et al. 2015) transforms EEG data into a sequence of topology-preserving multi-spectral images and then trains a cascaded convolutional-recurrent network over such images for EEG classification. 
(Wang et al.]2016] Mao et al.]2014) propose the CNN-RNN framework to learn a shared representation for image captioning and classification problems.\nDataset: We test our method and baselines on three real time series datasets\n1 https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption 2 https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures\nPower Consumption (PC). This datasef'|contains measurements of electric power con- sumption in one household with a one-minute sampling rate over a period of almost 4. years. Different electrical quantities and some sub-metering values are available. We use. the voltage time series throughout the experiments.. Gas Sensor (GasSensor). This datase(? contains the recordings of chemical sensors ex-. posed to dynamic gas mixtures at varying concentrations. The measurement was con-. structed by the continuous acquisition of the sensor array signals for a duration of about 12.\nBaselines: We compare TreNet with the following six baselines:\nEvaluation metric: We evaluate the predictive performance of TreNet and baselines in terms of Root Mean Square Error (RMSE). The lower the RMSE, the more accurate the predictions.\nTraining: In TreNet, CNN has two stacked convolutional layers, which have 32 filters of size 2 and 4. The number of memory cells in LSTM is 600. In addition to the learning rate, the number of neurons in the feature fusion layer is chosen from the range {300, 600, 900, 1200} to achieve the best performance. The window size in TreNet is chosen by cross validation. We use dropout and L2 regularization to control the capacity of neural networks to prevent overfitting, and set the values to 0.5 and 5 10-4 respectively for all datasets (Mao et al.|2014). The Adam optimizer (Kingma & Ba] 2014) is chosen to learn the weights in neural networks. Regarding the SVR based approaches, we carefully tune the parameters c (error penalty), d (degree of kernel function), and y (kernel coef ficient) for kernels. Each parameter is selected from the sets c E {10-5, 10-4, .., 1, . .., 104, 105} d E {1, 2, 3}, y E {10-5, 10-4, ..., 105} respectively.\nTable[2|continues the evaluation in Table[1and more thoroughly studies the prediction performance of TreNet and baselines.\nhours without interruption. We mainly use the gas mixture time series regarding Ethylene. and Methane in air. Stock Transaction (Stock): This dataset is extracted from Yahoo Finance and contains the daily stock transaction information in New York Stock Exchange from 1950-10 to 2016-4..\nFor the ease of experimental result interpretation, the slope of the trends is represented by the angle of the corresponding linear function and thus in a bounded value range -90, 90]. The duration of trends is measured by the number of data points within the trend..\nCNN. This baseline method predicts the trend by using CNN over the raw data of time. series. LSTM. This method uses LSTM to learn dependencies in the trend sequence T and pre. dicts the trend. ConvNet+LSTM(CLSTM). It is based on the cascade structure of ConvNet and LSTM in (Bashivan et al.[[2015) which feeds the features learnt by ConvNet over time series to a LSTM and obtains the prediction from the LSTM. Support Vector Regression (SVR). A family of support vector regression based ap. proaches with different kernel methods is used for the trend forecasting. We consider three. commonly used kernels (Liu et al.2015), i.e., Radial Basis kernel (SVRBF), Polynomial. 
kernel (SVPOLY), Sigmoid kernel (SVSIG). The trend sequence and the corresponding set of local time series data are concatenated as the input features to such SVR approaches.
Pattern-based Hidden Markov Model (pHMM). (Wang et al., 2011) proposed a pattern-based hidden Markov model (HMM), which segments the time series and models the dependency in segments via an HMM. The derived HMM model is used to predict the state of the time series and then to estimate the trend.
Naive. This is the naive approach which takes the duration and slope of the last trend as the prediction for the next one.

Dataset     Model    RMSE @ Duration    RMSE @ Slope
PC          CNN      27.51              13.56
            LSTM     27.27              13.27
            CLSTM    25.97              13.77
            SVRBF    31.81              12.94
            SVPOLY   31.81              12.93
            SVSIG    31.80              12.93
            pHMM     34.06              26.00
            Naive    39.68              21.17
            TreNet   25.89              12.89
Stock       CNN      18.87              12.78
            LSTM     11.07              8.40
            CLSTM    9.26               7.31
            SVRBF    11.38              7.40
            SVPOLY   11.40              7.42
            SVSIG    11.49              7.41
            pHMM     36.37              8.70
            Naive    11.36              8.58
            TreNet   8.86               6.84
GasSensor   CNN      53.99              11.51
            LSTM     55.77              11.22
            CLSTM    54.20              14.86
            SVRBF    62.81              10.21
            SVPOLY   70.91              10.95
            SVSIG    85.69              11.92
            pHMM     111.62             13.07
            Naive    53.76              10.57
            TreNet   52.28              9.57

Table 2: RMSE of the prediction of local trend duration and slope on each dataset"}]
HJOuEn7Fx | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "High dimensionality and large amounts of data pose a challenge for data analysis in many fields Variables which are often redundant can typically be compressed into more compact representations without a significant loss of information. While many modern methods try to overcome the \"curse of dimensionality\"' and enable machine learning with high-dimensional data, good feature selection and extraction still can lead to significant improvements in many cases. The biggest challenge with representation learning methods is to ensure that the resulting features are sufficiently generic, i.e. that they are applicable to many different tasks. This work presents an early evaluation of t-SNE and CAE-based methods on real-world data from the automotive domain.\nThis research investigates an unsupervised non-linear mapping of high-dimensional data fro. on-board truck sensors, collected in the form of bivariate histograms, to a low-dimensional (2D, 3D. or 10D) representation. Solutions based on flattening out the histogram bins into the feature vector. as well as those that preserve spatial proximity of the bins, are evaluated. Since the aim is to obtair a generic representation, the investigated methods are assessed by measuring the performance over. a multitude of supervised machine learning tasks.."}, {"section_index": "1", "section_name": "2 METHODS", "section_text": "Convolutional autoencoder is an unsupervised learning algorithm that trains weights of a neural network so that the computed output is as similar as possible to the provided input. Stacked layers have symmetric sizes and the smallest, middle-most layer, known as bottleneck, can be exploited fol"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "This work presents evaluation of several approaches for unsupervised mapping of raw sensor data from Volvo trucks into low-dimensional representation. The overall goal is to extract general features which are suitable for more than one task. Compari son of techniques based on t-distributed stochastic neighbor embedding (t-SNE) and convolutional autoencoders (CAE) is performed in a supervised fashion over 74 dif ferent 1-vs-Rest tasks using random forest. Multiple distance metrics for t-SNE and multiple architectures for CAE were considered. The results show that t-SNE is most effective for 2D and 3D, while CAE could be recommended for 10D representations Fine-tuning the best convolutional architecture improved low-dimensional repre sentation to the point where it slightly outperformed the original data representation\nt-SNE (van der Maaten & Hinton 2008) uses distance matrix as an input and computes data coordinates in low-dimensional space in a way that tries to maintain the neighborhood relationships, not necessarily the actual distance values. Distance matrix was computed using several common bin-to-bin - Euclidean, cosine, correlation, and Spearman, - and cross-bin distances - Earth mover's (Ling & Okada2007) and diffusion (Ling & Okada|2006).\ndimensionality reduction. CAE allows for learning of local features and promotes weight sharing where hidden layers are a result of convolving the input with a filter mask\nExternal evaluation was performed using random forest (Breiman. 2001), composed of 1000 unprunned CART-type trees. Detector's votes for the out-of-bag (OOB) data were converted to a soft decision through a normalized difference between class probabilities. 
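As a concrete illustration of this evaluation step, a minimal scikit-learn sketch of turning out-of-bag random-forest votes into a soft decision could look as follows (X and y are synthetic placeholders, not the truck data):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))            # placeholder features
    y = (X[:, 0] > 0).astype(int)             # placeholder 1-vs-Rest labels

    rf = RandomForestClassifier(n_estimators=1000, oob_score=True, n_jobs=-1)
    rf.fit(X, y)
    p = rf.oob_decision_function_             # per-class out-of-bag vote probabilities
    soft_score = p[:, 1] - p[:, 0]            # normalized difference -> soft decision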
The average cost of log-likelihood-ratio, Cllr (Brummer & de Villiers, 2013), over 74 different detection tasks was used to measure how good the investigated representation is."}, {"section_index": "3", "section_name": "3 EXPERIMENTS", "section_text": "Data originates from 79974 unique Volvo trucks, and is recorded during a full year. The data of a single truck is represented with a bivariate histogram, where the axes correspond to a pair of sensors: turbocharger speed vs boost pressure. The original matrices with 9 x 10 dimensions were, for CAE experiments, zero-padded into 12 x 12. Discrete bin values of absolute frequency were converted into continuous values of relative frequency within the [0, 1] range through division by the overall sum of counts. An example of the resulting bivariate histogram for a typical truck is shown in Fig. 1 (left).

A validation sample of 7997 trucks was selected by a conditioned Latin hypercube method (Minasny & McBratney, 2006) from the original data using age-based stratification. An example result of such sampling is shown in Fig. 1 (right), visualized using t-SNE. The 71977 trucks remaining after this split were used for training of CAE-based methods. Notably, there is no need for training data in the case of methods based on t-SNE. Finally, the CAE-based fine-tuning was done using another, independent test sample of 7198 trucks, selected using the same hypercube method from this training set of 71977 trucks.

Supervised external evaluation and comparison of various methods was performed using a number of labels describing various truck configurations. The overall goal is not to find the best low-dimensional representation tailored to a very specific task, but rather to identify the method for learning a widely applicable representation. 74 different 1-vs-Rest detection tasks were devised from the following label groups (the number of categories within each group in parenthesis): engine (15), gearbox (9), chassis type (8), model (6), marketing type (6), fuel capacity (6), country of operation (5), model name (5), product class number (5), emission level (3), country (2), truck type (2), and brand type (2).

Experimental setup. For t-SNE, the perplexity parameter was set to 15 and the number of iterations was 1000. For CAE, robust internal representations were enforced by dropout (Srivastava et al., 2014) (rate = 0.5) and elastic net activity regularization (penalties L1 = L2 = 0.00001) in the bottleneck. The activation function was hyperbolic tangent; 8 and 32 filters of 3 x 3 mask size with one and two convolutional layers were investigated, as well as classical (Masci et al., 2011) and variational (Kingma & Welling, 2014) CAE architectures, in the Keras framework using the NADAM (Dozat, 2016) optimizer.

Figure 1: Truck sensor data, using original representation of relative frequency bivariate histogram (left). Data visualization by t-SNE: full (middle) and validation sample (right), where age is the total hours of individual truck usage (corresponding to the overall sum of bin counts).
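To make the setup concrete, here is a minimal Keras sketch of the classical one-layer, 32-filter CAE variant with the values quoted above (12 x 12 zero-padded input, 10-D bottleneck, tanh activations, dropout rate 0.5 and l1_l2(1e-5) activity regularisation on the bottleneck, NADAM optimizer); the dense decoder and the mean-squared-error loss are simplifying assumptions, not taken from the text:

    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    inp = keras.Input(shape=(12, 12, 1))                      # zero-padded histogram
    h = layers.Conv2D(32, (3, 3), activation="tanh", padding="same")(inp)
    h = layers.Flatten()(h)
    h = layers.Dropout(0.5)(h)
    code = layers.Dense(10, activation="tanh",
                        activity_regularizer=regularizers.l1_l2(1e-5, 1e-5))(h)
    h = layers.Dense(12 * 12, activation="tanh")(code)
    out = layers.Reshape((12, 12, 1))(h)

    cae = keras.Model(inp, out)
    cae.compile(optimizer=keras.optimizers.Nadam(), loss="mse")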
Fine-tuning. The encoding part of the best performing CAE was used to initialize the weights of a discriminative convolutional neural network. Two types of architectures for classification, connected to the bottleneck, were tested: simple (using a single densely connected layer with softmax activations, as in logit) and complex (a 2-layer perceptron with 100 rectified linear units in the hidden layer and softmax activations in the output).

Experiments using the methods outlined above were done with the original data, as well as with a sparse matrix capturing the deviation of an individual truck from the average operation of all vehicles, obtained from robust PCA (RPCA) (Aravkin et al., 2014)."}, {"section_index": "4", "section_name": "3.1 RESULTS", "section_text": "Results of selected methods are compared in Fig. 2. The selection was based on evaluation using the validation sample for 74 detection tasks in each target dimensionality (2D, 3D and 10D), both for the original and RPCA-transformed data. The intervals around the average Cllr (or its rank) are such that two results being compared are significantly different if the intervals are disjoint, and are not significantly different if the intervals overlap. None of the unsupervised methods was able to improve over the initial 90-dimensional data, but similar performance could be achieved after CAE-based complex fine-tuning.

The scatter plot on Fig. 2 (right) demonstrates how Cllr values vary for different detection tasks among the three selected methods. Each point corresponds to a single 1-vs-Rest labeling goodness-of-detection in minimal Cllr obtained using original data versus its low-dimensional representation: 3D from diffusion t-SNE, 10D from CAE, and 10D after complex fine-tuning.

To summarize, t-SNE was found to be suitable for 2D and 3D representations, but for the 10D representation t-SNE was outperformed by CAE. Preprocessing by RPCA did not provide any significant improvement for the methods analyzed. Of the various possible distances for t-SNE, Euclidean seems to be rather sufficient. Diffusion distance, although better than Earth mover's, provides performance similar to Euclidean. The classical CAE with 1 layer and 32 filters proved to be the best for the 10D representation, and complex fine-tuning allowed this representation to outperform the original slightly.

One promising direction of future research concerns using other pairs of sensors to obtain a more comprehensive comparison of the methods. Combinations of multiple bivariate histograms could also be jointly compressed using CAE, by treating sensor pairs as separate channels, similarly to RGB in color images. Another idea is to exploit the "repeated-measures" aspect of historical information due to regular reporting of on-board sensor data, which could help to find effective representations with regard to temporal evolution and not only a single snapshot.

Comparison of dimensionality reduction methods for bivariate histograms revealed that Euclidean or diffusion distance-based t-SNE is useful for visualization purposes (i.e. for producing 2D or 3D representations), while the classical 1-layer, 32-filter CAE is useful for learning a more generic representation. The low-dimensional CAE-based representation after supervised fine-tuning was able to outperform the original representation slightly, but non-significantly, in various detection tasks. Bivariate histograms can be effectively compressed into a universal low-dimensional representation, which can be further adapted to the supervised task at hand to achieve the discriminatory power of the original representation.
[Figure 2 panels, plots omitted: mean of Cllr (left), mean rank of Cllr (middle), Cllr of the original representation versus the low-dimensional representations (right).]

Figure 2: Results of the multiple comparisons procedure (using Tukey's HSD criterion with 95% confidence) for detection performance: parametric repeated-measures ANOVA (left) and the non-parametric Friedman test (middle). Of the 10 presented methods, the top 6 lines use original data while the bottom 4 lines use RPCA-transformed data. The best result is denoted by an asterisk (*), results similar to the best one by a cross (x) sign, and statistically significantly worse results are denoted by a plus (+) sign. The scatter plot (right) reveals performance for the 3 methods in each of the 74 detection tasks.

Timothy Dozat. Incorporating Nesterov momentum into Adam. In Proceedings of the 4th International Conference on Learning Representations (ICLR), pp. 1-4, San Juan, Puerto Rico, May 2016.

Andrew E. Johnson and Martial Hebert. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):433-449, May 1999. ISSN 0162-8828. doi: 10.1109/34.765655.

Laurens van der Maaten and Geoffrey Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, November 2008."}]
rknkNR7Ke | [{"section_index": "0", "section_name": "ABSTRACT", "section_text": "We propose a framework for training multiple neural networks simultaneously. The parameters from all models are regularised by the tensor trace norm, so that each neural network is encouraged to reuse the others' parameters if possible -- this is the main motivation behind multi-task learning. In contrast to many deep multi-task learning models, we do not predefine a parameter sharing strategy by specifying which layers have tied parameters. Instead, our framework considers sharing for all shareable layers, and the sharing strategy is learned in a data-driven way.

Multi-task learning (MTL) (Caruana, 1997) aims to learn multiple tasks jointly, so that knowledge obtained from one task can be reused by others. We first briefly review some studies in this area.

Matrix-based Multi-Task Learning  Matrix-based MTL is usually built on linear models, i.e., each task is parameterised by a D-dimensional weight vector w, and the model prediction is y^ = x . w = x^T w, where x is a D-dimensional feature vector representing an instance. The objective is to minimise the empirical risk accumulated over all tasks, \sum_{i=1}^{T} \sum_{j=1}^{N^{(i)}} \ell(\hat{y}_j^{(i)}, y_j^{(i)}), where \ell(\hat{y}, y) is a loss function of the true label y and predicted label \hat{y}, T is the number of tasks, and for the i-th task there are N^{(i)} training instances. Assuming the dimensionality of every task's features is the same, the models -- the w^{(i)}s -- are of the same size. Then the collection of w^{(i)}s forms a D x T matrix W, of which the i-th column is the linear model for the i-th task. To achieve MTL, we exploit a regulariser \Omega(W) that couples the learning problems, typically by encouraging W to be a low-rank matrix. Some choices include the \ell_{2,1} norm (Argyriou et al., 2008) and the trace norm (Ji & Ye, 2009). An alternative approach (Kumar & Daume III, 2012) is to explicitly formulate W as a low-rank matrix, i.e., W = LS, where L is a D x K matrix and S is a K x T matrix, with K < min(D, T) as a hyper-parameter (the matrix rank).

Tensor-based Multi-Task Learning  In the classic MTL setting, each task is indexed by a single factor. But in many real-world problems, tasks are indexed by multiple factors. For example, to build a restaurant recommendation system, we want a regression model that predicts the scores for different aspects (food quality, environment) by different customers. Then the task is indexed by aspects x customers. The collection of linear models for all tasks is then a 3-way tensor W of size D x T_1 x T_2, where T_1 and T_2 are the number of aspects and customers respectively. Consequently, \Omega(W) has to be a tensor regulariser (Tomioka et al., 2010), for example, the sum of the trace norms on all matriciations^1 (Romera-Paredes et al., 2013), or the scaled latent trace norm (Wimalawarne et al., 2014). An alternative solution is to concatenate the one-hot encodings of the two task factors and feed it as input into a two-branch neural network model (Yang & Hospedales, 2015).

Multi-Task Learning for Neural Networks  With the success of deep learning, many studies have investigated deep multi-task learning. Zhang et al. (2014) use a convolutional neural network to find facial landmarks as well as recognise face attributes (e.g., emotions). Liu et al. (2015) propose a neural network for query classification and information retrieval (ranking for web search). A key commonality of these studies is that they use a predefined sharing strategy.
^1 Matriciation is also known as tensor unfolding or flattening."}, {"section_index": "1", "section_name": "TRACE NORM REGULARISED DEEP MULTI-TASK LEARNING", "section_text": "A typical design is to use the same parameters for the bottom layers of the deep neural network and task-specific parameters for the top layers. This kind of architecture can be traced back to the 2000s (Bakker & Heskes, 2003). However, modern neural network architectures contain a large number of layers, which makes the decision of 'at which layer to split the neural network for different tasks?' extremely hard."}, {"section_index": "2", "section_name": "2 METHODOLOGY", "section_text": "Instead of predefining a parameter sharing strategy, we propose the following framework: for T tasks, each is modelled by a neural network of the same architecture. We collect the parameters in a layer-wise fashion, and put a tensor norm on every collection. We illustrate the idea by a simple example: assume that we have T = 2 tasks, and each is modelled by a 4-layer convolutional neural network (CNN). The CNN architecture is: (1) convolutional layer ('conv1') of size 5 x 5 x 3 x 32, (2) 'conv2' of size 3 x 3 x 32 x 64, (3) fully-connected layer ('fc1') of size 256 x 256, (4) fully-connected layer 'fc2'(1) of size 256 x 10 for the first task and fully-connected layer 'fc2'(2) of size 256 x 20 for the second task. Since the two tasks have different numbers of outputs, the potentially shareable layers are 'conv1', 'conv2', and 'fc1', excluding the final layer of different dimensionality.

For single task learning, the parameters are 'conv1'(1), 'conv2'(1), 'fc1'(1), and 'fc2'(1) for the first task; 'conv1'(2), 'conv2'(2), 'fc1'(2), and 'fc2'(2) for the second task. We can see that there is not any parameter sharing between these two tasks. In one possible predefined deep MTL architecture, the parameters could be 'conv1', 'conv2', 'fc1'(1), and 'fc2'(1) for the first task; 'conv1', 'conv2', 'fc1'(2), and 'fc2'(2) for the second task, i.e., the first and second layer are fully shared in this case. For our proposed method, the parameter setting is the same as in the single task learning mode, but we put three tensor norms on the stacked {'conv1'(1), 'conv1'(2)} (a tensor of size 5 x 5 x 3 x 32 x 2), the stacked {'conv2'(1), 'conv2'(2)} (a tensor of size 3 x 3 x 32 x 64 x 2), and the stacked {'fc1'(1), 'fc1'(2)} (a tensor of size 256 x 256 x 2) respectively.
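To make the layer-wise stacking concrete, a minimal numpy sketch stacks the two tasks' 'fc1' weights along a new task axis and measures the trace norm of the resulting last-axis flattening, the quantity penalised by the norms defined next (the random weights are placeholders, not the released implementation):

    import numpy as np

    W1 = np.random.randn(256, 256)          # 'fc1' of task 1
    W2 = np.random.randn(256, 256)          # 'fc1' of task 2
    W = np.stack([W1, W2], axis=-1)         # stacked tensor, shape (256, 256, 2)

    W_flat = W.reshape(-1, W.shape[-1])     # flatten all but the task axis: (65536, 2)
    trace_norm = np.linalg.svd(W_flat, compute_uv=False).sum()

During training, a weighted sum of such trace norms over the shareable layers is simply added to the classification loss.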
Tensor Norm  We choose to use the trace norm, the sum of a matrix's singular values, ||X||_* = \sum_i \sigma_i. It has the nice property of being the tightest convex relaxation of matrix rank (Recht et al., 2010). When directly restricting the rank of a matrix is challenging, the trace norm serves as a good proxy. The extension of the trace norm from matrices to tensors is not unique, just like tensor rank has multiple definitions. How to define tensor rank depends on how we assume the tensor is factorised, e.g., by Tucker (Tucker, 1966) or Tensor-Train (Oseledets, 2011) decompositions. We propose three tensor trace norm designs here, which correspond to three variants of the proposed method.

For an N-way tensor W of size D_1 x ... x D_N, we define

||W||_* := \gamma ||W_{(N)}||_*    (Tensor Trace Norm: LAF)

where W_{(i)} := reshape(permute(W, [i, 1, ..., i-1, i+1, ..., N]), [D_i, \prod_{l != i} D_l]) is the mode-i tensor flattening. This is the simplest definition. Given that in our framework the last axis of the tensor indexes the tasks, i.e., D_N = T, it is the most straightforward way to adapt the technique of matrix-based MTL -- reshape the D_1 x D_2 x ... x T tensor to a (D_1 D_2 ...) x T matrix.

To advance, we define two kinds of tensor trace norm that are closely connected with the Tucker-rank (obtained by Tucker decomposition) and the TT-rank (obtained by Tensor-Train decomposition):

||W||_* := \sum_{i=1}^{N} \gamma_i ||W_{(i)}||_*    (Tensor Trace Norm: Tucker)

||W||_* := \sum_{i=1}^{N-1} \gamma_i ||W_{[i]}||_*    (Tensor Trace Norm: TT)

Here W_{[i]} is yet another way to unfold the tensor, which is obtained by W_{[i]} := reshape(W, [D_1 D_2 ... D_i, D_{i+1} D_{i+2} ... D_N]). It is interesting to note that, unlike LAF, Tucker and TT also encourage within-task parameter sharing, e.g., sharing across filters in a neural network context.

Optimisation  Using gradient-based methods for optimisation involving the trace norm is not a common choice, as there are better solutions based on semi-definite programming or proximal gradients, since the trace norm is essentially non-differentiable. However, deep neural networks are usually trained by gradient descent, and we prefer to keep the standard training process. Therefore we use the (sub-)gradient \partial ||X||_* / \partial X = X (X^T X)^{-1/2}. A more numerically stable method, instead of computing the inverse matrix square root, is X (X^T X)^{-1/2} = U V^T, where U and V are obtained from the SVD X = U \Sigma V^T (Watson, 1992).
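A minimal numpy rendering of that stabilised (sub-)gradient, as a sketch:

    import numpy as np

    def trace_norm_subgradient(X):
        # U V^T from the thin SVD X = U diag(s) V^T, as described above
        U, _, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ Vt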
Figure 1: Top-left: Testing accuracy. Top-mid: Training loss. Top-right: Sharing strength by layer. Bottom: Norms when optimising LAF (left), Tucker (middle), TT (right)."}, {"section_index": "3", "section_name": "3 EXPERIMENT", "section_text": "Our method is implemented in TensorFlow (Abadi et al., 2015), and released on GitHub^2. We experiment on the Omniglot dataset (Lake et al., 2015). Omniglot contains handwritten letters in 50 different alphabets (e.g., Cyrillic, Korean, Tengwar), each with its own number of unique characters (14 ~ 55). In total, there are 1623 unique characters, each with 20 instances. Each task is a multi-class character recognition problem for the corresponding alphabet. The images are monochrome, of size 105 x 105. We design a CNN with 3 convolutional and 2 FC layers. The first conv layer has 8 filters of size 5 x 5; the second conv layer has 12 filters of size 3 x 3, and the third convolutional layer has 16 filters of size 3 x 3. Each convolutional layer is followed by 2 x 2 max-pooling. The first FC layer has 64 neurons, and the second FC layer has size corresponding to the number of unique classes in the alphabet. The activation function is tanh. We compare the three variants of the proposed framework -- LAF (Eq. 1), Tucker (Eq. 2), and TT (Eq. 3) -- with single task learning (STL). For every layer, there are one (LAF) or more (Tucker and TT) \gamma that control the trade-off between the classification loss (cross-entropy) and the trace norm terms, for which we set all \gamma = 0.01.

The experiments are repeated 10 times, and every time 10% training data and 90% testing data are randomly selected. We plot the change of the cross-entropy loss on the training set and the values of the norm terms as the neural networks' parameters update. As we can see in Fig. 1, STL has the lowest training loss, but the worst testing performance, suggesting over-fitting. Our methods alleviate the problem with multi-task regularisation. We also roughly estimate the strength of parameter sharing at each layer, and find that the bottom layers are shared more strongly compared to the top ones. This reflects the common design intuition that the bottom layers are more data/task independent. Finally, it appears that the choice of LAF, Tucker, or TT may not be very sensitive, as we observe that when optimising one, the loss of the other norms still reduces.

This technique provides a data-driven solution to the branching architecture design problem in deep multi-task learning. It is a flexible norm regulariser-based alternative to explicit factorisation-based approaches to the same problem (Yang & Hospedales, 2017)."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Bart Bakker and Tom Heskes. Task clustering and gating for Bayesian multitask learning. Journal of Machine Learning Research (JMLR), 2003.

Rich Caruana. Multitask learning. Machine Learning, 1997.

Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In NAACL, 2015.

I. V. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 2011.

Bernardino Romera-Paredes, Hane Aung, Nadia Bianchi-Berthouze, and Massimiliano Pontil. Multilinear multitask learning. In International Conference on Machine Learning (ICML), 2013.

L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 1966.

Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.

Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 2010.

G. A. Watson. Characterization of the subdifferential of some matrix norms. Linear Algebra and its Applications, 170:33-45, 1992. ISSN 0024-3795. doi: 10.1016/0024-3795(92)90407-2. URL http://www.sciencedirect.com/science/article/pii/0024379592904072.

Kishan Wimalawarne, Masashi Sugiyama, and Ryota Tomioka. Multitask learning meets tensor factorization: task imputation via convex optimization. In Neural Information Processing Systems (NIPS), 2014.

Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang. Facial landmark detection by deep multi-task learning. In European Conference on Computer Vision (ECCV), 2014."}]
r1PyAP4Yl | [{"section_index": "0", "section_name": "NEURAL CLUSTERING: CONCATENATING LAYERS FOR BETTER PROJECTIONS", "section_text": "Sean Saito & Robby T. Tan\nYale-NUS College\nSingapore, 138533\nEffective clustering can be achieved by mapping the input to an embedded space rather than clustering on the raw data itself. However, there is limited focus on unsupervised transformation methods that improve clustering and classification accuracies. In this paper, we introduce Neural Clustering'I a simple yet effective unsupervised model to project data onto an embedded space where intermediate layers of a deep autoencoder are concatenated to generate high-dimensional rep resentations. Optimization of the autoencoder via reconstruction error allows the layers in the network to learn semantic representations of different classes of data We then use the k-NN algorithm to classify the projected points. Our experimen tal results yield significant improvements on other models and a robustness across different kinds of datasets."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Clustering is a fundamental approach to unsupervised learning and is also used extensively for dat: visualization and analysis (Aggarwal and Reddy(2013)). In many cases, a given input domain is projected onto some embedded space for more accurate classification. It is an extensively studied field, with various methods for grouping data points. The k-NN algorithm, for example, takes the. k nearest neighbors of a query node based on Euclidean distance to assign a classification. Many. of these methods depend on the \"quality\" of the embedded space; that is, the input domain mus be transformed to a target space that provides greater separation of different classes and close projection of similar data. Clustering on the pixels themselves is naive and hence an effective trans formation is necessary. Hence the question arises: what kind of transformations achieve this? Wha kind of models can learn complex representations of the data? Unfortunately, research that address. these questions are limited.\nRecent work in this field involves deep embedded clustering (DEC) which uses autoencoders as non-linear mappings to the embedded domain (Xie et al.(2016)). This clustering method minimizes Kullback-Leibler (KL) divergence and uses the k-means algorithm to approximate k centroids gen erated by the encoded layer of the autoencoder, where k is the number of categories. However, this model requires the end-user to specify the number of centroids, k, and the target distribution p(x that is used to calculate KL divergence. This constitutes a naive approach, for it is possible for the optimal embedded distribution of a dataset to exceed k clusters. Moreover, the ideal target distribu tion is likely to differ depending on the data and hence is difficult to be determined heuristically.\nIn this paper, we introduce Neural Clustering, a simple unsupervised method that yields better clas- sification results. Rather than using just the central encoded layer of an autoencoder to generate embeddings, we concatenate the learned representations of all intermediate layers. The training of our model only consists of one stage - the optimization of the autoencoder itself. We then classify the embedded points using the k-NN algorithm. 
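As a minimal scikit-learn sketch of that final classification step (with random placeholder embeddings and labels; k = 3 as used in the experiments reported below):

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    Z = rng.normal(size=(1000, 32))           # placeholder embedded points
    y = rng.integers(0, 10, size=1000)        # placeholder class labels

    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(Z[:800], y[:800])
    accuracy = knn.score(Z[800:], y[800:])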
The main idea is to use the features learned by each neural network layer to generate a combined representation which can be used for effective cluster analysis and classification.

^1 Implementation will be made publicly available at https://github.com/seansaito"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Our clustering model consists of two stages: the training stage, which involves optimizing an autoencoder, and the representation stage, whereby we extract the features learned by the layers of the network to generate a descriptor."}, {"section_index": "3", "section_name": "2.1 AUTOENCODER", "section_text": "An autoencoder is a type of deep neural network whose task is to find mappings for reconstructing the input domain. It is extensively studied in the unsupervised learning domain and has a variety of applications, such as denoising images (Vincent et al. (2010)). The architecture consists of two main components, the encoder and the decoder. The encoder learns a deterministic non-linear mapping that transforms the input to some lower dimensional representation. The decoder aims to find the inverse mapping. The autoencoder is trained with backpropagation to minimize the distance between the input and the decoder output:

E(x, \theta) = (1/n) \sum_{n} (x_n - g_\theta(f_\theta(x_n)))^2

where f_\theta and g_\theta are the encoder and decoder transformations with learnable parameters \theta. Moreover, for 2-dimensional data such as images, we employ convolutional filters rather than vanilla deep neural networks. This is inspired by recent work that has significantly improved the benchmark for various image classification tasks using convolutional neural networks (LeCun et al. (1998); Krizhevsky et al. (2012); Girshick et al. (2014)). Using convolutional filters allows the autoencoder to identify local patches of spatial features. Certain visual features produce different outputs through a convolutional filter; this allows us to construct combined representations that act as discriminating descriptors."}, {"section_index": "4", "section_name": "2.2 COMBINED REPRESENTATION", "section_text": "Suppose an autoencoder with n layers of neurons between the input and the encoded layer. After the training phase of the autoencoder described above, we generate a descriptor for each input by concatenating the intermediate outputs of each layer in the network. Hence d(x), the function for generating the descriptor, can be defined as:

d(x) = (l_1(x), l_2(x), ..., l_n(x))

where l_k is the transformation applied at encoder layer k:

l_k(x) = \sigma_k(W_k l_{k-1}(x) + b_k)

Using all intermediate layers rather than the encoding alone allows the embedded space to represent more complex semantic representations. For convolutional autoencoders, we exclude subsampling layers to avoid coarse representations of specific visual features. This idea is borrowed from work on fully convolutional networks that have produced promising results in semantic segmentation (Long et al. (2015)). Thus the intuitive idea behind the combined representation is to generate high-dimensional representations of the data that increase the separation between data of different classes and hence increase clustering accuracy.

As shown in Figure 1 in the Appendix, the Euclidean distance of the combined representations is lower on average for data of the same class (the diagonals). This helps a distance-based classification algorithm such as k-NN make better classifications. We also observe that certain digits that look alike, such as 4's and 9's, have relatively lower distances, while those that are clearly dissimilar have higher distances.
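A minimal Keras sketch of extracting the combined representation d(x) from a trained autoencoder ae (the encoder layer names are hypothetical; X is a placeholder batch):

    import numpy as np
    from tensorflow import keras

    layer_names = ("enc1", "enc2", "enc3")                 # hypothetical names
    outputs = [ae.get_layer(n).output for n in layer_names]
    feature_model = keras.Model(inputs=ae.input, outputs=outputs)

    outs = feature_model.predict(X)                        # one array per layer
    d = np.concatenate([o.reshape(len(X), -1) for o in outs], axis=1)

Each intermediate activation is flattened and concatenated, yielding the high-dimensional descriptor that the distance-based classifier operates on.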
Figure|2|shows examples of how layers in the network react to different classes of data.\n1 E(x,0) xn- ge(fe(xn))) n n\nk(x) = Ok(Wklk-1(x) + 6k"}, {"section_index": "5", "section_name": "3.1 RESULTS", "section_text": "Table 1: Comparison of classification accuracies\nModel MNIST CIFAR-10 20newsgroups Deep Embedded Clustering 84.7% 18.6% 11.5% t-SNE-2+ k-NN (k = 3) 92.4% 36.6% 36.6% k-means (k = #classes) 53.5% 20.6% 19.4% k-NN(k = 3) 95.8% 50.5% 37.2% neural-clustering (ours). 96.6% 61.1% 82.8 %\nTable 2: Comparison of variations to our model\nThis paper proposes Neural Clustering, an unsupervised method for generating embedded represen. tations of data that enable effective distance-based classification. Our method does not depend or heuristics such as the number of desired centroids or the target distribution of the embedded space. Rather, optimization of an autoencoder allows each layer to learn a semantic representation of the. data. Combining these representations can generate descriptors which can help distinguish certair. categories from another\nOur results strongly support the effectiveness of this method. Not only does it produce state-of-the- art performances, it also demonstrates robustness across different types of datasets\nWe also would like to raise certain questions regarding this model. There lacks firm theoretical. grounding on why it outperforms others; there remains the question of how an autoencoder is able. to learn features that help with clustering even if it does not directly optimize clustering error. Future endeavors would attempt to address these issues as well as observe its performance across different. tasks and datasets.\nIn the testing stage, we use the k-NN algorithm to evaluate the classification accuracy of our model We conduct experiments on three datasets, namely MNIST, CIFAR-10, and 2Onewsgroups (14. classes). For MNIST and CIFAR-10, we use deep convolutional autoencoders, while for 2Onews-. groups we use vanilla deep neural networks and vectorize the input using the TF-IDF transforma tion. We compare our model performance with those of recent state-of-the-art techniques and other standard models. For the k-means algorithm, we set k as the number of classes..\nIn Table 1, we report the best performance of each algorithm. Note our method outperforms all other. models and produces state-of-the-art results. We observe significant differences for more complex. datasets with higher dimensions; this indicates a robustness to varying levels of complexity. Table 2. compares variations to our model. Given the descriptors generated from the autoencoder, we apply. t-SNE to reduce the dimension to either 2 or 3. This transformation yields faster inference, yet. results show that classifying the raw combined representations produces the best results.\nREFERENCES . C. Aggarwal and C. K. Reddy. Data clustering: algorithms and applications. Chapman a Hall/CRC, 2013 . Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate obj detection and semantic segmentation. In Proceedings of the IEEE conference on computer vis and pattern recognition, pages 580-587, 2014. A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutio neural networks. In Advances in neural information processing systems, pages 1097-1105, 20 Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to docum recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. . Long, E. Shelhamer, and T. Darrell. 
Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431-3440, 2015.
P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371-3408, 2010.
J. Xie, R. Girshick, and A. Farhadi. Unsupervised deep embedding for clustering analysis. In International Conference on Machine Learning (ICML), 2016.

APPENDIX

[Figure 1 heatmap omitted; both axes index the ten MNIST digit classes 0-9, with average distances ranging from about 9.0 to 13.0.]

Figure 1: Heatmap representing the average Euclidean distances of the combined representations by class using MNIST data.

Figure 2: Example outputs of different filters from a particular layer in the autoencoder.

Figure 3: Projection of MNIST points to 2-D and 3-D spaces. These embeddings are generated by transforming the combined representations using the t-SNE algorithm. As discussed earlier, the embedding produces more than 10 clusters."}]
rJFpDxfFl | [{"section_index": "0", "section_name": "DEEP ADVERSARIAL GAUSSIAN MIXTURE AUTO- ENCODER FOR CLUSTERING", "section_text": "Warith Harchaoui\nResearch and Development Oscaro.com Paris.\nwarith.harchaoui@oscaro.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The computer vision field has recently witnessed major progress thanks to end-to-end deep-learning. systems since the seminal work of (LeCun et al.],1990) and more recently (Krizhevsky et al.,2012) Most of the work however has been carried out in a supervised context. Our effort leverages tha wealth of existing research but for clustering in a unsupervised framework with adversarial auto. encoders (Makhzani et al., 2015) inspired by generative adversarial networks (Goodfellow et al.. 2014).\nClustering is one of the most fundamental tasks in machine learning (Duda et al.], 2012). This tasl consists in grouping similar objects together without any supervision, meaning there are no labels tc guide the grouping. The goal is to find structure in data to facilitate further analysis. It is a desirable goal in data analysis, visualization and is often a preliminary step in many algorithms for example in Computer Vision (Ponce & Forsyth,2011)\nIn this paper, the clustering is achieved in the code space through a combination of auto-encoders (Vincent et al., 2010) and a Gaussian mixture model (McLachlan & Peel, 2004). We propose an al- gorithm to perform unsupervised clustering with the adversarial auto-encoder framework that we call DAC\" for Deep Adversarial Clustering. Indeed, we postulate that a generative approach, namely a tuned Gaussian mixture model, can capture an explicit latent model that is the origin of the observed data, in the original space.\n3 DEEP ADVERSARIAL GAUSSIAN MIXTURE AUTO-ENCODERS FOR CLUSTERING\nIn our approach, the embedding step role is to make the data representation easier to classify than in the initial space. We have chosen GMM for its nice theoretical properties (Fraley & Raftery,2002 Biernacki et al., 2ooo). Moreover, it is well-known that such clustering algorithms work better ir low-dimensional settings which motivates our use of dimension reduction performed here by the auto-encoder.\nPierre-Alexandre Mattei & Charles Bouveyron\n(pierre-alexandre.mattei,charles.bouveyron @parisdescartes.fr\nFeature representation for clustering purposes consists in building an explicit or. implicit mapping of the input space onto a feature space that is easier to cluster or. classify. This paper relies upon an adversarial auto-encoder as a means of building a code space of low dimensionality suitable for clustering. We impose a tunable Gaussian mixture prior over that space allowing for a simultaneous optimization scheme. We arrive at competitive unsupervised classification results on hand-. written digits images (MNIST) that is customarily classified within a supervised framework.\nIn clustering, we have a dataset of points (x1, ..., Xi, ..., xn) where each datapoint x; lives in a D- dimensional space. First. we build an auto-encoder that consists of neural-network-based encoder 8 and decoder D parametrized by 0g and 0p respectively. The encoder maps the data points from their original space to a code d-dimensional space (d < D). The decoder D maps them back from the code space to the original one, in such a way that each datapoint x; is roughly reconstructed through the encoder and decoder: D((x)) ~ x. 
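As a minimal Keras sketch of such an encoder/decoder pair, using the D-500-500-2000-d encoder layout reported in the experiments below (the ReLU activations, Adam optimizer and squared-error loss are illustrative assumptions):

    from tensorflow import keras
    from tensorflow.keras import layers

    D, d = 784, 10                                 # input / code dimensionality

    encoder = keras.Sequential([
        layers.Dense(500, activation="relu", input_shape=(D,)),
        layers.Dense(500, activation="relu"),
        layers.Dense(2000, activation="relu"),
        layers.Dense(d),                           # code space
    ])
    decoder = keras.Sequential([
        layers.Dense(2000, activation="relu", input_shape=(d,)),
        layers.Dense(500, activation="relu"),
        layers.Dense(500, activation="relu"),
        layers.Dense(D),                           # reconstruction
    ])

    autoencoder = keras.Model(encoder.input, decoder(encoder.output))
    autoencoder.compile(optimizer="adam", loss="mse")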
The idea is that if the reconstruction is viable then we have compressed the information of each example without too much loss.

1. the traditional auto-encoder reconstruction objective, to minimize over \theta_E and \theta_D:

L_R(\theta_E, \theta_D) = \sum_{i=1}^{n} || x_i - D(E(x_i)) ||^2

Second, similarly to the work of Makhzani et al. (2015), we add an adversarial part to the system with: (i) a Gaussian-mixture-based random generator H whose proportions (\pi_k), means (\mu_k) and covariance matrices (\Sigma_k) for k = 1, ..., K are parametrized by \theta_H. An instance of such generated random vectors is noted z_i and lives in the same code d-dimensional space as above; (ii) a neural-network-based adversarial discriminator A with weights and biases parametrized by \theta_A, whose role is to continuously force the code space to follow the Gaussian mixture prior:

l_GMM(\theta_H) = \prod_{i=1}^{n} \sum_{k=1}^{K} \pi_k N(E(x_i); \mu_k, \Sigma_k)

L_GMM = log(l_GMM)

L_A = -(1/2n) ( \sum_{i=1}^{n} log(A(E(x_i))) + \sum_{i=1}^{n} log(1 - A(z_i)) )

where the prior over the code space is

p(z | \theta_H) = \sum_{k=1}^{K} \pi_k N(z; \mu_k, \Sigma_k).

Note that it is possible to generate such vectors using a multinomial random number and the Cholesky decomposition of the covariance matrices (Bishop, 2006, p. 528).

^3 Done in TensorFlow (Abadi et al., 2015) and Scikit-learn (Pedregosa et al., 2011) in Python, soon available."}, {"section_index": "4", "section_name": "4 EXPERIMENTS", "section_text": "For our empirical experiments, we used the MNIST dataset of 70000 digit images in 10 groups. Throughout the experiments, we used the same architecture as Xie et al. (2015) and Jiang et al. (2016), D-500-500-2000-d (D is the dimensionality of the input space, e.g. 784 for MNIST, and d = 10 is the dimensionality of the code space), for the encoder, for fair comparisons. Furthermore, we use a d-100-10-1 neural network architecture for the discriminator, which thus takes a code of dimension d as input and outputs one probability of coming from a real data point rather than a generated random vector.

On the first row of Fig. (1) we show the decoded GMM centroids, and each corresponds to a cluster of digits. On the other rows, we go further and further from the centroids and we can see the style of the digits becoming fancier along the vertical axis.

Figure 1: Generated digits images. From left to right, we have the ten classes found by DAC and ordered thanks to the Hungarian Algorithm. From top to bottom, we go further and further in random directions from the centroids (the first row being the decoded centroids)."}, {"section_index": "5", "section_name": "4.1 RESULTS", "section_text": "The results of DAC compare favorably to the state-of-the-art in Table (1). We also get an extra boost of accuracy thanks to an Ensemble Clustering method (Weingessel et al., 2003) that combines multiple outputs coming from multiple random initializations.

Datasets                                    MNIST-70k
DAC EC (Ensemble Clustering over 10 runs)   96.50
DAC (median accuracy over 10 runs)          94.08
VaDE* (Jiang et al., 2016)                  94.06
DEC* (Xie et al., 2015)                     84.30
GMM                                         53.73
*: Results taken from (Jiang et al., 2016)

Our experiments show that an auto-encoder dramatically improves all clustering results, as (Xie et al., 2015) and (Jiang et al., 2016) are based on auto-encoders. Furthermore, our adversarial contribution outperforms all the previous algorithms."}, {"section_index": "6", "section_name": "5 CONCLUSION", "section_text": "Within the context of the algorithm laid out above, some symbiosis does operate between clustering and non-linear embedding while preserving the reconstruction ability.
There are improvements that can be made, mainly to overcome problems in the adversarial part and the online Gaussian mixture model, and to obtain better auto-encoders."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Oscaro.com for the funding and also Vincent Delaitre and Ross Girshick for fruitful discussions."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/.

Christophe Biernacki, Gilles Celeux, and Gerard Govaert. Assessing a mixture model for clustering with the integrated completed likelihood. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(7):719-725, 2000.

Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.

Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. John Wiley & Sons, 2012.

Harold W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83-97, 1955.

Geoffrey McLachlan and David Peel. Finite Mixture Models. John Wiley & Sons, 2004.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011. URL http://scikit-learn.org/.

Jean Ponce and David Forsyth. Computer Vision: A Modern Approach. 2011.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Matthew Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(4):795-809, 2000.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371-3408, 2010.

Andreas Weingessel, Evgenia Dimitriadou, and Kurt Hornik. An ensemble method for clustering"}]
S1nFVFNYx | [{"section_index": "0", "section_name": "A SMOOTH OPTIMISATION PERSPECTIVE ON TRAINING FEEDFORWARD NEURAL NETWORKS", "section_text": "Hao Shen\nDepartment of Electrical and Computer Engineering, Technical University of Munich, Germany"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We present a smooth optimisation perspective on training multilayer Feedforward Neural Networks (FNNs) in the supervised learning setting. By characterising the critical point conditions of an FNN-based optimisation problem, we identify the conditions to eliminate local optima of the cost function. By studying the Hessian structure of the cost function at the global minima, we develop an approximate Newton FNN algorithm, which demonstrates promising convergence properties."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Despite the recent great success of deep neural networks in various applications, training a deep neural network is still among the greatest challenges in the field, cf. (Glorot & Bengio, 2010). In this abstract, we focus on the study of training Feedforward Neural Networks (FNNs) to solve supervised learning problems. One major reason for the difficulty in training an FNN is that its performance is highly dependent on various factors, such as the architecture of the FNN (Hornik, 1991; Sun et al., 2016), the specific activation function (Mhaskar & Micchelli, 1993), and the choice of error function (Falas & Stafylopatis, 1999), in a very complicated way.

The most popular FNN training algorithm is the backpropagation (BP) algorithm, cf. (Widrow & Lehr, 1990). Although the BP algorithm has the great convenience of being very simple, early works argue that the problems with the BP algorithm stem essentially from its nature as a gradient descent algorithm, cf. (Sutton, 1986). Since a cost function for training an FNN is often large-scale and highly non-convex, BP algorithms often suffer from two major problems, namely, (i) potential existence of undesired local minima, and (ii) slow convergence speed. Although BP algorithms are suspected to be sensitive to initialisations, cf. (Kolen & Pollack, 1990), recent results reported in (Goodfellow et al., 2015) suggest that modern FNN learning algorithms can overcome the problem of local optima quite conveniently. Such an observation could be explained by the previous works in (Yu, 1992; Yu & Chen, 1995; Gori & Tesi, 1992; Kawaguchi, 2016), which developed conditions on the structure of FNNs to eliminate undesired local minima. On the other hand, to deal with slow convergence speed, various modified BP algorithms have been developed, such as the momentum-based BP algorithm (Vogl et al., 1988), the conjugate gradient algorithm (Charalambous, 1992), and the BFGS algorithm (Le et al., 2011). Heuristic approximations of the Hessian matrix, such as a diagonal approximation structure (Battiti, 1992) and a block diagonal approximation structure (Wang & Lin, 1998), have also been proposed to construct approximate Newton methods. However, without a true evaluation of the Hessian, the performance of these heuristic approximations is hardly convincing."}, {"section_index": "3", "section_name": "REVISITING THE BACKPROPAGATION ALGORITHM", "section_text": "We denote by $L$ the number of layers in an FNN structure, and by $n_l$ the number of processing units in the $l$-th layer, with $l = 1, \ldots, L$. Specifically, by letting $l = 0$, we refer to the input layer. Let $\phi_{l-1} \in \mathbb{R}^{m_l}$ denote the output from the $(l-1)$-th layer, and $w_{l,k} \in \mathbb{R}^{m_l}$ the parameter vector associated with the $(l,k)$-th unit function $f_{l,k}(w_{l,k}, \phi_{l-1}) \in \mathbb{R}$ in the $l$-th layer. By stacking all unit functions together, we can define the $l$-th layer evaluation mapping as

$$F_l \colon \mathbb{R}^{m_l \times n_l} \times \mathbb{R}^{m_l} \to \mathbb{R}^{n_l}, \qquad (W_l, \phi_{l-1}) \mapsto \big[ f_{l,1}(w_{l,1}, \phi_{l-1}), \ldots, f_{l,n_l}(w_{l,n_l}, \phi_{l-1}) \big]^{\top}.$$

The overall network mapping is the composition of the layer evaluation mappings,

$$F \colon \mathcal{W} \times \mathbb{R}^{n_0} \to \mathbb{R}^{n_L}, \qquad (W, \phi_0) \mapsto F_L(W_L, \cdot) \circ \cdots \circ F_2(W_2, \cdot) \circ F_1(W_1, \phi_0),$$

and, given $T$ training inputs $x^{(i)}$, the FNN learning cost function is

$$J \colon \mathcal{W} \to \mathbb{R}, \qquad J(W) := \sum_{i=1}^{T} (E \circ F)\big(W, x^{(i)}\big),$$

where $E$ denotes the error function. The gradient of $E \circ F$ with respect to the weight matrix $W_l$ of the $l$-th layer has the form

$$\nabla (E \circ F)(W_l) = \phi_{l-1} \, \nu_l^{\top}, \qquad \nu_l \in \mathbb{R}^{n_l},$$

which is a rank-one matrix update. By exploiting the layer-wise structure of the FNN, the corresponding vector $\nu_l$ can be computed iteratively backwards from the output layer $L$. Such a backward mechanism for computing the gradient $\nabla (E \circ F)(W_l)$ is referred to as the classic BP algorithm.
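To make the backward mechanism concrete, the following NumPy sketch implements the forward pass and the rank-one gradients $\phi_{l-1}\nu_l^{\top}$ for sigmoid unit functions and a squared error. Both of these choices, and all function names, are illustrative assumptions on our part rather than the paper's prescription.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward_backward(Ws, phi0, y):
    """One training sample: cost and per-layer gradients for a sigmoid FNN.
    Ws[l] has shape (m_l, n_l) with m_l = n_{l-1}; the layer output is
    phi_l = sigmoid(Ws[l].T @ phi_{l-1}).  E is a squared error here."""
    phis = [phi0]                                    # forward pass, storing phi_l
    for W in Ws:
        phis.append(sigmoid(W.T @ phis[-1]))
    err = phis[-1] - y
    cost = 0.5 * float(err @ err)
    grads = [None] * len(Ws)                         # backward pass for nu_l
    nu = err * phis[-1] * (1.0 - phis[-1])           # nu_L in R^{n_L}
    for l in reversed(range(len(Ws))):
        grads[l] = np.outer(phis[l], nu)             # rank-one: phi_{l-1} nu_l^T
        if l > 0:
            nu = (Ws[l] @ nu) * phis[l] * (1.0 - phis[l])
    return cost, grads
```

Each `grads[l]` is exactly the outer product $\phi_{l-1}\nu_l^{\top}$, with $\nu_l$ propagated backwards from the output layer as described above.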
"}, {"section_index": "4", "section_name": "3 MAIN RESULTS", "section_text": "Characterising the critical points of the cost function $J$ leads to a parameterised linear system in the per-sample error gradients $\nabla E\big(\phi_L^{(i)}\big)$, whose coefficient matrix stacks per-sample blocks $\Phi^{(i)} \in \mathbb{R}^{N_{net} \times n_L}$,

$$P := \big[ \Phi^{(1)}, \ldots, \Phi^{(T)} \big] \in \mathbb{R}^{N_{net} \times T \cdot n_L}, \tag{7}$$

where $N_{net}$ denotes the total number of variables in the FNN. Obviously, if the rank of the matrix $P$ is equal to $T \cdot n_L$, then the trivial solution $\nabla E\big(\phi_L^{(i)}\big) = 0$ for all $i = 1, \ldots, T$ is the only solution of the parameterised linear system (7). If the error function $E$ is chosen to be strictly convex, then such a trivial zero solution corresponds to the global minimum of $E$. Hence, we present the following theorem.

Theorem 1 (Local minima free condition). Let the error function $E \colon \mathbb{R}^{n_L} \to \mathbb{R}$ be strictly convex and a global minimum $W^*$ of the FNN learning cost be reachable. If the rank of the matrix $P$ as constructed in (7) is equal to $T \cdot n_L$, i.e., $\operatorname{rank}(P) = T \cdot n_L$, then the FNN learning cost function $J$ is free of local minima.

Remark 1 (Choice of the number of NN variables). Given that the number of rows of $P$ is $N_{net}$, the theorem suggests that the total number of variables in an FNN, i.e., $N_{net}$, needs to be greater than or equal to $T \cdot n_L$.

The analysis of the Hessian is critically important for designing efficient numerical algorithms. The Hessian form of the FNN learning cost function $J$ is a bilinear operator $\mathcal{H}_J \colon \mathbb{R}^{N_{net}} \times \mathbb{R}^{N_{net}} \to \mathbb{R}$, obtained by computing the second derivative of $J$. Specifically, if a global minimum $W^*$ is reachable, the Hessian form $\mathcal{H}_J$ evaluated at $W^*$ in direction $H \in \mathcal{W}$ is computed by

$$\mathcal{H}_J(W^*)(H, H) = \operatorname{vec}(H)^{\top} \, \mathcal{H}(W^*) \, \operatorname{vec}(H), \quad \text{with} \quad \mathcal{H}(W^*) := \sum_{i=1}^{T} \Phi^{*(i)} \, Q \, {\Phi^{*(i)}}^{\top} \in \mathbb{R}^{N_{net} \times N_{net}}, \tag{8}$$

where $Q \in \mathbb{R}^{n_L \times n_L}$ denotes the Hessian of the error function $E$ at the network output.

Remark 2. According to the result in Theorem 1, it is easy to see that $\operatorname{rank}(\mathcal{H}(W^*)) \le T \cdot n_L \le N_{net}$. In other words, the rank of the Hessian at the global minima has the largest possible value of $T \cdot n_L$. When a specific FNN is constructed from scratch without insightful knowledge regarding the data, it is very likely that the Hessian is degenerate, i.e., gradient based algorithms can suffer significantly from slow convergence speed.

It is important to notice that the Hessian $\mathcal{H}(W^*)$ is neither diagonal nor block diagonal, which demotivates the existing approximation strategies for the Hessian in (Battiti, 1992; Wang & Lin, 1998). With our explicit characterisation of the Hessian at global minima, we propose to approximate the Hessian of $J$ at an arbitrary point $W$ with the structure shown in Eq. (8).
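The abstract stops short of spelling out the resulting update. As one plausible reading, the following NumPy sketch assembles the structured surrogate of Eq. (8) from per-sample blocks and solves for an approximate Newton direction. The blocks `Phis`, the matrix `Q` (taken here as the Hessian of $E$ at the output), and the added damping term, motivated by the possible rank deficiency noted in Remark 2, are our assumptions rather than the paper's stated algorithm.

```python
import numpy as np

def approx_newton_direction(grad, Phis, Q, damping=1e-3):
    """Assemble the structured surrogate of Eq. (8) and solve for a step.
    grad : flattened gradient of J, shape (N_net,)
    Phis : per-sample blocks Phi_i, each of shape (N_net, n_L)
    Q    : (n_L, n_L) Hessian of the error function E at the output"""
    n = grad.size
    H = sum(Phi @ Q @ Phi.T for Phi in Phis)      # sum_i Phi_i Q Phi_i^T
    # the structured part has rank at most T * n_L (cf. Remark 2), so a
    # small damping term keeps the linear system well-posed
    H = H + damping * np.eye(n)
    return np.linalg.solve(H, -grad)
```

The weights would then be updated as $W \leftarrow W + \alpha \, \Delta$, with a step size $\alpha$ as in the experiments of Section 4.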
(8)."}, {"section_index": "5", "section_name": "4 NUMERICAL EXPERIMENTS", "section_text": "1200 8 C 89 C Approximate Newton (1000 iterations) Classic BP (1000 iterations) 1000 Classic BP (19760 iterations) 80 X 3 800 14 (Mr 600 C 4 1 400 -2 200 3 0 0 131.4 262.8 394.2 525.6 657.0 788.4 919.8 1051.2 1182.6 1314 A Time (s)\n1200 89 Approximate Newton (1000 iterations) O CO Classic BP (1000 iterations) 1000 -Classic BP (19760 iterations) 800 /> (M)r 600 00 400 200 XX D A -3 A 0 4 A A 0 131.4 262.8 394.2 525.6 657.0 788.4 919.8 1051.21182.6 1314 A Time (s) A -3 -2 1 0 2 3 4\nFigure 1: Comparison of convergence in terms of cost function value (step size Q = 0.01)\nWe investigate performance of our proposed approximate Newton's (AN) algorithm on the four regions classification benchmark, as originally proposed in (Singhal & Wu1989). In R2 around the origin, we have a square area (-4, 4) (-4, 4), and three concentric circles with their radiuses being 1, 2, and 3. Four regions/classes are interlocked, nonconvex, as shown in Figure 1 (left). We draw randomly T = 1000 samples in the box for training, and specify the corresponding output to be the i-th basis vector in R4. We deploy an FNN architecture with two hidden layers, i.e., L = 3. In both hidden layer, there are 10 units each. Hence, we have no = 2, n1 = n2 = 10, and n3 = 4. All activation functions are chosen to be Sigmoid. Finally, the error function is an smooth approximation of the l1 norm as E(x) := l|x yll? + where we set = 10-6. We test both the classic BP algorithm and the AN algorithm. For running 1000 iterations, the BP algorithm took 61.1 sec., while the AN algorithm spent 1314.1 sec. On average, the running time for each iteration of AN was about 21.4 times as required for an iteration of BP. With the same data and the same random initialisation, we ran BP for 20760 iterations, which took the same amount of time as required for 1000 iterations of AN. As in Figure 1 (right), the first 1000 iterations of BP was highlighted in red with the remaining iterations being coloured in blue. The AN went up at the beginning, then smoothly decreased to the global minimal value, while the BP demonstrated strong oscillation towards the end. It is worth noticing that the prediction of the trained neural network natches exactly the label in the four region problem as the global min um was reached\nM. Gori and A. Tesi. On the problem of local minima in backpropagation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(1):76-86, 1992\nREFERENCES R. Battiti. First- and second-order methods for learning: Between steepest descent and newton's method. Neural Computation, 4(2):141-166, 1992. C. Charalambous. Conjugate gradient algorithm for efficient training of artificial neural networks.. IEE Proceedings G - Circuits, Devices and Systems, 139(3):301-310, 1992. T. Falas and A. G. Stafylopatis. The impact of the error function selection in neural network-based. classifiers. In Proceedings of the International Joint Conference on Neural Networks (IJCNN),. volume 3, pp. 1799-1804, 1999. X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks.. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. (AISTATS-10), volume 9, pp. 249-256, 2010. I. J. Goodfellow, O. Vinyals, and A. M. Saxe. Qualitatively characterizing neural network opti-. mization problems. Published at the 5th International Conference on Learning Representations. (ICLR). 
REFERENCES

R. Battiti. First- and second-order methods for learning: Between steepest descent and Newton's method. Neural Computation, 4(2):141-166, 1992.

C. Charalambous. Conjugate gradient algorithm for efficient training of artificial neural networks. IEE Proceedings G - Circuits, Devices and Systems, 139(3):301-310, 1992.

T. Falas and A. G. Stafylopatis. The impact of the error function selection in neural network-based classifiers. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), volume 3, pp. 1799-1804, 1999.

X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), volume 9, pp. 249-256, 2010.

I. J. Goodfellow, O. Vinyals, and A. M. Saxe. Qualitatively characterizing neural network optimization problems. In International Conference on Learning Representations (ICLR), arXiv:1412.6544, 2015.

M. Gori and A. Tesi. On the problem of local minima in backpropagation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(1):76-86, 1992.

K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, 1991.

K. Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems 29, pp. 586-594, 2016.

J. F. Kolen and J. B. Pollack. Backpropagation is sensitive to initial conditions. Complex Systems, 4(3):269-280, 1990.

Q. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Ng. On optimization methods for deep learning. In Proceedings of the International Conference on Machine Learning, 2011.

H. N. Mhaskar and C. A. Micchelli. How to choose an activation function. In Proceedings of the 6th International Conference on Neural Information Processing Systems, pp. 319-326, 1993.

S. Singhal and L. Wu. Training multilayer perceptrons with the extended Kalman algorithm. In Advances in Neural Information Processing Systems, pp. 133-140, 1989.

S. Sun, W. Chen, L. Wang, X. Liu, and T.-Y. Liu. On the depth of deep neural networks: A theoretical view. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pp. 2066-2072, 2016.

R. S. Sutton. Two problems with backpropagation and other steepest-descent learning procedures for networks. In Proceedings of the 8th Annual Conference of the Cognitive Science Society, pp. 823-831, 1986.

T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon. Accelerating the convergence of the back-propagation method. Biological Cybernetics, 59(4):257-263, 1988.

Y.-J. Wang and C.-T. Lin. A second-order learning algorithm for multilayer networks based on block Hessian matrix. Neural Networks, 11(9):1607-1622, 1998.

B. Widrow and M. A. Lehr. 30 years of adaptive neural networks: perceptron, madaline, and backpropagation. Proceedings of the IEEE, 78(9):1415-1442, 1990.

X.-H. Yu. Can backpropagation error surface not have local minima. IEEE Transactions on Neural Networks, 3(6):1019-1021, 1992.

X.-H. Yu and Guo-An Chen. On the local minima free condition of backpropagation learning. IEEE Transactions on Neural Networks, 6(5):1300-1303, 1995."}]